Sample records for camera lroc images

  1. LROC - Lunar Reconnaissance Orbiter Camera

    Microsoft Academic Search

    M. S. Robinson; E. Eliason; H. Hiesinger; B. L. Jolliff; A. McEwen; M. C. Malin; M. A. Ravine; P. C. Thomas; E. P. Turtle

    2009-01-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera

  2. LROC - Lunar Reconnaissance Orbiter Camera

    Microsoft Academic Search

    M. S. Robinson; E. Bowman-Cisneros; S. M. Brylow; E. Eliason; H. Hiesinger; B. L. Jolliff; A. S. McEwen; M. C. Malin; D. Roberts; P. C. Thomas; E. Turtle

    2006-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is designed to address two of the prime LRO measurement requirements. 1) Assess meter and smaller-scale features to facilitate safety analysis for potential lunar landing sites near polar resources, and elsewhere on the Moon. 2) Acquire multi-temporal synoptic imaging of the poles every orbit to characterize the polar illumination environment (100 m scale), identifying

  3. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Bowman-Cisneros, E.; Brylow, S. M.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A. S.; Malin, M. C.; Roberts, D.; Thomas, P. C.; Turtle, E.

    2006-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is designed to address two of the prime LRO measurement requirements. 1) Assess meter and smaller-scale features to facilitate safety analysis for potential lunar landing sites near polar resources, and elsewhere on the Moon. 2) Acquire multi-temporal synoptic imaging of the poles every orbit to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near-permanent illumination over a full lunar year. The LROC consists of two narrow-angle camera components (NACs) to provide 0.5-m scale panchromatic images over a 5-km swath, a wide-angle camera component (WAC) to provide images at a scale of 100 and 400 m in seven color bands over a 100-km swath, and a common Sequence and Compressor System (SCS). In addition to acquiring the two LRO prime measurement sets, LROC will return six other high-value datasets that support LRO goals, the Robotic Lunar Exploration Program (RLEP), and basic lunar science. These additional datasets include: 3) meter-scale mapping of regions of permanent or near-permanent illumination of polar massifs; 4) multiple co-registered observations of portions of potential landing sites and elsewhere for derivation of high-resolution topography through stereogrammetric and photometric stereo analyses; 5) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60-80°) favorable for morphologic interpretations; 7) sub-meter imaging of a variety of geologic units to characterize physical properties, variability of the regolith, and key science questions; and 8) meter-scale coverage overlapping with Apollo era Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972, to ascertain hazards for future surface operations and interplanetary travel.

  4. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. 
Of particular note is the variety of impact-induced morphologies preserved in a near-pristine state in and around kilometer-scale and larger young Copernican-age impact craters. These include: abundant evidence of impact melt with a variety of rheological properties, including coherent flows whose surface textures and planimetric properties reflect supersolidus (e.g., liquid melt) emplacement; blocks delicately perched on terraces and rims; and both large and small radial and circumferential ejecta patterns, extending out more than a crater diameter, that reflect ballistic emplacement and interaction with pre-existing topography and with topography created by earlier ejecta. Early efforts at reducing NAC stereo observations to topographic models show that spatial resolutions of 2.5 m to 5 m will be possible from the 50 km orbit. Systematic seven-color WAC observations will commence at the beginning of the primary mapping phase. A key goal of the LROC experiment is to characterize future exploration targets in cooperation with the NASA Constellation program. By the end of the commissioning phase, all fifty high-priority targets will have partial reconnaissance-mode coverage (0.5 m to 2 m per pixel).

  5. Lunar Reconnaissance Orbiter Camera (LROC) Instrument Overview

    Microsoft Academic Search

    M. S. Robinson; S. M. Brylow; M. Tschimmel; D. Humm; S. J. Lawrence; P. C. Thomas; B. W. Denevi; E. Bowman-Cisneros; J. Zerr; M. A. Ravine; M. A. Caplinger; F. T. Ghaemi; J. A. Schaffner; M. C. Malin; P. Mahanti; A. Bartels; J. Anderson; T. N. Tran; E. M. Eliason; A. S. McEwen; E. Turtle; B. L. Jolliff; H. Hiesinger

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that

  6. NASA's Lunar Reconnaissance Orbiter Cameras (LROC)

    Microsoft Academic Search

    M. Robinson; A. McEwen; E. Eliason; B. Jolliff; H. Hiesinger; M. Malin; P. Thomas; E. Turtle; S. Brylow

    2006-01-01

    The Lunar Reconnaissance Orbiter (LRO) mission is scheduled to launch in the fall of 2008 as part of NASA's Robotic Lunar Exploration Program and is the first spacecraft to be built as part of NASA's Vision for Space Exploration. The orbiter will be equipped with seven scientific instrument packages, one of which is LROC. The Lunar Reconnaissance Orbiter

  7. Regolith thickness estimation over Sinus Iridum using morphology of small craters from LROC images

    NASA Astrophysics Data System (ADS)

    Liu, T.; Fa, W.

    2013-09-01

    Regolith thickness over the Sinus Iridum region is estimated using the morphology and size-frequency distribution of small craters counted from Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images. Results show that regolith thickness for Sinus Iridum ranges from 2 m to more than 10 m, with a median value between 4.1 m and 6.1 m.

  8. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  9. LROC NAC Stereo Anaglyphs

    NASA Astrophysics Data System (ADS)

    Mattson, S.; McEwen, A. S.; Speyerer, E.; Robinson, M. S.

    2012-12-01

    The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) acquires high resolution (50 to 200 cm pixel scale) images of the Moon. In operation since June 2009, LROC NAC acquires geometric stereo pairs by rolling off-nadir on subsequent orbits. A new automated processing system currently in development will produce anaglyphs from most of the NAC geometric stereo pairs. An anaglyph is an image formed by placing one image from the stereo pair in the red channel, and the other image from the stereo pair in the green and blue channels, so that together with red-blue or red-cyan glasses, the 3D information in the pair can be readily viewed. These new image products will make qualitative interpretation of the lunar surface in 3D more accessible, without the need for intensive computational resources or special equipment. The LROC NAC is composed of two separate pushbroom CCD cameras (NAC L and R) aligned to increase the full swath width to 5 km from an altitude of 50 km. Development of the anaglyph processing system incorporates stereo viewing geometry, proper alignment of the NAC L and R frames, and optimal contrast normalization of the stereo pair to minimize extreme brightness differences, which can make stereo viewing difficult in an anaglyph. The LROC NAC anaglyph pipeline is based on a similar automated system developed for the HiRISE camera, on the Mars Reconnaissance Orbiter. Improved knowledge of camera pointing and spacecraft position allows for the automatic registration of the L and R frames by map projecting them to a polar stereographic projection. One half of the stereo pair must then be registered to the other so there is no offset in the vertical (y) direction. Stereo viewing depends on parallax only in the horizontal (x) direction. High resolution LROC NAC anaglyphs will be made available to the lunar science community and to the public on the LROC web site (http://lroc.sese.asu.edu).
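    The channel assignment described in this abstract is straightforward to reproduce. A minimal sketch with NumPy (the function name is illustrative, not part of the actual LROC pipeline, and the contrast normalization the abstract describes is omitted for brevity):

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build a red-cyan anaglyph from a co-registered stereo pair.

    The left image goes into the red channel; the right image fills the
    green and blue channels, matching the scheme described for LROC NAC
    anaglyphs. Inputs must be 2-D uint8 arrays already registered so that
    only horizontal (x) parallax remains.
    """
    if left.shape != right.shape:
        raise ValueError("stereo pair must share the same shape after registration")
    return np.dstack([left, right, right])  # shape (rows, cols, 3)
```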

  10. Investigating at the Moon With new Eyes: The Lunar Reconnaissance Orbiter Mission Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Hiesinger, H.; Robinson, M. S.; McEwen, A. S.; Turtle, E. P.; Eliason, E. M.; Jolliff, B. L.; Malin, M. C.; Thomas, P. C.

    The Lunar Reconnaissance Orbiter Mission Camera (LROC) H. Hiesinger (1,2), M.S. Robinson (3), A.S. McEwen (4), E.P. Turtle (4), E.M. Eliason (4), B.L. Jolliff (5), M.C. Malin (6), and P.C. Thomas (7) (1) Brown Univ., Dept. of Geological Sciences, Providence RI 02912, Harald_Hiesinger@brown.edu, (2) Westfaelische Wilhelms-University, (3) Northwestern Univ., (4) LPL, Univ. of Arizona, (5) Washington Univ., (6) Malin Space Science Systems, (7) Cornell Univ. The Lunar Reconnaissance Orbiter (LRO) mission is scheduled for launch in October 2008 as a first step to return humans to the Moon by 2018. The main goals of the Lunar Reconnaissance Orbiter Camera (LROC) are to: 1) assess meter and smaller- scale features for safety analyses for potential lunar landing sites near polar resources, and elsewhere on the Moon; and 2) acquire multi-temporal images of the poles to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near permanent illumination over a full lunar year. In addition, LROC will return six high-value datasets such as 1) meter-scale maps of regions of permanent or near permanent illumination of polar massifs; 2) high resolution topography through stereogrammetric and photometric stereo analyses for potential landing sites; 3) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 4) a global 100-m/pixel basemap with incidence angles (60-80 degree) favorable for morphologic interpretations; 5) images of a variety of geologic units at sub-meter resolution to investigate physical properties and regolith variability; and 6) meter-scale coverage overlapping with Apollo Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972, to estimate hazards for future surface operations. 
LROC consists of two narrow-angle cameras (NACs) which will provide 0.5-m scale panchromatic images over a 5-km swath, a wide-angle camera (WAC) to acquire images at about 100 m/pixel in seven color bands over a 100-km swath, and a common Sequence and Compressor System (SCS). Each NAC has a 700-mm-focal-length optic that images onto a 5000-pixel CCD line-array, providing a cross-track field-of-view (FOV) of 2.86 degree. The NAC readout noise is better than 100 e-, and the data are sampled at 12 bits. Its internal buffer holds 256 MB of uncompressed data, enough for a full-swath image 25-km long or a 2x2 binned image 100-km long. The WAC has two 6-mm-focal-length lenses imaging onto the same 1000 x 1000 pixel, electronically shuttered CCD area-array, one imaging in the visible/near IR, and the other in the UV. Each has a cross-track FOV of 90 degree. From the nominal 50-km orbit, the WAC will have a resolution of 100 m/pixel in the visible, and a swath width of ~100 km. The seven-band color capability of the WAC is achieved by color filters mounted directly over the detector, providing different sections of the CCD with different filters [1]. The readout noise is less than 40 e-, and, as with the NAC, pixel values are digitized to 12 bits and may be subsequently converted to 8-bit values. The total mass of the LROC system is about 12 kg; the total LROC power consumption averages 22 W (30 W peak). Assuming a downlink with lossless compression, LRO will produce a total of 20 TeraBytes (TB) of raw data. Production of higher-level data products will result in a total of 70 TB for Planetary Data System (PDS) archiving, 100 times larger than for any previous mission. [1] Malin et al., JGR, 106, 17651-17672, 2001.
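    The figures in this record can be cross-checked with a little arithmetic: at 0.5 m/pixel, a 25-km-long full-swath NAC image is 50,000 lines of 5000 pixels each, which fits the 256 MB buffer if one assumes roughly one byte per pixel (e.g., after conversion of the 12-bit samples to 8-bit values; the one-byte-per-pixel assumption is ours, not stated in the abstract):

```python
# Back-of-envelope check of the NAC buffer figure quoted above.
PIXEL_SCALE_M = 0.5      # NAC panchromatic pixel scale
SWATH_PIXELS = 5000      # CCD line-array length
IMAGE_LENGTH_M = 25_000  # "full-swath image 25-km long"
BYTES_PER_PIXEL = 1      # assumed: 12-bit samples companded to 8 bits

lines = IMAGE_LENGTH_M / PIXEL_SCALE_M           # 50,000 lines
total_pixels = lines * SWATH_PIXELS              # 2.5e8 pixels
image_mb = total_pixels * BYTES_PER_PIXEL / 1e6  # ~250 MB, under the 256 MB buffer
```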

  11. Lunar Regolith Depths from LROC Images

    NASA Astrophysics Data System (ADS)

    Bart, Gwendolyn D.; Nickerson, R.; Lawder, M.

    2010-10-01

    Since the 1960s, most lunar photography and science covered the equatorial near side where the Apollo spacecraft landed. As a result, our understanding of lunar regolith depth was also limited to that region. Oberbeck and Quaide (JGR 1968) found regolith depths for the lunar near side: 3 m (Oceanus Procellarum), 16 m (Hipparchus), and 1-10 m at the Surveyor landing sites. The Lunar Reconnaissance Orbiter Camera recently released high resolution images that sample regions all around the lunar globe. We examined a selection of these images across the lunar globe and determined a regolith depth for each area. To do this, we measured the ratio of the diameter of the flat floor to the diameter of the crater, and used it to calculate the regolith thickness using the method of Quaide and Oberbeck (JGR 1968). Analysis of the global distribution of lunar regolith depths will provide new insights into the evolution of the lunar surface and the frequency, distribution, and effect of impacts.
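    The flat-floor geometry behind the Quaide and Oberbeck method can be sketched as follows; the interior wall slope of 31° (near the lunar angle of repose) is an assumed illustrative value, not taken from this abstract:

```python
import math

def regolith_thickness_m(rim_diameter_m: float, floor_diameter_m: float,
                         wall_slope_deg: float = 31.0) -> float:
    """Illustrative flat-floor estimate of regolith thickness.

    A small crater that punches through regolith into bedrock develops a
    flat floor: the wall descends inward at roughly the angle of repose
    from the rim (diameter D) to the bedrock interface (floor diameter Df),
    so the depth to that interface is t = tan(slope) * (D - Df) / 2.
    """
    if floor_diameter_m > rim_diameter_m:
        raise ValueError("floor diameter cannot exceed rim diameter")
    half_annulus = (rim_diameter_m - floor_diameter_m) / 2.0
    return math.tan(math.radians(wall_slope_deg)) * half_annulus
```

    For example, a 100 m crater with a 60 m flat floor gives roughly 12 m, within the depth range reported in the abstract above.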

  12. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three-month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features in order to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquisition of multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near-permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high-resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60°-80°) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; and 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972. LROC allows us to determine the recent impact rate of bolides in the size range of 0.5 to 10 meters, which is currently not well known. Determining the impact rate at these sizes enables engineering remediation measures for future surface operations and interplanetary travel. The WAC has imaged nearly the entire Moon in seven wavelengths. A preliminary global WAC stereo-based topographic model is in preparation [1] and global color processing is underway [2]. As the mission progresses, repeat global coverage will be obtained as lighting conditions change, providing a robust photometric dataset. The NACs are revealing a wealth of morphologic features at the meter scale, providing the engineering and science constraints needed to support future lunar exploration. All of the Apollo landing sites have been imaged, as well as the majority of robotic landing and impact sites. Through the use of off-nadir slews, a collection of stereo pairs is being acquired that enables 5-m scale topographic mapping [3-7]. Impact morphologies (terraces, impact melt, rays, etc.) are preserved in exquisite detail at all Copernican craters and are enabling new studies of impact mechanics and crater size-frequency distribution measurements [8-12]. Other topical studies, including, for example, lunar pyroclastics, domes, and tectonics, are underway [e.g., 10-17]. The first PDS data release of LROC data will be in March 2010, and will include all images from the commissioning phase and the first 3 months of the mapping phase. [1] Scholten et al. (2010) 41st LPSC, #2111; [2] Denevi et al. (2010a) 41st LPSC, #2263; [3] Beyer et al. (2010) 41st LPSC, #2678; [4] Archinal et al. (2010) 41st LPSC, #2609; [5] Mattson et al. (2010) 41st LPSC, #1871; [6] Tran et al. (2010) 41st LPSC, #2515; [7] Oberst et al. (2010) 41st LPSC, #2051; [8] Bray et al. (2010) 41st LPSC, #2371; [9] Denevi et al. (2010b) 41st LPSC, #2582; [10] Hiesinger et al.
(2010a) 41st LPSC, #2278; [11] Hiesinger et al. (2010b) 41st LPSC, #2304; [12] van der Bogert et al. (2010) 41st LPSC, #2165;

  13. Marius Hills: Surface Roughness from LROC and Mini-RF

    Microsoft Academic Search

    S. Lawrence; B. R. Hawke; B. Bussey; J. D. Stopar; B. Denevi; M. Robinson; T. Tran

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Team is collecting hundreds of high-resolution (0.5 m/pixel) Narrow Angle Camera (NAC) images of lunar volcanic constructs (domes, "cones", and associated features) [1,2]. Marius Hills represents the largest concentration of volcanic features on the Moon and is a high-priority target for future exploration [3,4]. NAC images of this region provide new insights into the

  14. LROC Observations of Geologic Features in the Marius Hills

    Microsoft Academic Search

    S. Lawrence; J. D. Stopar; R. B. Hawke; B. W. Denevi; M. S. Robinson; T. Giguere; B. L. Jolliff

    2009-01-01

    Lunar volcanic cones, domes, and their associated geologic features are important objects of study for the LROC science team because they represent possible volcanic endmembers that may yield important insights into the history of lunar volcanism and are potential sources of lunar resources. Several hundred domes, cones, and associated volcanic features are currently targeted for high-resolution LROC Narrow Angle Camera

  15. Multi camera image tracking

    Microsoft Academic Search

    James Black; Tim Ellis

    2006-01-01

    This paper presents a method for multi-camera image tracking in the context of image surveillance. The approach differs from most methods in that we exploit multiple camera views to resolve object occlusion. Moving objects are detected by using background subtraction. Viewpoint correspondence between the detected objects is then established by using the ground plane homography constraint. The Kalman Filter is

  16. Apollo 17 Landing Site: A Cartographic Investigation of the Taurus-Littrow Valley Based on LROC NAC Imagery

    NASA Astrophysics Data System (ADS)

    Haase, I.; Wählisch, M.; Gläser, P.; Oberst, J.; Robinson, M. S.

    2014-04-01

    A Digital Terrain Model (DTM) of the Taurus-Littrow Valley with a 1.5 m/pixel resolution was derived from high resolution stereo images of the Lunar Reconnaissance Orbiter Narrow Angle Camera (LROC NAC) [1]. It was used to create a controlled LROC NAC ortho-mosaic with a pixel size of 0.5 m on the ground. Covering the entire Apollo 17 exploration site, it allows for determining accurate astronaut and surface feature positions along the astronauts' traverses when integrating historic Apollo surface photography to our analysis.

  17. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  18. Mapping the Apollo 17 Astronauts' Positions Based on LROC Data and Apollo Surface Photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Gläser, P.; Wählisch, M.; Robinson, M. S.

    2011-10-01

    The positions from where the Apollo 17 astronauts recorded panoramic image series, e.g. at the so-called "traverse stations", were precisely determined using ortho-images (0.5 m/pxl) as well as Digital Terrain Models (DTM) (1.5 m/pxl and 100 m/pxl) derived from Lunar Reconnaissance Orbiter Camera (LROC) data. Features imaged in the Apollo panoramas were identified in LROC ortho-images. Least-squares techniques were applied to angles measured in the panoramas to determine the astronaut's position to within the ortho-image pixel. The result of our investigation of Traverse Station 1 in the north-west of Steno Crater is presented.
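    The least-squares angle technique can be illustrated with a simple planar resection: each azimuth measured in a panorama constrains the known landmark to lie along that bearing from the unknown camera position, a condition that is linear in (x, y). This sketch is our simplification (function name and the plain linear formulation are illustrative; the actual study used a more careful adjustment):

```python
import numpy as np

def resect(landmarks, azimuths_rad):
    """Estimate an observer position from azimuths to known landmarks.

    With azimuth t measured clockwise from +y (north), a landmark at
    (xi, yi) seen at bearing t from the observer (x, y) satisfies
    (xi - x)*sin(t) parallel geometry, i.e.
        (xi - x)*cos(t) - (yi - y)*sin(t) = 0,
    which rearranges to the linear equation
        x*cos(t) - y*sin(t) = xi*cos(t) - yi*sin(t).
    Three or more bearings give an overdetermined system solved in a
    least-squares sense.
    """
    lm = np.asarray(landmarks, dtype=float)
    t = np.asarray(azimuths_rad, dtype=float)
    A = np.column_stack([np.cos(t), -np.sin(t)])
    b = lm[:, 0] * np.cos(t) - lm[:, 1] * np.sin(t)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy
```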

  19. Exploring the Moon with LROC-NAC Stereo Anaglyphs

    NASA Astrophysics Data System (ADS)

    Mattson, S.; McEwen, A. S.; Robinson, M. S.; Speyerer, E.; Archinal, B.

    2012-09-01

    The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) operating on the Lunar Reconnaissance Orbiter (LRO), has returned over 500,000 high resolution images of the surface of the Moon since 2009 [1]. The NAC acquires geometric stereo image pairs of the same surface target on subsequent orbits by rolling the spacecraft off-nadir to achieve stereo convergence. Stereo pairs are generally acquired close in time (2 to 4 hrs), to minimize photometric differences. An anaglyph is a qualitative stereo visualization product formed by putting one image from the stereo pair in the red channel, and the other image in the blue and green channels, so that together the pair can be viewed in 3D using red-blue or red-cyan glasses. LROC NAC anaglyphs are produced automatically, so the stereo information is readily interpretable, in a qualitative sense, without the need for intensive computational and personnel resources, such as is required to make digital terrain models (DTM).

  20. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.
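    In the simplest non-relativistic picture, the reconstruction described here reduces to momentum conservation with the 3He target at rest: the incident neutron momentum is the vector sum of the proton and triton momenta. A hedged sketch (the function name and the non-relativistic treatment are our simplification, not the instrument's actual algorithm):

```python
import numpy as np

M_N = 939.565  # neutron rest mass in MeV/c^2

def reconstruct_neutron(p_proton_mevc, p_triton_mevc):
    """Recover incident neutron direction and kinetic energy.

    Assumes the 3He target nucleus is at rest, so momentum conservation
    in 3He(n,p)3H gives p_n = p_p + p_t. Kinetic energy uses the
    non-relativistic relation T = |p|^2 / (2 m), adequate for MeV-scale
    neutrons. Momenta are 3-vectors in MeV/c.
    """
    p_n = np.asarray(p_proton_mevc, dtype=float) + np.asarray(p_triton_mevc, dtype=float)
    direction = p_n / np.linalg.norm(p_n)
    kinetic_energy_mev = np.dot(p_n, p_n) / (2.0 * M_N)
    return direction, kinetic_energy_mev
```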

  1. LRO Camera Imaging of Constellation Sites

    NASA Astrophysics Data System (ADS)

    Gruener, J.; Jolliff, B. L.; Lawrence, S.; Robinson, M. S.; Plescia, J. B.; Wiseman, S. M.; Li, R.; Archinal, B. A.; Howington-Kraus, A. E.

    2009-12-01

    One of the top priorities for Lunar Reconnaissance Orbiter Camera (LROC) imaging during the "exploration" phase of the mission is thorough coverage of 50 sites selected to represent a wide variety of terrain types and geologic features that are of interest for human exploration. These sites, which are broadly distributed around the Moon and include locations at or near both poles, will provide the Constellation Program with data for a set of targets that represent a diversity of scientific and resource opportunities, thus forming a basis for planning for scientific exploration, resource development, and mission operations including traverse and habitation zone planning. Identification of the Constellation targets is not intended to be a site-selection activity. Sites include volcanic terrains (surfaces with young and old basalt flows, pyroclastic deposits, vents, fissures, domes, low shields, rilles, wrinkle ridges, and lava tubes), impact craters and basins (crater floors, central peaks, terraces and walls; impact-melt and ejecta deposits, basin ring structures; and antipodal terrain), and contacts of geologic features in areas of complex geology. Sites at the poles represent different lighting conditions and include craters with areas of permanent shadow. Sites were also chosen that represent typical feldspathic highlands terrain, areas in the highlands with anomalous compositions, and unusual features such as magnetic anomalies. These sites were reviewed by the Lunar Exploration Analysis Group (LEAG). These sites all have considerable scientific and exploration interest and were derived from previous studies of potential lunar landing sites, supplemented with areas that capitalize on discoveries from recent orbital missions. Each site consists of nested regions of interest (ROI), including 10×10 km, 20×20 km, and 40×40 km areas. 
Within the 10×10 and 20×20 ROIs, the goal is to compile a set of narrow-angle-camera (NAC) observations for a controlled mosaic, photometric and geometric stereo, and images taken at low and high sun to enhance morphology and albedo, respectively. These data will provide the basis for topographic maps, digital elevation models, and slope and boulder hazard maps that could be used to establish landing or habitation zones. Within the 40×40 ROIs, images will be taken to achieve the best possible high-resolution mosaics. All ROIs will have wide-angle-camera context images covering the sites and surrounding areas. At the time of writing (prior to the end of the LRO commissioning phase), over 500 individual NAC frames have been acquired for 47 of the 50 sites. Because of the polar orbit, the majority of repeat coverage occurs for the polar and high latitude sites. Analysis of the environment for several representative Constellation site ROIs will be presented.

  2. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate (approximately 0.4 mm resolution) 3-D tracking of charged particles. The incident directions of fast neutrons, E(sub N) > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.
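The reconstruction step described above follows directly from conservation of momentum and energy. A minimal sketch, assuming non-relativistic kinematics and a 3He(n,p)3H Q-value of ~0.764 MeV (both assumptions on my part, not details given in the abstract; the function name is illustrative):

```python
import numpy as np

Q_MEV = 0.764  # assumed Q-value (energy released) of the 3He(n,p)3H reaction

def reconstruct_neutron(p_proton, p_triton, E_proton, E_triton):
    """Return (unit direction, kinetic energy in MeV) of the incident neutron.

    p_proton, p_triton: 3-vectors of fragment momenta (any consistent units);
    E_proton, E_triton: fragment kinetic energies in MeV.
    """
    # Momentum conservation: the neutron momentum is the vector sum of the fragments'.
    p_n = np.asarray(p_proton, float) + np.asarray(p_triton, float)
    direction = p_n / np.linalg.norm(p_n)
    # Energy conservation: fragment energies include the reaction's released energy.
    E_n = E_proton + E_triton - Q_MEV
    return direction, E_n
```

In practice the fragment tracks come from the 3-DTI's ~0.4 mm 3-D tracking; this sketch only shows how direction and energy fall out of the two conservation laws.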

  3. LROC Observations of Geologic Features in the Marius Hills

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Stopar, J. D.; Hawke, R. B.; Denevi, B. W.; Robinson, M. S.; Giguere, T.; Jolliff, B. L.

    2009-12-01

    Lunar volcanic cones, domes, and their associated geologic features are important objects of study for the LROC science team because they represent possible volcanic endmembers that may yield important insights into the history of lunar volcanism and are potential sources of lunar resources. Several hundred domes, cones, and associated volcanic features are currently targeted for high-resolution LROC Narrow Angle Camera (NAC) imagery [1]. The Marius Hills, located in Oceanus Procellarum (centered at ~13.4°N, 55.4°W), represent the largest concentration of these volcanic features on the Moon, including sinuous rilles, volcanic cones, domes, and depressions [e.g., 2-7]. The Marius region is thus a high priority for future human lunar exploration, as signified by its inclusion in the Project Constellation list of notional future human lunar exploration sites [8], and will be an intense focus of interest for LROC science investigations. Previous studies of the Marius Hills have utilized telescopic, Lunar Orbiter, Apollo, and Clementine imagery to study the morphology and composition of the volcanic features in the region. Complementary LROC studies of the Marius region will focus on high-resolution NAC images of specific features for studies of morphology (including flow fronts, dome/cone structure, and possible layering) and topography (using stereo imagery). Preliminary studies of the new high-resolution images of the Marius Hills region reveal small-scale features in the sinuous rilles, including possible outcrops of bedrock and lobate lava flows from the domes. The observed Marius Hills are characterized by rough surface textures, including the presence of large boulders at the summits (~3-5 m diameter), which is consistent with the radar-derived conclusions of [9]. 
Future investigations will involve analysis of LROC stereo photoclinometric products and coordinating NAC images with the multispectral images collected by the LROC WAC, especially the ultraviolet data, to enable measurements of color variations within and amongst deposits and provide possible compositional insights, including the location of possibly related pyroclastic deposits. References: [1] J. D. Stopar et al. (2009), LRO Science Targeting Meeting, Abs. 6039 [2] Greeley R (1971) Moon, 3, 289-314 [3] Guest J. E. (1971) Geol. and Phys. of the Moon, p. 41-53. [4] McCauley J. F. (1967) USGS Geologic Atlas of the Moon, Sheet I-491 [5] Weitz C. M. and Head J. W. (1999) JGR, 104, 18933-18956 [6] Heather D. J. et al. (2003) JGR, doi:10.1029/2002JE001938 [7] Whitford-Stark, J. L., and J. W. Head (1977) Proc. LSC 8th, 2705-2724 [8] Gruener J. and Joosten B. K. (2009) LRO Science Targeting Meeting, Abs. 6036 [9] Campbell B. A. et al. (2009) JGR, doi:10.1029/2008JE003253.

  4. Calibration of the Lunar Reconnaissance Orbiter Camera

    Microsoft Academic Search

    M. Tschimmel; M. S. Robinson; D. C. Humm; B. W. Denevi; S. J. Lawrence; S. Brylow; M. Ravine; T. Ghaemi

    2008-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with

  5. Computer-Assisted Detection of Collapse Pits in LROC NAC Images

    NASA Astrophysics Data System (ADS)

    Wagner, R. V.; Robinson, M. S.

    2012-12-01

    Pits in mare basalts and impact melt deposits provide unique environments for human shelters and preservation of geologic information. Due to their steep walls, pits are most distinguishable when the Sun is high (pit walls are casting shadows and impact crater walls are not). Because of the large number of NAC images acquired every day (>350), each typically with 5000 samples and 52,224 lines, it is not feasible to carefully search each image manually, so we developed a shadow detection algorithm (Pitscan) that analyzes an image in thirty seconds. It locates blocks of pixels that are below a digital number (DN) cutoff value, indicating that the block of pixels is "in shadow", and then runs a DN profile in the direction of solar lighting, comparing average DN values of the up-Sun and down-Sun sides. If the up-Sun average DN is higher than the down-Sun average, the shadow is assumed to be from a positive relief feature, and ignored. Otherwise, Pitscan saves a 200 x 200 pixel sub-image for later manual review. The algorithm currently generates ~150 false positives for each successful pit identification. This number would be unacceptable for an algorithm designed to catalog a common feature, but since the logic is merely intended to assist humans in locating an unusual type of feature, the false alarm rate is acceptable, and the current version allows a human to effectively check 10,000 NAC images for pits (over 2500 gigapixels) per hour. The false negative rate is not yet known; however, Pitscan detected every pit in a test on a small subset of the images known to contain pits. Pitscan is only effective when the Sun is within 50° of the zenith. When the Sun is closer to the horizon, crater walls often cast shadows, resulting in unacceptable numbers of false positives. Due to the Sun angle limit, only regions within 50° latitude of the equator are searchable. To date, 25.42% of the Moon has been imaged within this constraint. 
Early versions of Pitscan found more than 150 small (average diameter 15 m) pits in impact melt deposits of Copernican craters [1]. More recently, improvements to the algorithm revealed two new large mare pits, similar to the three pits discovered in Kaguya images [2]. One is in Schlüter crater, a mare-filled crater near Orientale basin, with a 20 x 40 m opening, approximately 60 m deep. The second new pit is in Lacus Mortis (44.96°N, 25.61°E), in a tectonically complex region west of Burg crater. This pit is the largest mare pit found to date, with an opening approximately 100 x 150 m, and a floor more than 90 m below the surrounding terrain. Most interesting from an exploration point of view is the fact that the east wall appears to have collapsed, leaving a relatively smooth ~22° slope from the surrounding mare down to the pit floor. Computer-assisted feature detection is an effective method of locating rare features in the extremely large high-resolution NAC dataset. Pitscan enabled the discovery of unknown collapse pits both in the mare and in the highlands. These pits are an important resource for future surface exploration, both by providing access to pristine cross-sections of the near-surface and by providing radiation and micrometeorite shielding for human outposts. [1] Wagner, R.V. et al. (2012), LPSC XLIII, #2266 [2] Haruyama, J. et al. (2010), LPSC XLI, #1285
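The up-Sun/down-Sun test at the heart of Pitscan can be sketched in a few lines. This is a toy illustration, not the LROC team's implementation: the function name, single-pixel probes (instead of averaged DN profiles), and thresholds are all placeholders.

```python
import numpy as np

def pit_candidates(img, dn_cutoff, sun_step):
    """Return coordinates of shadow pixels that may belong to pits.

    img: 2D array of DN values. sun_step: (dy, dx) integer pixel offset
    pointing toward the Sun. A shadow pixel is kept only if its up-Sun
    side is darker than its down-Sun side; a shadow cast by a positive
    relief feature (boulder, crater rim) has a brighter up-Sun side and
    is discarded.
    """
    h, w = img.shape
    dy, dx = sun_step
    hits = []
    for y, x in zip(*np.nonzero(img < dn_cutoff)):   # pixels "in shadow"
        uy, ux = y + dy, x + dx                       # up-Sun probe
        ly, lx = y - dy, x - dx                       # down-Sun probe
        if 0 <= uy < h and 0 <= ux < w and 0 <= ly < h and 0 <= lx < w:
            if img[uy, ux] < img[ly, lx]:             # up-Sun darker: keep
                hits.append((int(y), int(x)))
    return hits
```

A real implementation would group shadow pixels into blocks and average DN over a profile, then crop a 200 x 200 pixel sub-image around each hit for manual review.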

  6. Lunar Reconnaissance Orbiter Camera Narrow Angle Cameras: Laboratory and Initial Flight Calibration

    Microsoft Academic Search

    D. C. Humm; M. Tschimmel; B. W. Denevi; S. Lawrence; P. Mahanti; T. N. Tran; P. C. Thomas; E. Eliason; M. S. Robinson

    2009-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) has two identical Narrow Angle Cameras (NACs). Each NAC is a monochrome pushbroom scanner, providing images with a pixel scale of 50 cm from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging

  7. Impact melt volume estimates in small-to-medium sized craters on the Moon from the Lunar Orbiter Laser Altimeter (LOLA) and Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Barnouin, O. S.; Seelos, K. D.; McGovern, A.; Denevi, B. W.; Zuber, M. T.; Smith, D. E.; Robinson, M. S.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.

    2010-12-01

    Direct measurements of the volume of melt generated during cratering have only been possible using data acquired at terrestrial craters. These measurements are usually the result of areal mapping efforts, drill core investigations, and assessments of the amount of erosion a crater and its melt sheet might have undergone. Good data for melt volume are needed to further test and validate both analytical and numerical models of melt generation on terrestrial planets, whose results can vary by as much as a factor of 10 for identical impact conditions. Such models are used to provide estimates of the depth of origin of surface features (e.g., central peaks and rings) seen within craters and could influence the interpretations of their diameter-to-depth relationships. For example, high-velocity impacts (>30 km/s) on Mercury are expected to produce significant melt volumes, which could influence crater aspect ratio. The Lunar Reconnaissance Orbiter has now returned a wealth of new data, including those from small-to-medium sized fresh craters on the Moon (diameters ≳1 km). LROC observations and LOLA altimetry (spatial sampling ~56 m, vertical precision ~10 cm) are of such good quality that additional new melt volume estimates can be obtained for many of these craters. Using geological maps from the Apollo era, we have identified over 100 fresh crater candidates for investigation. Preliminary results indicate that melt volumes can vary significantly for given crater sizes, sometimes even exceeding estimates from current numerical and analytical models in the literature for impacts on the Moon. The broad range of observed melt volumes might be due to local variations in the target properties (including density, composition and porosity), projectile speeds and possibly projectile properties, the range of which the theoretical models do not typically consider.
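At its core, a volume estimate from gridded topography reduces to summing depth over area. A toy sketch of the idea, assuming a level reference surface (real analyses fit the rim or pre-impact surface rather than using a constant elevation):

```python
import numpy as np

def depression_volume(dem, ref_elev, pixel_area):
    """Estimate the volume of terrain lying below a reference elevation.

    dem: 2D array of elevations (m); ref_elev: reference elevation (m),
    e.g. a fitted pre-impact surface level; pixel_area: area of one DEM
    cell (m^2). Returns volume in m^3.
    """
    depth = ref_elev - dem          # positive where terrain is below reference
    return float(np.sum(depth[depth > 0]) * pixel_area)
```

The same summation, applied between a mapped melt-sheet surface and an estimated crater floor, gives a melt volume; the hard part in practice is constraining those two surfaces, not the integration.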

  8. Apogee Imaging Systems Camera Installation Guide

    E-print Network

    Kleinfeld, David

    Apogee Imaging Systems Camera Installation Guide, Version 1.6. Disclaimer: Apogee Imaging Systems, Inc. assumes no liability for the use of this product. Specifications in this document are subject to change without notice. Support: The Apogee Imaging Systems Camera Installation Guide

  9. Morphological Analysis of Lunar Lobate Scarps Using LROC NAC and LOLA Data

    NASA Astrophysics Data System (ADS)

    Banks, M. E.; Watters, T. R.; Robinson, M. S.; Tornabene, L. L.; Tran, T.; Ojha, L.

    2011-10-01

    Lobate scarps on the Moon are relatively small-scale tectonic landforms observed in mare basalts and, more commonly, in highland material [1-4]. These scarps are the surface expression of thrust faults, and are the most common tectonic landform on the lunar farside [1-4]. Prior to Lunar Reconnaissance Orbiter (LRO) observations, lobate scarps were largely detected only in equatorial regions because of limited Apollo Panoramic Camera and high-resolution Lunar Orbiter coverage with optimum lighting geometry [1-3]. Previous measurements of the relief of lobate scarps were made for 9 low-latitude scarps (<±20°), and range from ~6 to 80 m (mean relief of ~32 m) [1]. However, the relief of these scarps was primarily determined from shadow measurements with limited accuracy from Apollo-era photography. We present the results of a detailed characterization of the relief and morphology of a larger sampling of the population of lobate scarps. Outstanding questions include: What is the range of maximum relief of the lobate scarps? Is their size and structural relief consistent with estimates of the global contractional strain? What is the range of horizontal shortening expressed by lunar scarps, and how does this range compare with that found for planetary lobate scarps? Lunar Reconnaissance Orbiter Camera (LROC) images and Lunar Orbiter Laser Altimeter (LOLA) ranging enable detection and detailed morphological analysis of lobate scarps at all latitudes. To date, previously undetected scarps have been identified in LROC imagery in 75 different locations, over 20 of which occur at latitudes greater than ±60° [5-6]. LROC stereo-derived digital terrain models (DTMs) and LOLA data are used to measure the relief and characterize the morphology of 26 previously known (n = 8) and newly detected (n = 18) lobate scarps. Lunar examples are compared to lobate scarps on Mars, Mercury, and 433 Eros (Hinks Dorsum).

  10. Height-to-diameter ratios of moon rocks from analysis of Lunokhod-1 and -2 and Apollo 11-17 panoramas and LROC NAC images

    NASA Astrophysics Data System (ADS)

    Demidov, N. E.; Basilevsky, A. T.

    2014-09-01

    An analysis is performed of 91 panoramic photographs taken by Lunokhod-1 and -2, 17 panoramic images composed of photographs taken by Apollo 11-15 astronauts, and six LROC NAC photographs. The results are used to measure the height-to-visible-diameter (h/d) and height-to-maximum-diameter (h/D) ratios for lunar rocks at three highland and three mare sites on the Moon. The average h/d and h/D for the six sites are found to be indistinguishable at a significance level of 95%. Therefore, our estimates for the average h/d = 0.6 ± 0.03 and h/D = 0.54 ± 0.03 on the basis of 445 rocks are applicable for the entire Moon's surface. Rounding off, an h/D ratio of ~0.5 is suggested for engineering models of the lunar surface. The ratios between the long, medium, and short axes of the lunar rocks are found to be similar to those obtained in high-velocity impact experiments for different materials. It is concluded, therefore, that the degree of penetration of the studied lunar rocks into the regolith is negligible, and micrometeorite abrasion and other factors do not dominate in the evolution of the shape of lunar rocks.
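The quoted uncertainties (e.g. h/d = 0.6 ± 0.03 from 445 rocks) have the form of a confidence interval on a sample mean. A minimal sketch of that computation under a normal approximation (the function name is illustrative; the paper does not specify its exact statistical procedure):

```python
import math

def mean_with_ci(values, z=1.96):
    """Sample mean and half-width of an approximate 95% confidence interval.

    Uses the normal approximation: half-width = z * s / sqrt(n), where s is
    the sample standard deviation. Adequate for large n (such as 445 rocks).
    """
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return mean, half_width
```

Checking whether two site means are "indistinguishable at the 95% level" then amounts to asking whether their intervals (or a two-sample test on the same quantities) overlap.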

  11. Digital Elevation Models and Derived Products from LROC NAC Stereo Observations

    NASA Astrophysics Data System (ADS)

    Burns, K. N.; Speyerer, E. J.; Robinson, M. S.; Tran, T.; Rosiek, M. R.; Archinal, B. A.; Howington-Kraus, E.; the LROC Science Team

    2012-08-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to acquire stereo observations with the Narrow Angle Camera (NAC) to enable production of high resolution digital elevation models (DEMs). This work describes the processes and techniques used in reducing the NAC stereo observations to DEMs through a combination of USGS Integrated Software for Imagers and Spectrometers (ISIS) and SOCET SET® from BAE Systems by a team at Arizona State University (ASU). LROC Science Operations Center personnel have thus far reduced more than 130 stereo observations to DEMs for 11 Constellation Program (CxP) sites and 53 other regions of scientific interest. The NAC DEM spatial sampling is typically 2 meters, and the vertical precision is 1-2 meters. Such high resolution provides the three-dimensional view of the lunar surface required for site selection, hazard avoidance and planning traverses that minimize resource consumption. In addition to exploration analysis, geologists can measure parameters such as elevation, slope, and volume to place constraints on composition and geologic history. The NAC DEMs are released and archived through NASA's Planetary Data System.

  12. Marius Hills: Surface Roughness from LROC and Mini-RF

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Hawke, B. R.; Bussey, B.; Stopar, J. D.; Denevi, B.; Robinson, M.; Tran, T.

    2010-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Team is collecting hundreds of high-resolution (0.5 m/pixel) Narrow Angle Camera (NAC) images of lunar volcanic constructs (domes, “cones”, and associated features) [1,2]. Marius Hills represents the largest concentration of volcanic features on the Moon and is a high-priority target for future exploration [3,4]. NAC images of this region provide new insights into the morphology and geology of specific features at the meter scale, including lava flow fronts, tectonic features, layers, and topography (using LROC stereo imagery) [2]. Here, we report initial results from Mini-RF and LROC collaborative studies of the Marius Hills. Mini-RF uses a hybrid polarimetric architecture to measure surface backscatter characteristics and can acquire data in one of two radar bands, S (12 cm) or X (4 cm) [5]. The spatial resolution of Mini-RF (15 m/pixel) enables correlation of features observed in NAC images to Mini-RF data. Mini-RF S-Band zoom-mode data and daughter products, such as circular polarization ratio (CPR), were directly compared to NAC images. Mini-RF S-Band radar images reveal enhanced radar backscatter associated with volcanic constructs in the Marius Hills region. Mini-RF data show that Marius Hills volcanic constructs have enhanced average CPR values (0.5-0.7) compared to the CPR values of the surrounding mare (~0.4). This result is consistent with the conclusions of [6], and implies that the lava flows comprising the domes in this region are blocky. To quantify the surface roughness [e.g., 6,7] block populations associated with specific geologic features in the Marius Hills region are being digitized from NAC images. Only blocks that can be unambiguously identified (>1 m diameter) are included in the digitization process, producing counts and size estimates of the block population. High block abundances occur mainly at the distal ends of lava flows. 
The average size of these blocks is 9 m, and 50% of observed blocks are between 9-12 m in diameter. These blocks are not associated with impact craters and have at most a thin layer of regolith. There is minimal visible evidence for downslope movement. Relatively high block abundances are also seen on the summits of steep-sided asymmetrical positive relief features (“cones”) atop low-sided domes. Digitization efforts will continue as we study the block populations of different geologic features in the Marius Hills region and correlate the results with Mini-RF data, which will provide new information about the emplacement of volcanic features in the region. [1] J.D. Stopar et al. (2009) LPI Contribution 1483, 93-94. [2] S.J. Lawrence et al. (2010) LPSC 41 #1906. [2] S.J. Lawrence et al. (2010) LPSC 41 #2689. [3] C. Coombs & B.R. Hawke (1992) 2nd Proc. Lun. Bases & Space Act. 21st Cent., 219-229. [4] J. Gruener and B. Joosten (2009) LPI Contributions 1483, 50-51. [5] D.B.J. Bussey et al. (2010) LPSC 41 #2319. [6] B.A. Campbell et al. (2009) JGR-Planets, 114, 01001. [7] S.W. Anderson et al. (1998) GSA Bull, 110, 1258-1267.

  13. Matching image color from different cameras

    Microsoft Academic Search

    Mark D. Fairchild; David R. Wyble; Garrett M. Johnson

    2008-01-01

    Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric characterization? Two cameras were characterized, these characterizations were implemented on a variety of images, and the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color

  14. Matching image color from different cameras

    NASA Astrophysics Data System (ADS)

    Fairchild, Mark D.; Wyble, David R.; Johnson, Garrett M.

    2008-01-01

    Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric characterization? Two cameras were characterized, these characterizations were implemented on a variety of images, and the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral respectively) and 5.9 and 6.9 for the Nikon. The median differences between cameras were 2.8 and 3.4 for the chart and spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for the three experiments are 64%, 64%, and 55% correct respectively. Careful and simple colorimetric characterization of digital SLR cameras can result in visually equivalent color reproduction.
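The color-difference medians reported above are CIELAB ΔE*ab values (CIE 1976), i.e. Euclidean distances in Lab space. A minimal sketch of the metric and of taking a median over chart patches (helper names are illustrative):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance between two Lab triples."""
    return math.dist(lab1, lab2)

def median_delta_e(patches_a, patches_b):
    """Median ΔE*ab over corresponding patch pairs (e.g. a ColorChecker SG)."""
    ds = sorted(delta_e_ab(a, b) for a, b in zip(patches_a, patches_b))
    n = len(ds)
    return ds[n // 2] if n % 2 else 0.5 * (ds[n // 2 - 1] + ds[n // 2])
```

The median is preferred over the mean here because a few badly reproduced saturated patches would otherwise dominate the summary statistic.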

  15. Apollo 17 Landing Site Topography from LROC NAC Stereo Data — First Analysis and Results

    NASA Astrophysics Data System (ADS)

    Oberst, J.; Scholten, F.; Matz, K.-D.; Roatsch, T.; Wählisch, M.; Haase, I.; Gläser, P.; Gwinner, K.; Robinson, M. S.; Lroc Team

    2010-03-01

    The LROC NAC camera onboard the LRO mission provides stereo data with a ground scale of 0.5-1.5 m. We used our DLR photogrammetric processing system to compute a digital terrain model (DTM) of the Apollo 17 landing site and show first results.

  16. Apollo 17 Landing Site Topography from LROC NAC Stereo Data --- First Analysis and Results

    Microsoft Academic Search

    J. Oberst; F. Scholten; K.-D. Matz; T. Roatsch; M. Wählisch; I. Haase; P. Gläser; K. Gwinner; M. S. Robinson

    2010-01-01

    The LROC NAC camera onboard the LRO mission provides stereo data with a ground scale of 0.5-1.5 m. We used our DLR photogrammetric processing system to compute a digital terrain model (DTM) of the Apollo 17 landing site and show first results.

  17. Heart imaging by cadmium telluride gamma camera

    Microsoft Academic Search

    Ch. Scheiber; B. Eclancher; J. Chambron; V. Prat; A. Kazandjan; A. Jahnke; R. Matz; S. Thomas; S. Warren; M. Hage-Hali; R. Regal; P. Siffert; M. Karman

    1999-01-01

    Cadmium telluride semiconductor detectors (CdTe) operating at room temperature are attractive for medical imaging because of their good energy resolution providing excellent spatial and contrast resolution. The compactness of the detection system allows the building of small light camera heads which can be used for bedside imaging. A mobile pixellated gamma camera based on 2304 CdTe (pixel size: 3×3mm, field

  18. Digital image processing of metric camera imagery

    Microsoft Academic Search

    P. Lohmann

    1985-01-01

    The use of digitized Spacelab metric camera imagery for map updating is demonstrated for an area of Germany featuring agricultural and industrial areas, and a region of the White Nile. LANDSAT and Spacelab images were combined, and digital image processing techniques used for image enhancement. Updating was achieved by semiautomatic techniques, but for many applications manual editing may be feasible.

  19. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
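The delay-line idea can be sketched with a simple frame buffer: each incoming frame is paired with the frame captured a fixed number of frames earlier. The class name and frame-count delay are illustrative; the actual technique sizes the delay to match the camera's translation time between eye positions.

```python
from collections import deque

class StereoFromOneCamera:
    """Pair each frame with the frame captured `delay` frames earlier.

    As the camera translates laterally, the delayed frame serves as one
    eye's image and the current frame as the other's. A sketch of the
    delay principle only.
    """

    def __init__(self, delay):
        # Buffer holds the current frame plus `delay` earlier frames.
        self.buf = deque(maxlen=delay + 1)

    def push(self, frame):
        """Feed one frame; return a (delayed, current) pair once available."""
        self.buf.append(frame)
        if len(self.buf) == self.buf.maxlen:
            return self.buf[0], frame   # (earlier-position view, current view)
        return None                      # delay line still filling
```

For example, with `delay=2`, the third frame pushed is paired with the first, the fourth with the second, and so on.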

  20. Occluded object imaging via optimal camera selection

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Tong, Xiaomin; Ma, Wenguang; Yu, Rui

    2013-12-01

    High performance occluded object imaging in cluttered scenes is a significant challenging task for many computer vision applications. Recently, camera array synthetic aperture imaging has proven to be an effective way of seeing objects through occlusion. However, the imaging quality of an occluded object is often significantly decreased by the shadows of the foreground occluder. Although some works have been presented to label the foreground occluder via object segmentation or 3D reconstruction, these methods fail in the case of complicated occluders and severe occlusion. In this paper, we present a novel optimal camera selection algorithm to solve the above problem. The main characteristics of this algorithm include: (1) Instead of synthetic aperture imaging, we formulate the occluded object imaging problem as an optimal camera selection and mosaicking problem. To the best of our knowledge, our proposed method is the first one for occluded object mosaicking. (2) A greedy optimization framework is presented to propagate the visibility information among various depth focus planes. (3) A multiple-label energy minimization formulation is designed in each plane to select the optimal camera. The energy is estimated in the synthetic aperture image volume and integrates the multi-view intensity consistency, previous visibility property, and camera view smoothness, and is minimized via graph cuts. We compare our method with the state-of-the-art synthetic aperture imaging algorithms, and extensive experimental results with qualitative and quantitative analysis demonstrate the effectiveness and superiority of our approach.
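As a toy stand-in for the multi-label energy minimization described above (the paper uses graph cuts over synthetic aperture image volumes), consider a 1-D strip of image tiles where each camera has a per-tile visibility score and switching cameras between adjacent tiles costs a constant smoothness penalty. On a chain this energy can be minimized exactly with a Viterbi pass; all names and scores here are illustrative.

```python
import numpy as np

def select_cameras(unary, smooth=0.1):
    """Choose one camera per tile along a 1-D strip of tiles.

    unary[c, t]: visibility score of camera c at tile t (higher is better).
    A constant penalty `smooth` is paid whenever adjacent tiles switch
    cameras, mimicking a camera-view smoothness term. Solved exactly by
    dynamic programming (Viterbi) since the tile graph is a chain.
    """
    ncams, ntiles = unary.shape
    cost = -unary                      # minimize negative visibility
    best = cost[:, 0].copy()           # best cumulative cost ending in each camera
    back = np.zeros((ncams, ntiles), dtype=int)
    for t in range(1, ntiles):
        # trans[to, frm]: cost of arriving at camera `to` from camera `frm`
        trans = best[None, :] + smooth * (1 - np.eye(ncams))
        back[:, t] = np.argmin(trans, axis=1)
        best = cost[:, t] + np.min(trans, axis=1)
    labels = np.empty(ntiles, dtype=int)
    labels[-1] = int(np.argmin(best))
    for t in range(ntiles - 1, 0, -1):  # backtrack the optimal assignment
        labels[t - 1] = back[labels[t], t]
    return labels
```

The real formulation adds multi-view intensity consistency and visibility propagation across depth planes, and requires graph cuts because image tiles form a 2-D grid rather than a chain.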

  1. Development of gamma ray imaging cameras

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general-purpose, portable gamma-ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general-purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year. 8 refs.

  2. LROC NAC Digital Elevation Model of Gruithuisen Gamma

    NASA Astrophysics Data System (ADS)

    Braden, S.; Tran, T. N.; Robinson, M. S.

    2009-12-01

    The Gruithuisen Domes have long been of interest as examples of non-mare volcanism [1]. Their form suggests extrusion of silica-rich magmas; the domes possibly date to 3.7-3.85 Ga (around the same time as the Iridum event) and were subsequently embayed by mare [2,3]. Non-mare volcanism is indicated by spectral features known as “red spots” which have (a) high albedo, (b) strong absorption in the ultraviolet, and (c) a wide range of morphologies [4,5,6]. The composition of red spot domes is still unknown, but dacitic or rhyolitic KREEP-rich compositions [5] and mature, low-iron and low-titanium agglutinate-rich soils [7] have been suggested. The existence of non-mare volcanism has major implications for the thermal history and crustal evolution of the Moon. A new digital elevation model (DEM), derived from stereo image pairs acquired with the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), allows detailed investigation of the morphology and thus origin of Mons Gruithuisen Gamma (36.6°N, 40.5°W). The 10 meter per pixel DEM shows relief of ~1500 meters from the summit plateau of Gruithuisen Gamma to the nearby mare surface. This measurement is close to previous estimates of over 1200 meters from Apollo-era images [4]. Previous estimates also suggested that the overall slopes ranged from 15-30° [7]. Radial profiles (n=25) across the eastern two-thirds of the Gruithuisen Gamma DEM show that the overall slope is 17-18° along the north- and northeastern-facing slopes, 14° along the eastern-most edge, 12° on the side facing the contact of the dome material and highlands material, and 11° on the directly southern-facing slope. The north-south diameter of the dome is ~24 km and the east-west diameter is ~18 km. The textures on each slope are remarkably similar and distinct from the highlands and crater slopes, with irregular furrows oriented down-slope. 
The same furrowed texture is not seen on mare domes, which are generally much smoother, flatter, and smaller than red spot domes [8]. Two ~2 km diameter craters on Gamma have likely exposed fresh dome material from below the surface texture, as evidenced by boulders visible in the ejecta. Overall, Gruithuisen Gamma has asymmetric slope morphology, but uniform texture. Topographic analysis and models of rheological properties with data from new LROC DEMs may aid in constraining the composition and origin of Gruithuisen Gamma. [1] Scott and Eggleton (1973) I-805, USGS. [2] Wagner, R.J., et al. (2002) LPSC #1619 [3] Wagner, R.J., et al. (2002) JGR. 104. [4] Chevrel, S.D., Pinet, P.C., and Head J.W. (1999) JGR. 104, 16515-16529 [5] Malin, M. (1974) Earth Planet. Sci. Lett. 21, 331 [6] Whitaker, E.A. (1972) Moon, 4, 348. [7] Head, J.W. and McCord, T.B. (1978) Science. 199, 1433-1436 [8] Head, J.W. and Gifford, A. (1980) Moon and Planets.
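The slope figures quoted above reduce to rise-over-run along each radial profile. A minimal sketch, assuming uniformly spaced elevation samples (the function name and the overall rather than per-segment slope are illustrative simplifications):

```python
import math

def profile_slope_deg(elev, spacing):
    """Overall slope (degrees) of a radial elevation profile.

    elev: elevations in meters, sampled every `spacing` meters outward
    from the summit toward the base. Uses total rise over total run;
    per-segment slopes would resolve local variations instead.
    """
    rise = elev[0] - elev[-1]            # summit minus base elevation
    run = spacing * (len(elev) - 1)       # horizontal distance covered
    return math.degrees(math.atan2(rise, run))
```

For instance, ~1500 m of relief over a ~5 km horizontal run corresponds to an overall slope of about 17°, in line with the values measured on the north- and northeastern-facing flanks.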

  3. IMAGE-BASED PAN-TILT CAMERA CONTROL IN A MULTI-CAMERA SURVEILLANCE ENVIRONMENT

    E-print Network

    Davis, Larry

    IMAGE-BASED PAN-TILT CAMERA CONTROL IN A MULTI-CAMERA SURVEILLANCE ENVIRONMENT. Ser-Nam Lim, Ahmed ... the cameras accurately. Each camera must be able to pan-tilt such that an object detected in the scene ... Each camera is assigned a pan-tilt zero-position. The position of an object detected in one camera is related

  4. Camera Calibration from Images of Spheres

    Microsoft Academic Search

    Hui Zhang; Kwan-yee Kenneth Wong; Guoqiang Zhang

    2007-01-01

    This paper introduces a novel approach for solving the problem of camera calibration from spheres. By exploiting the relationship between the dual images of spheres and the dual image of the absolute conic (IAC), it is shown that the common pole and polar w.r.t. the conic images of 2 spheres are also the pole and polar w.r.t. the IAC. This

  5. Lroc Observations of Permanently Shadowed Regions: Seeing into the Dark

    NASA Astrophysics Data System (ADS)

    Koeber, S. D.; Robinson, M. S.

    2013-12-01

    Permanently shadowed regions (PSRs) near the lunar poles that receive secondary illumination from nearby Sun-facing slopes were imaged by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC). Typically, secondary lighting is optimal in polar areas around the respective solstices and when the LRO orbit is nearly coincident with the sub-solar point (low spacecraft beta angles). NAC PSR images provide the means to search for evidence of surface frosts and unusual morphologies from ice-rich regolith, and aid in planning potential landing sites for future in-situ exploration. Secondary illumination imaging in PSRs requires NAC integration times typically more than ten times greater than nominal imaging. The increased exposure time results in downtrack smear that decreases the spatial resolution of the NAC PSR images. Most long-exposure NAC images of PSRs were acquired with exposure times of 24.2 ms (1-m by 40-m pixels, sampled to 20 m) and 12 ms (1-m by 20-m, sampled to 10 m). The initial campaign to acquire long-exposure NAC images of PSRs in the north pole region ran from February 2013 to April 2013. Relative to the south polar region, PSRs near the north pole are generally smaller (D<24 km) and located in simple craters. Long-exposure NAC images of PSRs in simple craters are often well illuminated by secondary light reflected from Sun-facing crater slopes during the northern summer solstice, allowing many PSRs to be imaged with the shorter exposure time of 12 ms (resampled to 10 m). With the exception of some craters in Peary crater, most northern PSRs with diameters >6 km were successfully imaged (e.g., Whipple, Hermite A, and Rozhestvenskiy U). The third PSR south polar campaign began in April 2013 and will continue until October 2013. The third campaign will expand previous NAC coverage of PSRs and follow up on discoveries with new images of higher signal-to-noise ratio (SNR), higher resolution, and varying secondary illumination conditions. 
This campaign takes an individualized approach to targeting each crater, informed by previous campaign images and the Sun's position. Secondary lighting within the PSRs, though somewhat diffuse, arrives at low incidence angles, which, coupled with nadir NAC imaging, results in large phase angles. Such conditions tend to reduce albedo contrasts, complicating identification of patchy frost or ice deposits. Within the long-exposure PSR images, a few small craters (D<200 m) with highly reflective ejecta blankets have been identified and interpreted as small fresh impact craters. Sylvester N and Main L are Copernican-age craters with PSRs; NAC images reveal debris flows, boulders, and morphologically fresh interior walls indicative of their young age. The identification of albedo anomalies associated with these fresh craters and debris flows indicates that strong albedo contrasts (~2x) associated with small fresh impact craters can be distinguished in PSRs. Lunar highland material has an albedo of ~0.2, while pure water frost has an albedo of ~0.9. If features in PSRs have an albedo similar to lunar highlands, significant surface frost deposits should produce detectable reflective anomalies in the NAC images. However, no reflective anomalies attributable to frost have thus far been identified in PSRs.

  6. A nonparametric approach to comparing the areas under correlated LROC curves

    NASA Astrophysics Data System (ADS)

    Wunderlich, Adam; Noo, Frédéric

    2012-02-01

    In contrast to the ROC assessment paradigm, localization ROC (LROC) analysis provides a means to jointly assess the accuracy of visual search and detection in an observer study. In a typical multireader, multicase (MRMC) evaluation, the data sets are paired so that correlations arise in observer performance both between observers and between image reconstruction methods (or modalities). Therefore, MRMC evaluations motivate the need for a statistical methodology to compare correlated LROC curves. In this work, we suggest a nonparametric strategy for this purpose. Specifically, we find that the seminal work of Sen on U-statistics can be applied to estimate the covariance matrix for a vector of LROC area estimates. The resulting covariance estimator is the LROC analog of the covariance estimator given by DeLong et al. for ROC analysis. Once the covariance matrix is estimated, it can be used to construct confidence intervals and/or confidence regions for purposes of comparing observer performance across reconstruction methods. The utility of our covariance estimator is illustrated with a human-observer LROC evaluation of three reconstruction strategies for fan-beam CT.
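The abstract's estimator is the LROC analog of DeLong et al.'s ROC covariance estimator. A minimal sketch of that ROC-side building block for a vector of correlated AUCs follows (illustrative data; the LROC version would replace the Mann-Whitney kernel with localization-aware scoring):

```python
import numpy as np

def auc_and_components(pos, neg):
    # Mann-Whitney kernel: 1 if pos > neg, 0.5 on ties.
    diff = pos[:, None] - neg[None, :]
    psi = (diff > 0).astype(float) + 0.5 * (diff == 0)
    # AUC, per-positive structural components V10, per-negative V01.
    return psi.mean(), psi.mean(axis=1), psi.mean(axis=0)

def delong_cov(pos_scores, neg_scores):
    """DeLong-style covariance matrix for a vector of correlated AUC
    estimates; pos_scores/neg_scores hold one array per modality,
    paired across the same cases."""
    aucs, v10, v01 = [], [], []
    for p, q in zip(pos_scores, neg_scores):
        a, c10, c01 = auc_and_components(p, q)
        aucs.append(a); v10.append(c10); v01.append(c01)
    v10, v01 = np.array(v10), np.array(v01)
    m, n = v10.shape[1], v01.shape[1]
    return np.array(aucs), np.cov(v10) / m + np.cov(v01) / n

# Two identical "modalities" on toy scores: equal AUCs, fully correlated.
pos = np.array([0.9, 0.8, 0.7])
neg = np.array([0.1, 0.75, 0.2])
aucs, cov = delong_cov([pos, pos], [neg, neg])
```

The covariance matrix feeds directly into Wald-style confidence intervals for AUC differences, which is the use the abstract describes for the LROC analog.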

  7. Why do the image widths from the various cameras change?

    Atmospheric Science Data Center

    2014-12-08

    ... camera the focal length must be greater in order to preserve resolution. All of the camera images are 1504 pixels wide, and the focal ... the time interval between when each camera acquires its image of a given area. The area of overlap among the cameras depends on ...

  8. New insight into lunar impact melt mobility from the LRO camera

    Microsoft Academic Search

    V. J. Bray; L. L. Tornabene; L. P. Keszthelyi; A. S. McEwen; B. R. Hawke; T. A. Giguere; S. A. Kattenhorn; W. B. Garry; B. Rizk; C. M. Caudill; L. R. Gaddis; C. H. van der Bogert

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact

  9. Investigation of Layered Lunar Mare Lava flows through LROC Imagery and Terrestrial Analogs

    NASA Astrophysics Data System (ADS)

    Needham, H.; Rumpf, M.; Sarah, F.

    2013-12-01

    High resolution images of the lunar surface have revealed layered deposits in the walls of impact craters and pit craters in the lunar maria, which are interpreted to be sequences of stacked lava flows. The goal of our research is to establish quantitative constraints and uncertainties on the thicknesses of individual flow units comprising the layered outcrops, in order to model the cooling history of lunar lava flows. The underlying motivation for this project is to identify locations hosting intercalated units of lava flows and paleoregoliths, which may preserve snapshots of the ancient solar wind and other extra-lunar particles, thereby providing potential sampling localities for future missions to the lunar surface. Our approach involves mapping layered outcrops using high-resolution imagery acquired by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), with constraints on flow unit dimensions provided by Lunar Orbiter Laser Altimeter (LOLA) data. We have measured thicknesses of ~ 2 to > 20 m. However, there is considerable uncertainty in the definition of contacts between adjacent units, primarily because talus commonly obscures contacts and/or prevents lateral tracing of the flow units. In addition, flows may have thicknesses or geomorphological complexity at scales approaching the limit of resolution of the data, which hampers distinguishing one unit from another. To address these issues, we have undertaken a terrestrial analog study using World View 2 satellite imagery of layered lava sequences on Oahu, Hawaii. These data have a resolution comparable to LROC NAC images of 0.5 m. The layered lava sequences are first analyzed in ArcGIS to obtain an initial estimate of the number and thicknesses of flow units identified in the images. We next visit the outcrops in the field to perform detailed measurements of the individual units. 
We have found that fewer flow units are identified in the remote sensing data than in the field analysis, because the resolution of the data precludes identification of subtle flow contacts, and the identified 'units' are in fact multiple compound units. Other factors such as vegetation and shadows may alter the view in the satellite imagery. This means that clarity in the lunar study may also be affected by factors such as lighting angle and the amount of debris overlying the lava sequence. The compilation of field and remote sensing measurements allows us to determine the uncertainty on unit thicknesses, which can be modeled to establish the uncertainty on the calculated depths of penetration of the resulting heat pulse into the underlying regolith. This in turn provides insight into the survivability of extra-lunar particles in paleoregolith layers sandwiched between lava flows.

  10. A non-intuitive aspect of Swensson's LROC model

    NASA Astrophysics Data System (ADS)

    Judy, Philip F.

    2007-03-01

    If the locations of abnormalities (targets) in an image are unknown, the evaluation of human observers' detection performance can be complex. Richard Swensson in 1996 developed a model that unified the various analysis approaches to this problem. For the LROC experiment, the model assumed that a false-positive report arises from the latent decision variable of the most suspicious non-target location of the target stimuli. Localization scoring was based on the same latent decision variable, i.e., when the latent decision variable at the non-target location was greater than the latent decision variable at the target location, the response was scored as a miss. Human observer reports vary, i.e., different locations have been identified during replications. A Monte Carlo model was developed to investigate this variation and identified a non-intuitive aspect of Swensson's LROC model. When the number of potentially suspicious locations was 1, the model performance was greater than apparently possible. For example, assume that the expected target latent decision variable is 1.0, and that both target and non-target standard deviations are 1.0. The model predicts an area under the ROC of 0.815, which implies d_a = 1.27. If the target latent decision variable was 0.0, then d_a = 0.61. The reason is that the number of latent decision variables in the model is one for a non-target stimulus, while a target stimulus takes the maximum of 2. The simulation indicated that the parameters of an LROC fit, when the number of suspicious locations is small or the observer performance is low, do not have the same intuitive meaning as ROC parameters of an SKE task.
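The non-intuitive inflation described above is easy to reproduce with a small Monte Carlo: with one suspicious non-target location and zero signal (target mean 0.0), the target-stimulus rating is the maximum of two standard normal draws, so the area under the ROC comes out near 2/3 rather than 0.5, consistent with the d_a = 0.61 quoted above. This is a sketch, not Swensson's or the author's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# One suspicious non-target location per image; zero signal (target mean 0.0).
# Target-stimulus rating = max(target dv, non-target dv); a non-target
# stimulus reports its single non-target dv.
target_ratings = np.maximum(rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n))
nontarget_ratings = rng.normal(0.0, 1.0, n)

# Mann-Whitney estimate of the area under the ROC; expected value is
# P(max(Z1, Z2) > Z3) = 2/3 for iid standard normals.
auc = (target_ratings[:, None] > nontarget_ratings[None, :]).mean()
```
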

  11. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  12. The painting camera: an abstract rendering system with camera-control images 

    E-print Network

    Meadows, Scott Harrison

    2000-01-01

    to be developed. This thesis describes a new method for generating abstract computer images using a simple and intuitive rendering technique. This technique, inspired by cubism, is based on ray tracing and uses images to define camera parameters. These images...

  13. Computing the Camera Motion Direction from Many Images

    Microsoft Academic Search

    John Oliensis

    2006-01-01

    We analyze the problem of estimating a camera's motion direction from a calibrated multi-image sequence. We assume that the camera moves roughly along a line and that its velocity and orientation are unknown and can vary over time. For infinitesimal camera motion (multiple flows rather than multiple images), we give a closed-form expression for the result of minimizing the true

  14. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 in the United States. Standard treatment is a highly invasive exploratory neck surgery called “parathyroidectomy”. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high-resolution (~1 mm) and high-sensitivity (10x that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  15. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
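Of the examination methods listed, noise introduced by the pixel array lends itself to a quick illustration: a fixed per-pixel noise pattern can be estimated from images known to come from a camera and matched against a query image by correlation. A toy sketch with synthetic images (the mean-filter "denoiser", scene model, and thresholds are invented; forensic practice uses far more careful denoising and statistics):

```python
import numpy as np

def residual(img):
    # Noise residual: image minus a crude 3x3 mean-filtered version
    # (a stand-in for the wavelet denoising used in forensic practice).
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    smooth = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img - smooth

def ncc(a, b):
    # Normalized cross-correlation between two residuals.
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(7)
pattern_a = rng.normal(0, 1, (64, 64))   # camera A's fixed pixel-array noise
pattern_b = rng.normal(0, 1, (64, 64))   # a different camera's pattern

def shoot(seed, pattern):
    # Smooth synthetic scene + fixed sensor pattern + random shot noise.
    r = np.random.default_rng(seed)
    gx, gy = r.uniform(0.2, 1.0, 2)
    scene = np.add.outer(np.arange(64.0) * gx, np.arange(64.0) * gy)
    return scene + pattern + r.normal(0, 1, (64, 64))

# Reference pattern for camera A, averaged from ten known images.
reference = np.mean([residual(shoot(s, pattern_a)) for s in range(10)], axis=0)

same_cam = ncc(residual(shoot(100, pattern_a)), reference)   # high correlation
other_cam = ncc(residual(shoot(100, pattern_b)), reference)  # near zero
```
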

  16. Color image processing in Canon's digital camera

    NASA Astrophysics Data System (ADS)

    Udagawa, Yoshiro

    1996-03-01

    A new TTL white balance method is studied to achieve good white balance under a wide variety of illumination in a digital camera system. In this method, white balance parameters are determined by analyzing only a single shot. The locus of the white point under various illuminants is plotted in a certain 2D color space, and the point in the space that is assumed to correspond to a white point in the scene is compared to the locus. Four complementary color filters (cyan, yellow, magenta and green) attached to a CCD imager generate four color signals, which are transformed by a 3 x 4 matrix into red, green and blue. An adaptive matrix method, corresponding to the color temperature of the illumination, is compared to the fixed matrix method. Calculation of the adaptive matrix is based on the von Kries law.
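The 3 x 4 matrix step can be illustrated under idealized filter responses (cyan = G+B, magenta = R+B, yellow = R+G, plus a native green channel). The matrix below is derived from those assumptions and is not Canon's actual fixed or adaptive matrix:

```python
import numpy as np

# Inverting the ideal complementary responses gives one simple 3x4 matrix.
M = np.array([
    [-0.5, 0.5,  0.5, 0.0],   # R = (M + Y - C) / 2
    [ 0.0, 0.0,  0.0, 1.0],   # G = G
    [ 0.5, 0.5, -0.5, 0.0],   # B = (C + M - Y) / 2
])

def cmyg_to_rgb(cmyg):
    return M @ cmyg

# Simulate the four filter signals for a known RGB value, then recover it.
r, g, b = 0.2, 0.5, 0.7
cmyg = np.array([g + b, r + b, r + g, g])
print(cmyg_to_rgb(cmyg))   # recovers [0.2, 0.5, 0.7]
```

An adaptive method, as in the abstract, would select or adjust these coefficients based on the estimated color temperature rather than keeping them fixed.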

  17. LCD display screen performance testing for handheld thermal imaging cameras

    Microsoft Academic Search

    Joshua B. Dinaburg; Francine Amon; Anthony Hamins; Paul Boynton

    2006-01-01

    Handheld thermal imaging cameras are an important tool for the first responder community. As their use becomes more prevalent, it will become important for a set of standard test metrics to be available to characterize the performance of these cameras. A major factor in the performance of the imagers is the quality of the image on a display screen. An

  18. MIT Media Lab Camera Culture Coded Computational Imaging

    E-print Network

    Agrawal, Amit

    Coded Computational Imaging: Light Fields and Applications. Ankit Mohan, MIT Media Lab Camera Culture. Slide fragments: "... everything that she can see"; "... is intensity of light, seen from a single view point, over time; snapshot."

  19. 2000-fps digital imager for replacing 16-mm film cameras

    NASA Astrophysics Data System (ADS)

    Balch, Kris S.

    1999-06-01

    For many years 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, airborne weapon testing, range tracking, and other hazardous environments. The companies and government agencies using these cameras need to replace them with a more cost-effective solution. Film-based cameras still produce the best resolving capability. However, film development time, chemical disposal, non-optimal lighting conditions, recurring media cost, and faster digital analysis are factors driving the desire for a 16 mm film camera replacement. This paper describes a new imager from Kodak that has been designed to replace 16 mm high-speed film cameras. Also included is a detailed configuration, operational scenario, and cost analysis of Kodak's imager for airborne applications. The KODAK EKTAPRO HG Imager, Model 2000, is a high-resolution color or monochrome CCD camera especially designed to replace rugged high-speed film cameras. The HG Imager is a self-contained camera. It features a high-resolution (512 x 384), light-sensitive CCD sensor with an electronic shutter. This shutter provides blooming protection that prevents 'smearing' of bright light sources, e.g., a camera looking into a bright sun reflection. The HG Imager is a very rugged camera packaged in a highly integrated housing. The imager operates from +22 to 42 VDC. The HG Imager has a similar interface and form factor to those of high-speed film cameras, e.g., the Photosonics 1B. However, the HG also has digital interfaces, such as 100-Base-T Ethernet and RS-485, that enable control and image transfer. The HG Imager is designed to replace 16 mm film cameras that support rugged testing applications.

  20. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the Library of Congress collection. This paper describes practical issues that were considered in the design of EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  1. Decision Strategies that Maximize the Area Under the LROC Curve

    E-print Network

    Decision Strategies that Maximize the Area Under the LROC Curve. Parmeshwar Khurd, Student Member ... of the signal, then it would be similarly valuable to have a decision strategy that optimized a relevant scalar ... uncertainty using the LROC methodology. Therefore, we derive decision strategies that maximize the area ...

  2. Digital Camera Identification from Images Estimating False Acceptance Probability

    E-print Network

    Fridrich, Jessica

    ... the state-of-the-art identification method and discuss its practical issues. In the camera identification ... time and date, to detect image forgeries and manipulations, reverse-engineer cameras and more ... introduced a large number of image forensic tools [4] that can reveal forgeries. Forensic analysis ...

  3. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
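A heavily reduced sketch of the backward-tracing idea above, in a 2-D world with a 1-D sensor and the lenslet array omitted: each pixel sends a fan of rays back through entrance-pupil samples, the rays are intersected with fronto-parallel object planes, and the hits are averaged into a pixel value. All geometry and the scene are invented for illustration, not taken from the paper's raytracing package:

```python
import numpy as np

f = 50.0                                  # lens-to-sensor distance (arbitrary)
pupil = np.linspace(-5.0, 5.0, 11)        # entrance-pupil sample heights
pixels = np.linspace(-10.0, 10.0, 41)     # sensor pixel heights

# Scene approximated as planes at increasing depth, each an intensity-vs-height
# function; intensity 0 is treated as transparent.
planes = [
    (200.0, lambda y: np.where(np.abs(y) < 30.0, 1.0, 0.0)),  # near bright square
    (400.0, lambda y: 0.3 * np.ones_like(y)),                 # far backdrop
]

image = np.zeros_like(pixels)
for i, px in enumerate(pixels):
    hits = []
    for pu in pupil:
        # Backward ray from sensor point (z=-f, y=px) through pupil (z=0, y=pu).
        slope = (pu - px) / f
        for z, shade in sorted(planes, key=lambda pl: pl[0]):  # nearest first
            v = float(shade(np.array(pu + slope * z)))
            if v > 0.0:               # opaque: the ray terminates here
                hits.append(v)
                break
    image[i] = np.mean(hits)          # integrate the cone of rays
```

Central pixels see only the near square; edge pixels blend square and backdrop, which is the defocus/parallax mixing a plenoptic reconstruction algorithm would then try to exploit.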

  4. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 µm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 µm pitch backside-illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere-to-plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  5. Dual-band camera system with advanced image processing capability

    Microsoft Academic Search

    Oliver Schreer; Mónica López Sáenz; Christian Peppermueller; Uwe Schmidt

    2007-01-01

    A dual-band IR camera system based on a dual-band QWIP focal plane array in 384x288x2 format was developed. The camera delivers exactly pixel-registered simultaneously acquired images and exhibits an excellent NETD of <30 mK at an integration time of less than 10 ms. It is equipped with Camera Link and Gigabit Ethernet data interface and is connected to and operated

  6. Colorimetric calibration of CCD cameras for self-luminous images

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Chen, Yung-Chang

    1998-06-01

    Reproducing colors with rich saturation from self-luminous objects is usually recognized as an essential issue for CCD camera imaging. In this paper, we propose a colorimetric calibration scheme for self-luminous images for CCD cameras, and devise an efficient algorithm to generate highly saturated color stimuli for investigating CCD camera performance in image reproduction. In this scheme, a set of color samples containing highly saturated colors is generated from an advanced CRT as color stimuli for colorimetric characterization. To demonstrate the effectiveness of the algorithm, a realization of color samples uniformly distributed in CIE LAB is presented for illustration.

  7. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  8. Laser speckle imaging using a consumer-grade color camera

    E-print Network

    Choi, Bernard

    Laser speckle imaging using a consumer-grade color camera. Owen Yang and Bernard Choi, Beckman Laser Institute, University of California, Irvine ... The color camera was a 14 bit color complementary metal-oxide-semiconductor (CMOS) camera (Canon 5D Mark ...

  9. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...be reported. Exports that are not authorized by an individually validated license of thermal imaging cameras controlled by ECCN 6A003.b.4.b to Albania, Australia, Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic,...

  10. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...be reported. Exports that are not authorized by an individually validated license of thermal imaging cameras controlled by ECCN 6A003.b.4.b to Albania, Australia, Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic,...

  11. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...be reported. Exports that are not authorized by an individually validated license of thermal imaging cameras controlled by ECCN 6A003.b.4.b to Albania, Australia, Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic,...

  12. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...authorized by an individually validated license of thermal imaging cameras controlled by ECCN 6A003.b.4.b to Albania, Australia, Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France,...

  13. Multi-camera: interactive rendering of abstract digital images

    E-print Network

    Smith, Jeffrey Statler

    2004-09-30

    MULTI-CAMERA: INTERACTIVE RENDERING OF ABSTRACT DIGITAL IMAGES. A Thesis by JEFFREY STATLER SMITH. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 2003. Major Subject: Visualization Sciences.

  14. Bin mode estimation methods for Compton camera imaging

    NASA Astrophysics Data System (ADS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-10-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods.
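The maximum likelihood estimation the abstract builds on is conventionally solved with the classic MLEM multiplicative update for Poisson emission data. A minimal sketch of that baseline update with a generic system matrix (not the authors' Compton-camera model, bin-mode extension, or accelerated variant):

```python
import numpy as np

def mlem(A, y, n_iter=20000):
    # Classic MLEM update: lam <- lam * A^T(y / (A lam)) / A^T 1.
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ lam                        # forward projection
        lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return lam

# Tiny synthetic system: 6 detector bins viewing 3 image bins (illustrative).
rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, (6, 3))
truth = np.array([5.0, 1.0, 3.0])
y = A @ truth                                 # noiseless data for this sketch
est = mlem(A, y)
```

The multiplicative form keeps the estimate nonnegative at every iteration; a MAP variant, as in the abstract, would add a prior term to the update.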

  15. Determining the Camera Response from Images: What Is Knowable?

    E-print Network

    Nayar, Shree K.

    Determining the Camera Response from Images: What Is Knowable? Michael D. Grossberg, Member, IEEE ... determined is by establishing a mapping of intensity values between images taken with different exposures. We ... information from a pair of images taken at different exposures is needed to determine the intensity mapping ...

  16. Spherical Image Processing for Accurate Visual Odometry with Omnidirectional Cameras

    E-print Network

    Université Paris-Sud XI

    Spherical Image Processing for Accurate Visual Odometry with Omnidirectional Cameras. Hicham Hadj ... for increasing the accuracy of visual odometry estimation. The omnidirectional images are mapped onto a unit ... the accuracy of the visual odometry obtained using the spherical image processing and the improvement ...

  17. Dual-band camera system with advanced image processing capability

    NASA Astrophysics Data System (ADS)

    Schreer, Oliver; López Sáenz, Mónica; Peppermueller, Christian; Schmidt, Uwe

    2007-04-01

    A dual-band IR camera system based on a dual-band QWIP focal plane array in 384x288x2 format was developed. The camera delivers exactly pixel-registered, simultaneously acquired images and exhibits an excellent NETD of <30 mK at an integration time of less than 10 ms. It is equipped with Camera Link and Gigabit Ethernet data interfaces and is connected to and operated from a personal computer. The camera is equipped with a special dual-band, dual-field-of-view lens (14.6 degree and 2.8 degree diagonal FOV). Radiometric calibration was performed for true quantitative comparison of MWIR and LWIR radiant power. The system uses special software to extract and visualize the often quite small differences between MWIR and LWIR images. The software corrects and processes the images and permits overlaying them with complementary colors so that differences become apparent and can easily be perceived. As a special feature, the system has advanced software for real-time image processing of dynamic scenes. It has an image stabilization feature which compensates for the movement of the camera sensor relative to the observed scene. It also has a powerful image registration capability for automatic stitching of live images to create large mosaic images. The camera system was tested with different scenes and under different weather conditions. It delivers large-format sharp images which reveal many details that would not be perceptible with a single-band IR camera. It permits identifying materials (e.g. glass, asphalt, slate, etc.), distinguishing sun reflections from hot objects, and visualizing hot exhaust gases.

  18. Image Mosaicing Using a Self-Calibration Camera

    NASA Astrophysics Data System (ADS)

    Baataoui, A.; Laraqui, A.; Saaidi, A.; Satori, K.; Jarrar, A.; Masrar, Med.

    2015-06-01

    In this paper, we address the problem of image mosaicing based on a method of self-calibration of CCD cameras with varying intrinsic parameters. We use a new self-calibration method to estimate the intrinsic parameters of the cameras. This method is based on a 3D scene containing an unknown equilateral triangle. The plane of the considered triangle allows us to simplify the self-calibration equations and to estimate the camera intrinsic parameters. The proposed approach is tested on real images of the same scene taken with different orientations of the camera. We show the performance and robustness of the intrinsic parameters estimated by this method on the image mosaicing problem.

  19. Digital Image Forensics for Identifying Computer Generated and Digital Camera Images

    Microsoft Academic Search

    Sintayehu Dehnie; Husrev T. Sencar; Nasir D. Memon

    2006-01-01

    We describe a digital image forensics technique to distinguish im- ages captured by a digital camera from computer generated images. Our approach is based on the fact that image acquisition in a digital camera is fundamentally different from the generative algorithms de- ployed by computer generated imagery. This difference is captured in terms of the properties of the residual image

  20. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance must be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data is saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirement of the system, the pipeline technique and a high-bandwidth bus technique are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of flash and outputs it separately through three different interfaces, Camera Link, LVDS and PAL, which provide image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720*576, the resolution differs between the PAL image and the input image, so the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three-format image sequences correctly, which can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time and improve test efficiency.

  1. VME image acquisition and processing using standard TV CCD cameras

    Microsoft Academic Search

    F. Epaud; P. Verdier

    1994-01-01

    The ESRF has released the first version of a low-cost image acquisition and processing system based on a industrial VME board and commercial CCD TV cameras. The images from standard CCIR (625 lines) or EIA (525 lines) inputs are digitised with 8-bit dynamic range and stored in a general purpose frame buffer to be processed by the embedded firmware. They

  2. Automatic Camera Calibration from a Single Manhattan Image

    Microsoft Academic Search

    J. Deutscher; Michael Isard; John Maccormick

    2002-01-01

    We present a completely automatic method for obtaining the approximate calibration of a camera (alignment to a world frame and focal length) from a single image of an unknown scene, provided only that the scene satisfies a Manhattan world assumption. This assump- tion states that the imaged scene contains three orthogonal, dominant directions, and is often satisfied by outdoor or

  3. Gaze directed camera control for face image acquisition

    Microsoft Academic Search

    Eric Sommerlade; Ben Benfold; Ian Reid

    2011-01-01

    Face recognition in surveillance situations usually requires high resolution face images to be captured from remote active cameras. Since the recognition accuracy is typ- ically a function of the face direction - with frontal faces more likely to lead to reliable recognition - we propose a system which optimises the capturing of such images by using coarse gaze estimates from

  4. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  5. Accurate Single Image Multi-Modal Camera Pose Estimation

    Microsoft Academic Search

    Christoph Bodensteiner; Marcus Hebel; Michael Arens

    2010-01-01

    A well known problem in photogrammetry and computer vi- sion is the precise and robust determination of camera poses with respect to a given 3D model. In this work we propose a novel multi-modal method for single image camera pose estimation with respect to 3D models with intensity information (e.g., LiDAR data with reectance information). We utilize a direct point

  6. Multispectral imaging using a stereo camera: concept, design and assessment

    NASA Astrophysics Data System (ADS)

    Shrestha, Raju; Mansouri, Alamin; Hardeberg, Jon Yngve

    2011-12-01

    This paper proposes a one-shot six-channel multispectral color image acquisition system using a stereo camera and a pair of optical filters. The two filters of the best pair, selected from among readily available filters such that they modify the sensitivities of the two cameras to produce optimal estimates of spectral reflectance and/or color, are placed in front of the two lenses of the stereo camera. The two images acquired from the stereo camera are then registered for pixel-to-pixel correspondence. The spectral reflectance and/or color at each pixel of the scene are estimated from the corresponding camera outputs in the two images. Both simulations and experiments have shown that the proposed system performs well both spectrally and colorimetrically. Since it acquires the multispectral images in one shot, the proposed system overcomes the slow and complex acquisition process and the high cost of state-of-the-art multispectral imaging systems, opening the way to widespread applications.
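    Recovering a full spectrum from six filtered camera channels is typically posed as a linear estimation problem: a matrix mapping camera responses back to spectra is learned by least squares from training reflectances. The sketch below is an illustration under stated assumptions (Gaussian channel sensitivities, training spectra lying in a low-dimensional smooth basis), not the paper's actual sensitivities or training data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_channels, n_train = 31, 6, 200

# Assumed combined sensitivities of the two filtered cameras,
# modeled as Gaussians across 31 spectral bands (400-700 nm).
centers = np.linspace(3, 27, n_channels)
S = np.exp(-0.5 * ((np.arange(n_bands)[:, None] - centers) / 3.0) ** 2)

# Training reflectances, modeled in the low-dimensional linear basis
# spanned by the sensitivities (a common smoothness assumption; real
# systems train on measured charts such as a ColorChecker).
R_train = S @ rng.uniform(0.0, 1.0, (n_channels, n_train))
C_train = S.T @ R_train                 # simulated 6-channel responses

# Linear estimator: least-squares map from camera responses to spectra
W = R_train @ np.linalg.pinv(C_train)

# Estimate an unseen spectrum from its 6-channel camera output
r_true = S @ rng.uniform(0.0, 1.0, n_channels)
r_est = W @ (S.T @ r_true)
```

    Under these idealized assumptions the recovery is exact; with real cameras, noise and model mismatch make a regularized (e.g. Wiener) estimator preferable.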

  7. Decision Strategies Maximizing the Area Under the LROC Curve

    E-print Network

    Decision Strategies Maximizing the Area Under the LROC Curve Parmeshwar Khurd and Gene Gindi of the signal, then it would be similarly valuable to have a decision strategy that optimized a relevant scalar uncertainty using the LROC methodology. We derive decision strategies that maximize the area under the LROC

  8. Mars Global Surveyor Mars Orbiter Camera Image Gallery

    NSDL National Science Digital Library

    Malin Space Science Systems

    This site from Malin Space Science Systems provides access to all of the images acquired by the Mars Orbiter Camera (MOC) during the Mars Global Surveyor mission through March 2005. MOC consists of several cameras: A narrow angle system that provides grayscale high resolution views of the planet's surface (typically, 1.5 to 12 meters/pixel), and red and blue wide angle cameras that provide daily global weather monitoring, context images to determine where the narrow angle views were actually acquired, and regional coverage to monitor variable surface features such as polar frost and wind streaks. Ancillary data for each image is provided and instructions regarding gallery usage are also available on the site.

  9. Efficient height measurement method of surveillance camera image.

    PubMed

    Lee, Joong; Lee, Eung-Dae; Tark, Hyun-Oh; Hwang, Jin-Woo; Yoon, Do-Young

    2008-05-01

    As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but only scant detail, such as facial features and clothing, can be obtained due to limited camera performance. Height, however, is relatively insensitive to camera performance. This paper studied a height measurement method using images from a CCTV camera. The height information was obtained via photogrammetry, including reference points in the photographed area and calculation of the relationship between 3D space and the 2D image through linear and nonlinear calibration. Using this correlation, this paper suggests a height measurement method which projects a 3D virtual ruler onto the image. This method has been proven to offer more stable values within the range of data convergence than existing methods. PMID:18096339
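    The "virtual ruler" idea can be sketched with a pinhole camera model: given a calibrated 3x4 projection matrix, a vertical segment at the subject's foot point is swept over candidate heights until its projected top matches the observed head pixel. All numbers below (intrinsics, pose, foot position) are made-up assumptions for illustration, not values from the paper.

```python
import numpy as np

def project(P, X):
    """Project 3-D points (n, 3) through a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Assumed calibrated camera: intrinsics K, identity rotation, 5 m offset.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])

# Virtual ruler at an assumed foot point; Y is the vertical axis of
# this toy world frame.
foot = np.array([0.5, 0.0, 0.0])
true_height = 1.75
head_pixel = project(P, (foot + [0.0, true_height, 0.0])[None])[0]

# Sweep candidate heights; keep the one whose projected top matches.
heights = np.arange(1.40, 2.01, 0.01)
tops = np.array([foot + [0.0, h, 0.0] for h in heights])
errs = np.linalg.norm(project(P, tops) - head_pixel, axis=1)
best = heights[np.argmin(errs)]
```

    In practice the observed head pixel comes from the image and the projection matrix from the linear/nonlinear calibration described in the abstract.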

  10. Laser speckle imaging using a consumer-grade color camera

    PubMed Central

    Yang, Owen; Choi, Bernard

    2013-01-01

    Laser speckle imaging (LSI) is a noninvasive optical imaging technique able to provide wide-field two-dimensional maps of moving particles. Raw laser speckle images are typically taken with a scientific-grade monochrome camera. We demonstrate that a digital single-lens reflex (dSLR) camera with a Bayer filter is able to provide similar sensitivity despite taking information only from a specific pixel color. Here we demonstrate the effect of changing three primary dSLR exposure settings (i.e., aperture, exposure time/shutter speed, and gain/sensitivity (ISO)) on speckle contrast. In addition, we present data from an in vivo reactive hyperemia experiment that demonstrates the qualitative similarity in blood-flow dynamics visualized with a color dSLR and a scientific-grade monochrome camera. PMID:23027244
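    The quantity behind LSI is the local speckle contrast K = sigma/mu computed over a small sliding window: static speckle has high contrast, while motion blurs the speckle and lowers it. A minimal sketch, using synthetic exponentially distributed intensities as a stand-in for raw speckle frames:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma/mu over a sliding window."""
    w = sliding_window_view(img.astype(float), (win, win))
    mu = w.mean(axis=(-2, -1))
    sd = w.std(axis=(-2, -1))
    return np.divide(sd, mu, out=np.zeros_like(mu), where=mu > 0)

# Fully developed static speckle has exponential intensity statistics
# (K near 1); averaging independent realizations mimics the blurring
# caused by moving particles and lowers K.
rng = np.random.default_rng(1)
static = rng.exponential(1.0, (64, 64))
blurred = 0.25 * sum(rng.exponential(1.0, (64, 64)) for _ in range(4))

K_static = speckle_contrast(static).mean()
K_blurred = speckle_contrast(blurred).mean()
```

    With a Bayer-filter dSLR, as in the paper, the same computation would be applied to a single color channel of the raw frame.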

  11. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  12. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

  13. Anger camera image generation with microcomputers 

    E-print Network

    Williams, Karl Morgan

    1988-01-01

    in medical imaging due to their low efficiency. Low detector efficiency, primarily caused by the low molecular weight of the detector material, requires high doses of radionuclide for proper imaging. These high doses of radionuclide are unacceptable... for routine medical imaging. Solid state detectors are only recently finding their place in medical instrumentation. The most common solid state detectors are made of either lithium drifted silicon, lithium drifted germanium, cadmium telluride, mercuric...

  14. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.
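    The hash-sign-verify scheme described above can be illustrated with textbook RSA: the camera hashes the image file and encrypts the hash with its private key; a verifier decrypts the signature with the public key and compares it with a fresh hash. The sketch below uses deliberately tiny, insecure key numbers and no padding, purely to show the flow; real systems use a vetted cryptographic library.

```python
import hashlib

# Toy RSA key pair (insecure, illustrative only): n = p*q, e*d = 1 mod phi(n).
p, q, e = 1009, 1013, 65537
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # private exponent, kept in the camera

def sign_image(image_bytes: bytes) -> int:
    """Camera side: hash the image file, then encrypt the hash with the
    private key to form the digital signature."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(h, d, n)

def verify_image(image_bytes: bytes, signature: int) -> bool:
    """Verifier side: decrypt the signature with the public key and compare
    it with a freshly computed hash of the file."""
    h = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n
    return pow(signature, e, n) == h

image = b"\x00\x01raw sensor data..."
sig = sign_image(image)
```

    Any alteration of the file changes its hash, so the decrypted signature no longer matches and authentication fails.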

  15. CCD camera response to diffraction patterns simulating particle images.

    PubMed

    Stanislas, M; Abdelsalam, D G; Coudert, S

    2013-07-01

    We present a statistical study of CCD (or CMOS) camera response to small images. Diffraction patterns simulating particle images of a size around 2-3 pixels were experimentally generated and characterized using three-point Gaussian peak fitting, currently used in particle image velocimetry (PIV) for accurate location estimation. Based on this peak-fitting technique, the bias and RMS error between the locations of simulated and real images were accurately calculated using a homemade program. The influence of the intensity variation of the simulated particle images on the response of the CCD camera was studied. The experimental results show that the accuracy of the position determination is very good and is of interest for super-resolution PIV algorithms. Some directions for enlarging and improving the study are proposed in the conclusion. PMID:23842270
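    The three-point Gaussian peak fit used here is a standard sub-pixel estimator in PIV: fitting a Gaussian through the log-intensities of the peak pixel and its two neighbors gives a closed-form offset. A minimal sketch (the sampled Gaussian below is a synthetic stand-in for a recorded diffraction pattern):

```python
import numpy as np

def gaussian_peak_offset(i_m, i_0, i_p):
    """Three-point Gaussian fit: sub-pixel offset of a peak from the
    intensities at the peak pixel (i_0) and its neighbors (i_m, i_p)."""
    lm, l0, lp = np.log(i_m), np.log(i_0), np.log(i_p)
    return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

# Synthetic particle image: a Gaussian of ~2-3 px width, sampled at the
# peak pixel and its two neighbors.
x0, sigma = 0.3, 0.8                     # true sub-pixel center, width
x = np.arange(-1, 2)
I = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
delta = gaussian_peak_offset(I[0], I[1], I[2])
```

    For a noise-free Gaussian the estimator is exact; the paper's statistical study quantifies the bias and RMS error when the input is a real camera's response instead.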

  16. A Refractive Camera for Acquiring Stereo and Super-resolution Images

    Microsoft Academic Search

    Chunyu Gao; Narendra Ahuja

    2006-01-01

    We propose a novel depth sensing system composed of a single camera, and a transparent plate which is placed in front of the camera and rotates about the optical axis of the camera. The camera takes a sequence of images as the plate rotates, which provide the equivalent of a large number of stereo pairs. Compared with conventional multi-camera stereo

  17. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors with readout electronics are becoming more and more a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirement for imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to the more traditional test methods like minimum resolvable temperature difference (MRTD) which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis and optical parameters like e.g. effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors, electrical interfaces and last but not least the suitability for fully automated measurements in mass production.

  18. Image synthesis for blind corners from uncalibrated multiple vehicle cameras

    Microsoft Academic Search

    Kazuki Ichikawa; Jun Sato

    2008-01-01

    Because of the increasing number of car accidents, it is very important to provide safety systems for car drivers. In this paper, we propose a method for synthesizing virtual blind corner images by using multiple vehicle cameras, so that drivers can see the other side of the blind corner. The existing methods require position sensors such as GPS and calibrated

  19. A CCD Camera-based Hyperspectral Imaging System for Stationary and Airborne Applications

    Microsoft Academic Search

    Chenghai Yang; James H. Everitt; Michael R. Davis; Chengye Mao

    2003-01-01

    This paper describes a CCD (charge coupled device) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC computer equipped with a frame grabbing board and camera utility software. The CCD camera provides 1280(h) × 1024(v)

  20. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  1. Insights into Pyroclastic Volcanism on the Moon with LROC Data

    Microsoft Academic Search

    L. R. Gaddis; M. S. Robinson; B. R. Hawke; T. Giguere; O. Gustafson; L. P. Keszthelyi; S. Lawrence; J. Stopar; B. L. Jolliff; J. F. Bell; W. B. Garry

    2009-01-01

    Lunar pyroclastic deposits are high-priority targets for the Lunar Reconnaissance Orbiter Camera. Images from the Narrow Angle Camera (NAC; 0.5 m/pixel) and Wide Angle Camera (WAC; 7 bands, 100 m/pixel visible, 400 m/pixel ultraviolet) are being acquired. Studies of pyroclastic deposits with LRO data have the potential to resolve major questions concerning their distribution, composition, volume, eruptive styles, and role

  2. Automatic Digital Camera Based Fingerprint Image Preprocessing

    Microsoft Academic Search

    B. Y. Hiew; Andrew Teoh Beng Jin; David Ngo Chek Ling

    2006-01-01

    Touch-less fingerprint recognition has been receiving attention recently as it frees from the problems in terms of hygienic, maintenance and latent fingerprints. However, the conventional techniques that used to preprocess the optical or capacitance sensor acquired fingerprint image, for segmentation, enhancement and core point detection, are inadequate to serve the purpose. The problems of the touch-less fingerprint recognition consist of

  3. A novel SPECT camera for molecular imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir

    2011-10-01

    The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.

  4. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or it constitutes a stand-alone product. A multi-modal, photogrammetric platform consists of one or more high-resolution, commercial color cameras, GPS and inertial navigation system as well as optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the Icebridge missions Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame rate of about one second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low cost, commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes, due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. 
Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of ice sheet surface with high accuracy and unprecedented details, as it is demonstrated by examples from the Antarctic Peninsula, acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities that are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and the traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high resolution, repeat stereo imaging we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate but not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, renders velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.

  5. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various requirements of spaceshuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been undistinguishable to the naked eye or to an ordinary camera.

  6. A single neutron sensitive camera based upon a proportional scintillation imaging chamber

    Microsoft Academic Search

    Masayo Suzuki; Tan Takahashi; Hidekazu Kumagai; Masayasu Ishihara; Hisao Kobayashi

    1993-01-01

    A novel neutron camera, based on a scintillation proportional imaging chamber equipped with a 10B powder-coated cathode, is proposed. This instrument yields about three orders of magnitude brighter images for single thermal neutrons compared to existing neutron cameras. It is shown that the prototype camera can perform neutron imaging with spatial and time resolutions of 760 µm (rms) and

  7. Mosaicing of acoustic camera images K. Kim, N. Neretti and N. Intrator

    E-print Network

    Intrator, Nathan

    systems are widely used to obtain images of seabed or other underwater objects. An acoustic cameraMosaicing of acoustic camera images K. Kim, N. Neretti and N. Intrator Abstract: An algorithm, inhomogeneous illumination and low frame rate is presented. Imaging geometry of acoustic cameras

  8. Characterization of a PET Camera Optimized for ProstateImaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi,Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, RonaldH.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated-prostate camera has the same sensitivity and resolution, less background (less randoms and lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity of a point source in the center is 946 cps/µCi. Spatial resolution is 4 mm FWHM in the central region.

  9. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 µm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256×256 pixels, 40 µm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extra-galactic and Galactic fields; a large effort has been dedicated to explore the possibility of achieving precise photometric measurements in the J, H, K astronomical bands, with very promising results.

  10. Evaluation of thermal imaging cameras used in fire fighting applications

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Bryner, Nelson; Hamins, Anthony

    2004-08-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires. Currently there are no standardized test methods or performance metrics available to the users or manufacturers of these instruments. The Building and Fire Research Laboratory (BFRL) at the National Institute of Standards and Technology (NIST) is developing a testing facility and methods to evaluate the performance of thermal imagers used by fire fighters to search for victims and hot spots in burning structures. The facility will test the performance of currently available imagers and advanced fire detection systems, as well as serve as a test bed for new technology. An evaluation of the performance of different thermal imaging detector technologies under field conditions is also underway. Results of this project will provide a quantifiable physical and scientific basis upon which industry standards for imaging performance, testing protocols and reporting practices related to the performance of thermal imaging cameras can be developed. The background and approach that shape the evaluation procedure for the thermal imagers are the primary focus of this paper.

  11. Image/video deblurring using a hybrid camera

    Microsoft Academic Search

    Yu-wing Tai; Hao Du; Michael S. Brown; Stephen Lin

    2008-01-01

    We propose a novel approach to reduce spatially varying motion blur using a hybrid camera system that simultaneously captures high-resolution video at a low frame rate together with low-resolution video at a high frame rate. Our work is inspired by Ben-Ezra and Nayar (3), who introduced the hybrid camera idea for correcting global motion blur for a single still image. We broaden the scope of the problem to address
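    The deblurring step this line of work builds on can be illustrated in isolation. Below is a minimal 1-D Richardson-Lucy deconvolution sketch, the kind of deconvolution commonly applied once a motion point-spread function (PSF) has been estimated from the low-resolution, high-frame-rate stream. The signal, kernel, and iteration count are hypothetical, not taken from the paper.

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=200):
    """Iteratively re-estimate the sharp signal for a known blur kernel."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                      # adjoint of the blur operator
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Hypothetical sharp signal: two spikes, blurred by a 5-tap box (motion) kernel.
sharp = np.zeros(64)
sharp[20], sharp[40] = 1.0, 0.5
psf = np.ones(5)
blurred = np.convolve(sharp, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf)
```

    The multiplicative update keeps the estimate non-negative, which is why Richardson-Lucy is a common default for motion deblurring sketches like this one.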

  12. New technique of three-dimensional imaging through a 3-mm single lens camera

    NASA Astrophysics Data System (ADS)

    Bae, Sam Y.; Korniski, Ron; Ream, Allen; Shahinian, Hrayr; Manohara, Harish M.

    2012-02-01

    We present a technique for imaging full-color 3-D images with a single camera in this paper. Unlike a typical 3-D-imaging system comprising two independent cameras each contributing one viewpoint, the technique presented here creates two viewpoints using a single-lens camera with a bipartite filter whose bandpass characteristics are complementary to each other. The bipartite filter divides the camera's limiting aperture into two spatially separated apertures or viewpoints that alternately image an object field using filter-passband matched, time-sequenced illumination. This technique was applied to construct a 3-D camera to image scenes at a working distance of 10 mm. We evaluated the effectiveness of the 3-D camera in generating stereo images using statistical comparison of the depth resolutions achieved by the 3-D camera and a similar 2D camera arrangement. The comparison showed that the complementary filters produce effective stereopsis at prescribed working distances.

  13. Correction of Spatially Varying Image and Video Motion Blur Using a Hybrid Camera

    E-print Network

    Kim, Dae-Shik

    ...blurred, high-resolution images yields high-frequency details, but with ringing artifacts due to the lack of low-frequency ... a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary

  14. Self calibration of camera with non-linear imaging model

    NASA Astrophysics Data System (ADS)

    Hou, Wenguang; Shang, Tao; Ding, Mingyue

    2007-11-01

    Self-calibration, as developed by researchers in computer vision, commonly assumes a camera with a linear imaging model. Since distortion is present in practice, especially for ordinary cameras, calibration that ignores it cannot meet the accuracy demands of high-precision vision measurement. The distortion is mainly systematic: it is a function of the distortion coefficients, principal point, principal-distance ratio, skew factor, and related parameters. There consequently exists a set of parameters (distortion coefficients, principal point, principal-distance ratio, skew factor, and fundamental matrix) for which homologous points theoretically satisfy the epipolar constraint. Accordingly, this paper proposes a self-calibration method for cameras with a non-linear imaging model, based on the Kruppa equation. When computing the fundamental matrix, all interior elements except the principal distance can be obtained by applying distortion correction to the image coordinates; the principal distance is then recovered from the Kruppa equation. The method requires only homologous points between two images and no prior knowledge of the scene. Numerous experiments have demonstrated its correctness and reliability.
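    The pipeline the abstract describes (correct distortion in the image coordinates, then recover the fundamental matrix from homologous points) can be sketched as follows. This is a generic normalized 8-point implementation checked on noiseless synthetic data, not the authors' Kruppa-equation method; all names and the one-term distortion model are illustrative.

```python
import numpy as np

def undistort(pts, k1, center):
    """First-order correction of one-term radial lens distortion (illustrative).
    With a non-linear model this would be applied to raw pixels before the 8-point step."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2)

def fundamental_8point(x1, x2):
    """Normalized 8-point estimate of F from homologous points (Nx2 arrays)."""
    def normalize(x):
        c = x.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.hstack([x, np.ones((len(x), 1))]) @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.stack([np.kron(q, p) for p, q in zip(p1, p2)])  # rows encode q^T F p = 0
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt                # enforce rank 2
    return T2.T @ F @ T1

# Synthetic two-view geometry (no noise; the k1 = 0 case, so no undistortion needed).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])

def project(P, X):
    x = np.hstack([X, np.ones((len(X), 1))]) @ P.T
    return x[:, :2] / x[:, 2:]

x1, x2 = project(P1, X), project(P2, X)
F = fundamental_8point(x1, x2)
F = F / np.linalg.norm(F)
# Epipolar residuals |x2^T F x1| should vanish for perfect correspondences.
res = np.abs(np.einsum('ni,ij,nj->n',
                       np.hstack([x2, np.ones((20, 1))]), F,
                       np.hstack([x1, np.ones((20, 1))])))
```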

  15. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as being in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  16. An efficient image compressor for charge coupled devices camera.

    PubMed

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors such as JPEG2000 and CCSDS-IDC have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain much complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
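    The lp-norm selection step can be illustrated with a small sketch: project a coefficient block onto both candidate bases and keep the one whose coefficients have the smaller lp norm, i.e. the sparser representation. The block size, p value, and test signals below are hypothetical, not the paper's configuration.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis (rows are basis vectors)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1.0 / np.sqrt(n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def hadamard_matrix(n):
    """Normalized Sylvester-construction Hadamard basis (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def best_posttransform(block, p=0.7):
    """Keep the basis whose coefficients have the smallest lp norm (sparsest)."""
    bases = {"dct": dct_matrix(len(block)), "hadamard": hadamard_matrix(len(block))}
    scores = {name: (np.abs(M @ block) ** p).sum() for name, M in bases.items()}
    name = min(scores, key=scores.get)
    return name, bases[name] @ block

# A smooth block is sparse under the DCT; a square-wave block under Hadamard.
smooth = dct_matrix(8)[1]                              # a pure low-frequency mode
square = np.array([1.0, 1, -1, -1, 1, 1, -1, -1])      # a Walsh (Hadamard) mode
name_smooth, _ = best_posttransform(smooth)
name_square, _ = best_posttransform(square)
```

    With p < 1 the score strongly rewards concentration of energy in few coefficients, which is what makes it a reasonable sparsity proxy for the selection step.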

  17. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors such as JPEG2000 and CCSDS-IDC have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain much complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients. The pair of bases comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977

  18. Imaging of Venus from Galileo - Early results and camera performance

    NASA Technical Reports Server (NTRS)

    Belton, M. J. S.; Gierasch, P.; Klaasen, K. P.; Anger, C. D.; Carr, M. H.; Chapman, C. R.; Davies, M. E.; Greeley, R.; Greenberg, R.; Head, J. W.

    1992-01-01

    Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3 percent, is different from that recorded at short wavelengths. In particular, the 'polar collar', which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20 percent. The optical performance of the camera is described and is judged to be nominal.

  19. Imaging of Venus from Galileo: Early results and camera performance

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.

    1992-01-01

    Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.

  20. A two-camera imaging system for pest detection and aerial application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, whi...

  1. First experiences with ARNICA, the ARCETRI observatory imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.

    1994-03-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometer that Arcetri Observatory has designed and built as a common use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 sec per pixel, with sky coverage of more than 4 min x 4 min on the NICMOS 3 (256 x 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some preliminary considerations on photometric accuracy.

  2. Image-intensifier camera studies of shocked metal surfaces

    SciTech Connect

    Engelke, R.P.; Thurston, R.S.

    1986-01-01

    A high-space-resolution image-intensifier camera with luminance gain of up to 5000 and exposure times as short as 30 ns has been applied to the study of the interaction of posts and welds with strongly shocked metal surfaces, which included super strong steels. The time evolution of a single experiment can be recorded by multiple pulsing of the camera. Phenomena that remain coherent for relatively long durations have been observed. An important feature of the hydrodynamic flow resulting from post-plate interactions is the creation of a wave that propagates outward on the plate; the flow blocks the explosive product gases from escaping through the plate for greater than 10 ..mu..s. Electron beam welds were ineffective in blocking product gases from escaping for even short periods of time.

  3. Image reconstruction from limited angle Compton camera data

    NASA Astrophysics Data System (ADS)

    Tomitani, T.; Hirasawa, M.

    2002-06-01

    The Compton camera is used for imaging the distributions of γ-ray directions in a γ-ray telescope for astrophysics and for imaging radioisotope distributions in nuclear medicine without the need for collimators. The integration of γ rays on a cone is measured with the camera, so that some sort of inversion method is needed. Parra found an analytical inversion algorithm based on spherical harmonics expansion of projection data. His algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and two others are practical. However, the variance of the reconstructed image diverges in these two cases, so that window functions are introduced with which the variance becomes finite at a cost of spatial resolution. These two algorithms are compared in terms of variance. The algorithm based on the inversion of the summed back-projection is superior to the algorithm based on the inversion of the summed projection.
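    The summed back-projection idea can be sketched in a 1-D angular toy model: each Compton event constrains the source direction to the two intersections of its cone with the sky circle, and accumulating those intersections over many events peaks at the true direction. This is an illustrative analogue only, not the spherical-harmonics inversion analyzed in the paper; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 360                          # 1-degree sky bins
sky = np.zeros(n_bins)
true_dir = np.deg2rad(60.0)           # hypothetical source direction

for _ in range(5000):
    # Measured Compton scattering angle (restricted range, as in the
    # limited-angle setting) and the resulting cone axis for this event.
    scatter = rng.uniform(np.deg2rad(10), np.deg2rad(60))
    axis = true_dir + rng.choice([-1.0, 1.0]) * scatter
    # Back-project: the source lies at one of the two cone/sky intersections.
    for s in (-1.0, 1.0):
        angle = (axis + s * scatter) % (2 * np.pi)
        sky[int(round(angle / (2 * np.pi) * n_bins)) % n_bins] += 1.0

peak_deg = int(np.argmax(sky))        # summed back-projection peaks at the source
```

    One intersection per event is always the true direction, so its bin accumulates every event while the spurious intersections smear across the restricted angular range.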

  4. Expert interpretation compensates for reduced image quality of camera-digitized images referred to radiologists.

    PubMed

    Zwingenberger, Allison L; Bouma, Jennifer L; Saunders, H Mark; Nodine, Calvin F

    2011-01-01

    We compared the accuracy of five veterinary radiologists when reading 20 radiographic cases on both analog film and in camera-digitized format. In addition, we compared the ability of five veterinary radiologists vs. 10 private practice veterinarians to interpret the analog images. Interpretation accuracy was compared using receiver operating characteristic curve analysis. Veterinary radiologists' accuracy did not significantly differ between analog vs. camera-digitized images (P = 0.13) although sensitivity was higher for analog images. Radiologists' interpretation of both digital and analog images was significantly better compared with the private veterinarians (P < 0.05). PMID:21831251
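    For the area under the curve, receiver operating characteristic (ROC) analysis of rating data reduces to the Mann-Whitney statistic: the probability that a randomly chosen abnormal case receives a higher confidence score than a randomly chosen normal one. A minimal sketch with hypothetical 5-point ratings (not data from the study):

```python
import numpy as np

def roc_auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic.
    Ties contribute half a win, matching the trapezoidal ROC area."""
    pos = np.asarray(pos_scores, float)[:, None]
    neg = np.asarray(neg_scores, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.shape[0] * neg.shape[1])

# Hypothetical confidence ratings for abnormal (pos) and normal (neg) cases.
auc = roc_auc([5, 4, 4, 3], [2, 1, 3, 2])
```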

  5. Coulomb-explosion imaging using a pixel-imaging mass-spectrometry camera

    NASA Astrophysics Data System (ADS)

    Slater, Craig S.; Blake, Sophie; Brouard, Mark; Lauer, Alexandra; Vallance, Claire; Bohun, C. Sean; Christensen, Lauge; Nielsen, Jens H.; Johansson, Mikael P.; Stapelfeldt, Henrik

    2015-05-01

    Femtosecond laser-induced Coulomb-explosion imaging of 3,5-dibromo-3′,5′-difluoro-4′-cyanobiphenyl molecules prealigned in space is explored using a pixel-imaging mass-spectrometry (PImMS) camera. The fast-event-triggered camera allows the concurrent detection of the correlated two-dimensional momentum images, or covariance maps, of all the ionic fragments resulting from fragmentation of multiple molecules in each acquisition cycle. Detailed simulation of the covariance maps reveals that they provide rich information about the parent molecular structure and fragmentation dynamics. Future opportunities for imaging the real-time dynamics of intramolecular processes are considered.
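    Covariance mapping, the analysis behind the covariance maps mentioned above, correlates fragment signals across many acquisition cycles: cov(X, Y) = <XY> - <X><Y> is large for fragments born in the same fragmentation event and near zero otherwise. A toy sketch with simulated shot data (channel names and rates are hypothetical, not the experiment's):

```python
import numpy as np

rng = np.random.default_rng(2)
shots = 20000
# Hypothetical per-shot ion counts: every fragmenting molecule feeds one ion
# into channel A and one into channel B, so A and B co-vary; channel C is an
# uncorrelated control. Rates are invented for illustration.
fragmenting = rng.poisson(1.0, shots)
chan_a = fragmenting + rng.poisson(0.5, shots)   # correlated + background
chan_b = fragmenting + rng.poisson(0.5, shots)
chan_c = rng.poisson(0.5, shots)                 # background only

def covariance(x, y):
    """One pixel of a covariance map: <XY> - <X><Y> over acquisition cycles."""
    return (x * y).mean() - x.mean() * y.mean()

cov_ab = covariance(chan_a, chan_b)   # ~ Var(fragmenting) = 1 for correlated pair
cov_ac = covariance(chan_a, chan_c)   # ~ 0 for uncorrelated pair
```

    In the experiment each "channel" is a pixel/time bin of the PImMS detector, so this scalar generalizes to a full 2-D map.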

  6. MECHANICAL ADVANCING HANDLE THAT SIMPLIFIES MINIRHIZOTRON CAMERA REGISTRATION AND IMAGE COLLECTION

    EPA Science Inventory

    Minirhizotrons in conjunction with a minirhizotron video camera system are becoming widely used tools for investigating root production and survival in a variety of ecosystems. Image collection with a minirhizotron camera can be time consuming and tedious, particularly when hundre...

  7. Dynamic Imaging with High Resolution Time-of-Flight PET Camera - TOFPET I

    Microsoft Academic Search

    N. A. Mullani; J. Gaeta; K. Yerian; W. H. Wong; R. K. Hartz; E. A. Philippe; D. Bristow; K. L. Gould

    1984-01-01

    One of the major design goals of the TOFPET I positron camera was to produce a high resolution whole body positron camera capable of dynamically imaging an organ such as the heart. TOFPET I is now nearing completion and preliminary images have been obtained to assess its dynamic and three dimensional imaging capabilities. Multiple gated images of the uptake of

  8. Single-quantum dot imaging with a photon counting camera

    PubMed Central

    Michalet, X.; Colyer, R. A.; Antelman, J.; Siegmund, O.H.W.; Tremsin, A.; Vallerga, J.V.; Weiss, S.

    2010-01-01

    The expanding spectrum of applications of single-molecule fluorescence imaging ranges from fundamental in vitro studies of biomolecular activity to tracking of receptors in live cells. The success of these assays has relied on progresses in organic and non-organic fluorescent probe developments as well as improvements in the sensitivity of light detectors. We describe a new type of detector developed with the specific goal of ultra-sensitive single-molecule imaging. It is a wide-field, photon-counting detector providing high temporal and high spatial resolution information for each incoming photon. It can be used as a standard low-light level camera, but also allows access to a lot more information, such as fluorescence lifetime and spatio-temporal correlations. We illustrate the single-molecule imaging performance of our current prototype using quantum dots and discuss on-going and future developments of this detector. PMID:19689323

  9. Imaging sensitivity of three kind of high-sensitivity imaging cameras under short-pulsed light illumination

    Microsoft Academic Search

    Hideyuki TAKAHASHI; Kouichi SAWADA; Koki ABE; Yoshimi TAKAO; Kazutoshi WATANABE

    As part of the development of an optical system for precise in situ observation of negatively phototactic fish, the characteristics of three different types of high-sensitivity camera were investigated under short-pulsed light illumination of different colors. The three camera types were an Image Intensifier coupled to a CCD camera, an EB-CCD camera, and a HARP camera and

  10. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
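    The frequency identification and false-mode prediction described above can be sketched numerically: a vibration below the Nyquist frequency is recovered directly from the frame-summed gray-level signal, while one above it folds to a predictable alias. The frame rate and frequencies below are hypothetical, not the paper's measured values (its critical frequencies were 60 Hz and 7.8 Hz for the two cameras).

```python
import numpy as np

fps = 60.0                       # assumed camera frame rate
duration = 10.0                  # seconds of video
t = np.arange(0.0, duration, 1.0 / fps)

def dominant_frequency(signal, rate):
    """Peak of the one-sided amplitude spectrum (DC removed)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    return np.fft.rfftfreq(len(signal), 1.0 / rate)[np.argmax(spectrum)]

# A 13 Hz vibration is below Nyquist (30 Hz) and is recovered directly.
f_slow = dominant_frequency(np.sin(2 * np.pi * 13.0 * t), fps)

# A 97 Hz vibration exceeds Nyquist and shows up as a non-physical mode.
f_fast_measured = dominant_frequency(np.sin(2 * np.pi * 97.0 * t), fps)
# The alias is predicted by folding the true frequency about the frame rate:
predicted_alias = abs(97.0 - round(97.0 / fps) * fps)   # |97 - 2*60| = 23 Hz
```

    Knowing the frame rate thus lets the non-physical modes be predicted and excluded, which is the role of the simple model in the paper.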

  11. Coincidence imaging using a standard dual head gamma camera

    SciTech Connect

    Miyaoka, R.S.; Costa, W.L.S.; Lewellen, T.K. [and others

    1996-12-31

    Coincidence electronics and a data acquisition system were developed to explore coincidence detection using a conventional dual head gamma camera. A high impedance pick-off circuit provides position and energy signals without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Thin lead-tin-copper filters are used to reduce the flux of low energy photons to the detectors. The data are stored in list mode format. The measured coincidence timing resolution for the system is 9 nsec FWHM (450 kcps/detector) and the energy resolution is 11% (650 kcps/detector). The system sensitivity is 46 kcps/{mu}Ci/cc for a 20 cm diameter (18 cm length) cylindrical phantom centered in the field of view. A scatter fraction of 31% was measured using the 20 cm cylindrical phantom. The sensitivity and scatter fraction measurements were made using a 450-575 keV energy window, 63.0 cm detector spacing, and 1 mm thick lead filters. The maximum recommended singles rate (full spectrum) for coincidence imaging is {approximately}800 kcps per detector. The 3D reprojection algorithm has been implemented. Example images of the 3D Hoffman brain phantom and patient tumor images are shown.

  12. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1' per pixel, with sky coverage of more than 4' X 4' on the NICMOS 3 (256 X 256 pixels, 40 micrometers side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  13. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    A real-time viewpoint image generation method is achieved. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies to achieve a sense of high reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy to use for real-time and low-latency purposes. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals and convergence obtained by using approximate information of an object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image could be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. The users can see their individual viewpoint image for left-and-right and back-and-forth movement toward the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
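    The core of the method, blending the two nearest camera images with weights given by the viewpoint position, can be sketched in a few lines. Array shapes, camera spacing, and the flat test images are illustrative assumptions, not the prototype's configuration.

```python
import numpy as np

def blend(left, right, alpha):
    """Linear blend of the two nearest camera images; perceived (via the
    depth-fused 3D effect) as an intermediate viewpoint. alpha in [0, 1]."""
    return (1.0 - alpha) * left + alpha * right

def synthesize(cameras, x):
    """cameras: images from equally spaced cameras, left to right.
    x: viewpoint position in camera-index units (1.25 = between cams 1 and 2)."""
    i = min(int(np.floor(x)), len(cameras) - 2)
    return blend(cameras[i], cameras[i + 1], x - i)

# Hypothetical flat test images so the blend weight is easy to read off.
cams = [np.full((4, 4), float(v)) for v in (0.0, 10.0, 20.0, 30.0)]
mid = synthesize(cams, 1.25)      # 0.75*cams[1] + 0.25*cams[2]
```

    The simplicity of this per-pixel blend, with no disparity estimation, is what makes the method cheap enough for real-time full-frame operation.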

  14. Noise evaluation of Compton camera imaging for proton therapy.

    PubMed

    Ortega, P G; Torres-Espallardo, I; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-03-01

    Compton Cameras emerged as an alternative for real-time dose monitoring techniques for Particle Therapy (PT), based on the detection of prompt-gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted onto the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surfaces backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is not absorbed in general. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gamma, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, which lead to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact in the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented.
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is later estimated from the reconstructed image obtained from a two and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which will affect the rate of particles entering the detector. PMID:25658644
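    The reconstruction backbone mentioned above, Maximum Likelihood Expectation Maximization (MLEM), can be sketched on a toy system matrix. This generic forward model is illustrative only; the FLUKA-simulated detector, the two- and three-event algorithms, and the spectral reconstruction in the paper are far richer.

```python
import numpy as np

def mlem(A, counts, iterations=2000):
    """MLEM update: x_j <- x_j * sum_i A_ij (counts_i / (A x)_i) / sum_i A_ij."""
    x = np.ones(A.shape[1])                     # flat initial emitter map
    sensitivity = A.sum(axis=0)
    for _ in range(iterations):
        expected = A @ x
        ratio = counts / np.maximum(expected, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Toy system matrix: A[i, j] = probability that an emission in voxel j is
# recorded in measurement bin i (values invented for illustration).
A = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
true_x = np.array([2.0, 5.0, 1.0])
counts = A @ true_x                             # noiseless measurements
est = mlem(A, counts)
```

    Misidentified events (neutrons, random coincidences) effectively corrupt `counts`, which is how the noise sources studied in the paper propagate into the reconstructed emitter map and the range estimate.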

  15. A novel ultra-high speed camera for digital image processing applications

    Microsoft Academic Search

    A. Hijazi; V. Madhavan

    2008-01-01

    Multi-channel gated-intensified cameras are commonly used for capturing images at ultra-high frame rates. The use of image intensifiers reduces the image resolution and increases the error in applications requiring high-quality images, such as digital image correlation. We report the development of a new type of non-intensified multi-channel camera system that permits recording of image sequences at ultra-high frame rates at

  16. In-flight calibration of the Cassini imaging science sub-system cameras

    E-print Network

    Throop, Henry

    We describe in-flight calibration of the Cassini imaging science sub-system (ISS). The ISS consists of two cameras

  17. AN INVESTIGATION INTO ALIASING IN IMAGES RECAPTURED FROM AN LCD MONITOR USING A DIGITAL CAMERA

    E-print Network

    Dragotti, Pier Luigi

    Images may be recaptured from displays, such as an LCD monitor, using a digital still camera and professional image editing software. One approach to detecting an image that has been recaptured from an LCD monitor is to search for the presence

  18. LROC WAC 100 Meter Scale Photometrically Normalized Map of the Moon

    NASA Astrophysics Data System (ADS)

    Boyd, A. K.; Nuno, R. G.; Robinson, M. S.; Denevi, B. W.; Hapke, B. W.

    2013-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) monthly global observations allowed derivation of a robust empirical photometric solution over a broad range of incidence, emission and phase (i, e, g) angles. Combining the WAC stereo-based GLD100 [1] digital terrain model (DTM) and LOLA polar DTMs [2] enabled precise topographic corrections to photometric angles. Over 100,000 WAC observations at 643 nm were calibrated to reflectance (I/F). Photometric angles (i, e, g), latitude, and longitude were calculated and stored for each WAC pixel. The 6-dimensional data set was then reduced to 3 dimensions by photometrically normalizing I/F with a global solution similar to [3]. The global solution was calculated from three 2°x2° tiles centered on (1°N, 147°E), (45°N, 147°E), and (89°N, 147°E), and included over 40 million WAC pixels. A least squares fit to a multivariate polynomial of degree 4 (f(i,e,g)) was performed, and the result was the starting point for a minimum search solving the non-linear function min[{1-[ I/F / f(i,e,g)] }2]. The input pixels were filtered to incidence angles (calculated from topography) < 89° and I/F greater than a minimum threshold to avoid shadowed pixels, and the output normalized I/F values were gridded into an equal-area map projection at 100 meters/pixel. At each grid location the median, standard deviation, and count of valid pixels were recorded. The normalized reflectance map is the result of the median of all normalized WAC pixels overlapping that specific 100-m grid cell. There are an average of 86 WAC normalized I/F estimates at each cell [3]. The resulting photometrically normalized mosaic provides the means to accurately compare I/F values for different regions on the Moon (see Nuno et al. [4]). 
    The subtle differences in normalized I/F can now be traced across the local topography at regions that are illuminated at any point during the LRO mission (while the WAC was imaging), including at polar latitudes. This continuous map of reflectance at 643 nm, normalized to a standard geometry of i=30°, e=0°, g=30°, ranges from 0.036 to 0.36 (0.01%-99.99% of the histogram) with a global mean reflectance of 0.115. Immature rays of Copernican craters are typically >0.14 and maria are typically <0.07, with averages for individual maria ranging from 0.046 to 0.060. The materials with the lowest normalized reflectance on the Moon are pyroclastic deposits at Sinus Aestuum (<0.036) and those with the highest normalized reflectance are found on steep crater walls (>0.36) [4].
    1. Scholten et al. (2012) J. Geophys. Res., 117, doi:10.1029/2011JE003926.
    2. Smith et al. (2010) Geophys. Res. Lett., 37, L18204, doi:10.1029/2010GL043751.
    3. Boyd et al. (2012) LPSC XLIII, #2795.
    4. Nuno et al., AGU (this conference).
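    The normalization mechanics described above (fit f(i,e,g) by least squares, then rescale each I/F to the standard geometry i=30, e=0, g=30) can be sketched with synthetic data. The photometric function below is a deliberately polynomial, hypothetical stand-in for WAC measurements so that the degree-4 fit is exact; only the fitting and normalization steps follow the text, not the minimum-search refinement or the real angle distributions.

```python
import numpy as np

def design(i, e, g, degree=4):
    """All monomials i^a e^b g^c with a+b+c <= degree (angles pre-scaled)."""
    i, e, g = np.atleast_1d(i) / 90.0, np.atleast_1d(e) / 90.0, np.atleast_1d(g) / 120.0
    cols = [i**a * e**b * g**c
            for a in range(degree + 1)
            for b in range(degree + 1 - a)
            for c in range(degree + 1 - a - b)]
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(3)
n = 5000
inc = rng.uniform(0.0, 80.0, n)     # incidence angle (deg)
emi = rng.uniform(0.0, 60.0, n)     # emission angle
pha = rng.uniform(5.0, 100.0, n)    # phase angle
# Hypothetical stand-in for calibrated I/F observations (total degree 4,
# so the least-squares fit below recovers it exactly):
iof = (1.0 - (inc / 90.0) ** 2) * (1.0 - pha / 200.0) * (1.0 - 0.002 * emi)

coef, *_ = np.linalg.lstsq(design(inc, emi, pha), iof, rcond=None)
f = lambda i, e, g: design(i, e, g) @ coef

# Photometric normalization to the standard geometry i=30, e=0, g=30:
iof_norm = iof / f(inc, emi, pha) * f(30.0, 0.0, 30.0)
```

    With real WAC data the fit leaves residuals, which is why the text follows the polynomial fit with a non-linear minimum search and then grids the normalized values into 100-m cells by median.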

  19. Embedded image enhancement for high-throughput cameras

    NASA Astrophysics Data System (ADS)

    Geerts, Stan J. C.; Cornelissen, Dion; de With, Peter H. N.

    2014-03-01

    This paper presents image enhancement for a novel Ultra-High-Definition (UHD) video camera offering 4K images and higher. Conventional image enhancement techniques need to be reconsidered for the high-resolution images and the low-light sensitivity of the new sensor. We study two image enhancement functions and evaluate and optimize the algorithms for embedded implementation in programmable logic (FPGA). The enhancement study involves high-quality Auto White Balancing (AWB) and Local Contrast Enhancement (LCE). We have compared multiple algorithms from literature, both with objective and subjective metrics. In order to objectively compare Local Contrast (LC), an existing LC metric is modified for LC measurement in UHD images. For AWB, we have found that color histogram stretching offers a subjective high image quality and it is among the algorithms with the lowest complexity, while giving only a small balancing error. We impose a color-to-color gain constraint, which improves robustness of low-light images. For local contrast enhancement, a combination of contrast preserving gamma and single-scale Retinex is selected. A modified bilateral filter is designed to prevent halo artifacts, while significantly reducing the complexity and simultaneously preserving quality. We show that by cascading contrast preserving gamma and single-scale Retinex, the visibility of details is improved towards the level appropriate for high-quality surveillance applications. The user is offered control over the amount of enhancement. Also, we discuss the mapping of those functions on a heterogeneous platform to come to an effective implementation while preserving quality and robustness.
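    Color histogram stretching with a color-to-color gain constraint, the AWB choice described above, can be sketched as follows. The percentile choices and the gain-ratio bound are assumed values for illustration, not the paper's tuned parameters.

```python
import numpy as np

def awb_histogram_stretch(img, low_pct=1.0, high_pct=99.0, max_gain_ratio=1.5):
    """Per-channel histogram stretching for white balance on a float RGB image.
    Channel gains are capped relative to the smallest gain (the color-to-color
    gain constraint), which improves robustness on low-light images."""
    lows, gains = [], []
    for c in range(3):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        lows.append(lo)
        gains.append(1.0 / max(hi - lo, 1e-6))
    gains = np.array(gains)
    gains = np.minimum(gains, gains.min() * max_gain_ratio)  # constrain gains
    out = np.stack([(img[..., c] - lows[c]) * gains[c] for c in range(3)], axis=-1)
    return np.clip(out, 0.0, 1.0)

# A grey ramp with a blue-deficient cast: stretching rebalances the channels,
# but the constraint keeps the blue gain from running away.
ramp = np.linspace(0.05, 0.9, 64)
img = np.stack([ramp, ramp, 0.6 * ramp], axis=-1).reshape(8, 8, 3)
balanced = awb_histogram_stretch(img)
```

    Without the gain cap, a weak channel in a dark scene can be amplified into visible color noise; capping trades perfect balance for stability.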

  20. Single-Shot Image Deblurring with Modified Camera Optics Yosuke Bando

    E-print Network

Single-Shot Image Deblurring with Modified Camera Optics, by Yosuke Bando. A Doctoral Dissertation. Abstract: The recent rapid popularization of digital cameras allows people to capture a large number ... motion blur is only avoided by increasing the shutter speed and sensor sensitivity when a camera detects ...

  1. Replacing 16-mm airborne film cameras with commercial-off-the-shelf (COTS) digital imaging

    NASA Astrophysics Data System (ADS)

    Balch, Kris S.

    1998-08-01

For many years, 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, in airborne weapon testing, in range tracking, and in other hazardous environments. The companies and government agencies using these cameras need to replace them with a more cost-effective solution. Film-based cameras still produce the best resolving capability; however, film development time, chemical disposal, non-optimal lighting conditions, recurring media cost, and faster digital analysis are factors driving the desire for a 16 mm film camera replacement. This paper describes a new imager from Kodak that has been designed to replace 16 mm high-speed film cameras, and includes a detailed configuration, operational scenario, and cost analysis of Kodak's imager for airborne applications. The KODAK EKTAPRO RO Imager is a high-resolution color or monochrome CCD camera especially designed as a replacement for rugged high-speed film cameras. The RO Imager is a record-only camera. It features a high-resolution (512 x 384), light-sensitive CCD sensor with an electronic shutter. This shutter provides blooming protection that prevents 'smearing' of bright light sources, e.g., when the camera looks into a bright sun reflection. The RO Imager is a very rugged camera packaged in a highly integrated housing, and it operates off +28 VDC. The RO Imager has an interface and form factor similar to those of high-speed film cameras, e.g., the Photonics 1B. The RO Imager is designed to replace 16 mm film cameras that support rugged testing applications.

  2. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO standards for resolution, well established for consumer digital still cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  3. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection.

    PubMed

    Chai, Kil-Byoung; Bellan, Paul M

    2013-12-01

An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs. PMID:24387431

  4. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  5. ROBUST THRESHOLDING BASED ON WAVELETS AND THINNING ALGORITHMS FOR DEGRADED CAMERA IMAGES

    E-print Network

    Dupont, Stéphane

ROBUST THRESHOLDING BASED ON WAVELETS AND THINNING ALGORITHMS FOR DEGRADED CAMERA IMAGES ... acquired from a low-resolution camera. This technique is based on wavelet denoising and global thresholding ... Keywords: thresholding, denoising, degraded documents. 1. INTRODUCTION 1.1. Context of Images As the first step

  6. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    Microsoft Academic Search

    Andrew Y. Cheng; C. Y. Pau

    1996-01-01

A high-resolution computer-controlled CCD image capturing system is developed by using a 12-bit, 1024 × 1024 pixel CCD camera and motorized RGB filters to capture an image with color depth up to 36 bits. The filters distinguish the major components of color and collect them separately, while the CCD camera maintains the spatial resolution and detector filling factor.
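The 36-bit acquisition scheme amounts to stacking three sequential 12-bit exposures taken through the R, G and B filters. A minimal sketch with synthetic sensor data (the bit depth and array size come from the abstract; everything else is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
# Three sequential 12-bit exposures through motorized R, G, B filters
# (synthetic data standing in for reads of the 1024 x 1024 monochrome CCD).
channels = [rng.integers(0, 4096, size=(1024, 1024), dtype=np.uint16)
            for _ in range(3)]

# Stack into one 36-bit-deep color image (12 bits per channel).
rgb36 = np.stack(channels, axis=-1)

# Scale down to 8 bits per channel for preview/display only.
preview = (rgb36 >> 4).astype(np.uint8)
```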

  7. Single-Sensor Camera Image Processing RASTISLAV LUKAC and KONSTANTINOS N. PLATANIOTIS

    E-print Network

    Plataniotis, Konstantinos N.

Chapter 16. Single-Sensor Camera Image Processing. RASTISLAV LUKAC and KONSTANTINOS N. PLATANIOTIS ... it is the extreme and still increasing popularity of consumer single-sensor digital cameras which boosts today's research activities in the field of digital color image acquisition, processing, and storage. Single

  8. The CCD imager electronics for the Mars pathfinder and Mars surveyor cameras

    Microsoft Academic Search

    J. Rainer Kramm; Nicolas Thomas; H. Uwe Keller; Peter H. Smith

    1998-01-01

The Mars Pathfinder stereo camera and both cameras on the Mars Surveyor lander use CCD detectors for image acquisition. The frame-transfer CCDs were produced by Loral for space applications under contract from MPAE. A detector consists of two sections of 256 lines and 512 columns each. Pixels in the image section contain an anti-blooming structure to remove excessive

  9. Subnanosecond time-resolved imaging using a rf phase-sensitive image converter camera

    Microsoft Academic Search

    Klaus W. Berndt; Joseph R. Lakowicz

    1993-01-01

Common high-speed gated proximity-focused multichannel-plate image intensifiers allow for a typical gate width of 3 to 5 ns. We have studied an alternative way to accomplish subnanosecond time-resolved imaging by operating a gatable proximity-focused intensifier as a radio-frequency phase-sensitive camera. In this operating mode, we apply a dc bias voltage between the photocathode and the microchannel

  10. Image matching robust to changes in imaging conditions with a car-mounted Camera

    Microsoft Academic Search

    Naoko Enami; Norimichi Ukita; Masatsugu Kidode

    2008-01-01

In this paper, we propose a matching method for images captured at different times and under different capturing conditions. Our method is designed for change detection in streetscapes using normal automobiles that have an off-the-shelf car-mounted camera and a GPS. Therefore, we should analyze low-resolution and low-frame-rate images captured asynchronously. To cope with this difficulty, previous and

  11. Image Rectification for Robust Matching of Car-mounted Camera Images

    Microsoft Academic Search

    Naoko Enami; Norimichi Ukita; Masatsugu Kidode

    2009-01-01

We propose a matching method for images captured at different times and under different capturing conditions. Our method is designed for change detection in streetscapes using normal automobiles that have an off-the-shelf car-mounted camera and a GPS. Therefore, we should analyze low-resolution and low-frame-rate images captured asynchronously. To cope with this difficulty, previous and current

  12. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

The Earth's digital elevation, which impacts space camera imaging, has been prepared and its effect on imaging analysed. Based on the image-motion velocity matching error required by the TDI CCD integration stages, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation in an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data, and the Earth elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation data can be fetched from flash memory according to the latitude and longitude of the shooting point. When a query falls between two stored data points, linear interpolation is used; linear interpolation better accommodates the changing terrain of rugged mountains and hills. Finally, the deviant framework and camera controller are used to test the character of deviant-angle errors, and a TDI CCD camera simulation system, based on a model that maps material points to imaging points, is used to analyze the imaging MTF and a mutual-correlation similarity measure; the simulation system accumulates the horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel to simulate camera imaging as satellite attitude stability changes. This process is practical: it can effectively control the camera memory space while achieving very good precision in matching the TDI CCD camera to the image-motion velocity.
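The elevation lookup with linear interpolation between stored samples can be sketched as follows. The 1-D profile and its sample values are hypothetical, and a real implementation would interpolate over a 2-D latitude/longitude grid read from flash:

```python
import numpy as np

def elevation_at(lat, lats, elevs):
    """Linearly interpolate elevation when the query falls between samples."""
    return float(np.interp(lat, lats, elevs))

# Hypothetical along-track profile of stored SRTM-derived elevation samples.
lats = np.array([10.0, 10.5, 11.0, 11.5])
elevs = np.array([1200.0, 1350.0, 1100.0, 900.0])   # meters

h = elevation_at(10.25, lats, elevs)  # midway between the 1200 m and 1350 m samples
```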

  13. Gamma camera-mounted anatomical X-ray tomography: technology, system characteristics and first images

    Microsoft Academic Search

    Moshe Bocher; Adi Balan; Yodphat Krausz; Yigal Shrem; Albert Lonn; Michael Wilk; Roland Chisin

    2000-01-01

Scintigraphic diagnosis, based on functional image interpretation, becomes more accurate and meaningful when supported by corresponding anatomical data. In order to produce anatomical images that are inherently registered with images of emission computerised tomography acquired with a gamma camera, an X-ray transmission system was mounted on the slip-ring gantry of a GEMS Millennium VG gamma camera. The X-ray imaging

  14. Lifetime-selective fluorescence imaging using an rf phase-sensitive camera

    Microsoft Academic Search

    Joseph R. Lakowicz; Klaus W. Berndt

    1991-01-01

    We report the creation of two-dimensional fluorescence lifetime images, based on a sinusoidally modulated image intensifier that is operated as a radio-frequency phase-sensitive camera, synchronized to a mode-locked and cavity dumped picosecond dye laser. By combining the image intensifier with a CCD camera and applying digital image processing, lifetime-selective signal suppression can be realized even for fluorophores with comparable lifetimes.

  15. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either by direct measurement of subjective quality or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
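Photospace-weighting of objective quality, as described above, is simply a weighted average of per-condition quality scores over the photospace distribution. A minimal sketch, with invented numbers for both the distribution and the scores:

```python
import numpy as np

# Hypothetical photospace distribution: relative frequency of shots in
# (illumination, distance) bins; rows = light level, cols = subject distance.
photospace = np.array([[0.30, 0.10],    # low light:  close, far
                       [0.25, 0.35]])   # daylight:   close, far
photospace /= photospace.sum()          # normalize to a probability mass

# Hypothetical objective quality score measured in each capture condition.
quality = np.array([[2.1, 1.4],
                    [4.0, 3.6]])

# User-experienced quality = photospace-weighted mean of per-condition quality.
experienced_quality = float((photospace * quality).sum())
```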

  16. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  17. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

There are many methods for acquiring multispectral images, but a dynamically band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image-signal processing; firmware and application software were also developed. Remarkable improvements compared to a conventional 3CCD camera are its redesigned image splitter and the filter-exchangeable frame. Computer simulation is required to visualize the ray path inside the prism when redesigning the image splitter; the dimensions of the splitter are then determined by simulation, with BK7 glass and a non-dichroic coating as options, chosen so that full-wavelength rays reach all film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without displacing the image sensors on the film plane. The developed 3CCD camera was evaluated in an application detecting scab and bruises on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that need to exchange bandpass filters.

  18. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  19. The Potential of Dual Camera Systems for Multimodal Imaging of Cardiac Electrophysiology and Metabolism

    PubMed Central

    Holcomb, Mark R.; Woods, Marcella C.; Uzelac, Ilija; Wikswo, John P.; Gilligan, Jonathan M.; Sidorov, Veniamin Y.

    2013-01-01

    Fluorescence imaging has become a common modality in cardiac electrodynamics. A single fluorescent parameter is typically measured. Given the growing emphasis on simultaneous imaging of more than one cardiac variable, we present an analysis of the potential of dual camera imaging, using as an example our straightforward dual camera system that allows simultaneous measurement of two dynamic quantities from the same region of the heart. The advantages of our system over others include an optional software camera calibration routine that eliminates the need for precise camera alignment. The system allows for rapid setup, dichroic image separation, dual-rate imaging, and high spatial resolution, and it is generally applicable to any two-camera measurement. This type of imaging system offers the potential for recording simultaneously not only transmembrane potential and intracellular calcium, two frequently measured quantities, but also other signals more directly related to myocardial metabolism, such as [K+]e, NADH, and reactive oxygen species, leading to the possibility of correlative multimodal cardiac imaging. We provide a compilation of dye and camera information critical to the design of dual camera systems and experiments. PMID:19657065

  20. Experimental research on thermoelectric cooler for imager camera thermal control

    NASA Astrophysics Data System (ADS)

    Hu, Bing-ting; Kang, Ao-feng; Fu, Xin; Jiang, Shi-chen; Dong, Yao-hai

    2013-09-01

Conventional passive thermal design failed to satisfy the CCDs' temperature requirement on a geostationary-Earth-orbit satellite imager camera because of their high power and low working temperature, leading to the use of a thermoelectric cooler (TEC) for heat dissipation. The TEC was used in conjunction with the external radiator in the CCDs' thermal design. In order to maintain the CCDs at a low working temperature, experimental research on the performance of the thermoelectric cooler was necessary, and the results could guide the application of TECs in different conditions. An experimental system to evaluate the performance of the TEC was designed and built, consisting of the TEC, a heat pipe, a TEC mounting plate, a radiator and a heater. A series of TEC performance tests were conducted on domestic and overseas TECs in a thermal vacuum environment. The effects of the TEC's mounting, input power and heat load on the temperature difference between the TEC's cold and hot faces were explored. Results demonstrated that this temperature difference increased only slightly once the TEC's operating voltage reached 80% of the rated voltage, which caused the temperature of the TEC's hot face to rise; it is therefore recommended that the TEC operate at low voltage. Based on the experimental results, thermal analysis indicated that the temperature difference between the TEC's cold and hot faces could satisfy the temperature requirement with margin to spare.

  1. Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie

    2011-01-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
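A threshold-based cloud mask of the kind described, applying cutoffs to reflected radiances and to radiance ratios, might be sketched as follows. The band choices and threshold values here are purely illustrative assumptions, not the EPIC algorithm's actual values:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic reflectances standing in for EPIC UV and visible channels.
vis_refl = rng.uniform(0.0, 1.0, size=(64, 64))
uv_refl = rng.uniform(0.0, 1.0, size=(64, 64))

VIS_THRESH = 0.3    # assumed reflectance threshold over a dark ocean background
RATIO_THRESH = 1.5  # assumed UV/visible ratio threshold

ratio = uv_refl / np.clip(vis_refl, 1e-6, None)
# A pixel is flagged cloudy if either test fires; a real mask adds scene logic
# (land vs. ocean surface type, sun-view geometry, etc.).
cloudy = (vis_refl > VIS_THRESH) | (ratio > RATIO_THRESH)
```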

  2. Fluorine-18-fluorodeoxyglucose cardiac imaging using a modified scintillation camera.

    PubMed

    Sandler, M P; Bax, J J; Patton, J A; Visser, F C; Martin, W H; Wijns, W

    1998-12-01

Conventional 201Tl and hexakis 2-methoxy-2-isobutyl isonitrile studies are less accurate than FDG PET in the prediction of functional recovery after revascularization in patients with injured but viable myocardium. The introduction of a dual-head, variable-angle-geometry scintillation camera equipped with thicker crystals (5/8 in.) and high-resolution, ultrahigh-energy collimators capable of 511 keV imaging has permitted FDG SPECT to provide information equivalent to that of PET for the detection of injured but viable myocardium in patients with chronic ischemic heart disease. The development of standardized glucose-loading protocols, including glucose-insulin-potassium infusion and the potential use of nicotinic acid derivatives, has simplified the method of obtaining consistently good-to-excellent quality FDG SPECT cardiac studies. FDG SPECT may become the modality of choice for evaluating injured but viable myocardium because of the enhanced availability of FDG, logistics, patient convenience, accuracy and cost-effectiveness compared to PET. PMID:9867138

  3. High-resolution position-sensitive proportional counter camera for radiochromatographic imaging

    Microsoft Academic Search

    D. D. Schuresko; M. K. Kopp; J. A. Harter; W. D. Bostick

    1988-01-01

A high-resolution proportional counter camera for imaging two-dimensional (2-D) distributions of radionuclides is described. The camera can accommodate wet or dry samples that are separated from the counter gas volume by a 6-μm Mylar membrane. Using 95% Xe-5% CO₂ gas at 3-MPa pressure and electronic collimation based upon pulse-energy discrimination, the camera's performance characteristics for ¹⁴C distributions are

  4. P2C2: Programmable pixel compressive camera for high speed imaging

    Microsoft Academic Search

    Dikpal Reddy; Ashok Veeraraghavan; Rama Chellappa

    2011-01-01

    We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame-rate. The observed intensity at a pixel is an integration of the

  5. EDGE DETECTION PERFORMANCE IN SUPER-RESOLUTION IMAGE RECONSTRUCTION FROM CAMERA ARRAYS

    E-print Network

    Rajan, Dinesh

EDGE DETECTION PERFORMANCE IN SUPER-RESOLUTION IMAGE RECONSTRUCTION FROM CAMERA ARRAYS. Sally L. ... ABSTRACT: Previous work has shown that for super-resolution image reconstruction from low-resolution images ... the behavior of edge errors and intensity errors for super-resolution image reconstruction applications

  6. LROC NAC Photometry as a Tool for Studying Physical and Compositional Properties of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Clegg, R. N.; Jolliff, B. L.; Boyd, A. K.; Stopar, J. D.; Sato, H.; Robinson, M. S.; Hapke, B. W.

    2014-10-01

    LROC NAC photometry has been used to study the effects of rocket exhaust on lunar soil properties, and here we apply the same photometric methods to place compositional constraints on regions of silicic volcanism and pure anorthosite on the Moon.

  7. Pantir - a Dual Camera Setup for Precise Georeferencing and Mosaicing of Thermal Aerial Images

    NASA Astrophysics Data System (ADS)

    Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.

    2015-03-01

    Research and monitoring in fields like hydrology and agriculture are applications of airborne thermal infrared (TIR) cameras, which suffer from low spatial resolution and low quality lenses. Common ground control points (GCPs), lacking thermal activity and being relatively small in size, cannot be used in TIR images. Precise georeferencing and mosaicing however is necessary for data analysis. Adding a high resolution visible light camera (VIS) with a high quality lens very close to the TIR camera, in the same stabilized rig, allows us to do accurate geoprocessing with standard GCPs after fusing both images (VIS+TIR) using standard image registration methods.
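Registering the low-resolution TIR frame to the high-resolution VIS frame captured in the same rig is an image-registration task; the abstract only says "standard image registration methods" are used, so as one illustrative possibility, a purely translational offset can be recovered by phase correlation, sketched here with synthetic data:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) translation that maps mov onto ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap peaks past the midpoint around to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(3)
vis = rng.random((128, 128))                    # stand-in for the VIS frame
tir = np.roll(vis, shift=(3, -2), axis=(0, 1))  # TIR frame offset by (3, -2)

dy, dx = phase_correlation_shift(vis, tir)
registered = np.roll(tir, shift=(dy, dx), axis=(0, 1))  # TIR in the VIS frame
```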

  8. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
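The decoding of a plenoptic sensor image into a 4D light field, with lenslet index as the spatial coordinate and within-lenslet pixel index as the angular coordinate, can be sketched as an array reshape. The lenslet count and patch size here are arbitrary assumptions, and a real camera additionally requires calibration of the lenslet centers:

```python
import numpy as np

# Hypothetical plenoptic layout: 8x8 lenslets, each covering a 6x6 pixel
# patch on the sensor.
N_LENS, N_PIX = 8, 6
sensor = np.arange((N_LENS * N_PIX) ** 2, dtype=float).reshape(
    N_LENS * N_PIX, N_LENS * N_PIX)

# Split each sensor axis into (lenslet index, pixel-within-lenslet index),
# then group the two lenslet axes (spatial) ahead of the two pixel axes
# (angular): lf[s, t, u, v] = sensor[s*N_PIX + u, t*N_PIX + v].
lf = sensor.reshape(N_LENS, N_PIX, N_LENS, N_PIX).transpose(0, 2, 1, 3)
# lf[s, t, u, v]: radiance at spatial position (s, t), direction (u, v).
```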

  9. Automated camera calibration for image-guided surgery using intensity-based registration

    Microsoft Academic Search

    Daniel Rueckert; Calvin R. Maurer

    2002-01-01

    In this paper we present a novel approach for the calibration of video cameras in an augmented reality image-guided surgery system. Whereas most calibration algorithms rely on the extraction of features such as points or lines, our proposed calibration algorithm determines the intrinsic and extrinsic camera calibration parameters by maximising the similarity between the real view of a calibration object

  10. Correction of Spatially Varying Image and Video Motion Blur using a Hybrid Camera

    E-print Network

    Washington at Seattle, University of

... low-resolution images yields high-frequency ... a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower

  11. IMAGE SPLICING DETECTION USING CAMERA RESPONSE FUNCTION CONSISTENCY AND AUTOMATIC SEGMENTATION

    E-print Network

    Chang, Shih-Fu

IMAGE SPLICING DETECTION USING CAMERA RESPONSE FUNCTION CONSISTENCY AND AUTOMATIC SEGMENTATION. Yu ... areas. One camera response function (CRF) is estimated from each area using geometric invariants from locally planar irradiance points (LPIPs). To classify a boundary segment between two areas as authentic

  12. Characterization of a direct detection device imaging camera for transmission electron microscopy.

    PubMed

    Milazzo, Anna-Clare; Moldovan, Grigore; Lanman, Jason; Jin, Liang; Bouwer, James C; Klienfelder, Stuart; Peltier, Steven T; Ellisman, Mark H; Kirkland, Angus I; Xuong, Nguyen-Huu

    2010-06-01

    The complete characterization of a novel direct detection device (DDD) camera for transmission electron microscopy is reported, for the first time at primary electron energies of 120 and 200 keV. Unlike a standard charge coupled device (CCD) camera, this device does not require a scintillator. The DDD transfers signal up to 65 lines/mm providing the basis for a high-performance platform for a new generation of wide field-of-view high-resolution cameras. An image of a thin section of virus particles is presented to illustrate the substantially improved performance of this sensor over current indirectly coupled CCD cameras. PMID:20382479

  13. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    NASA Astrophysics Data System (ADS)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing the RVD mission in manned space engineering, especially when the automatic control system is out of work. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera, from which the relative position and attitude of the chase and target spacecraft can be seen. Therefore, the size, position, brightness and shadow of the target in the TV camera image are key to guaranteeing the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and light conditions during the process of RVD is discussed. Firstly, the basic principle of capturing the image of the cross drone on the target spacecraft by TV camera is analyzed theoretically, based on which the strategy of manual-control RVD is discussed in detail. Secondly, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented, the brightness of and reflection by the target spacecraft at different light conditions are described, and the shadow on the cross drone caused by the chase or target spacecraft is analyzed. Thirdly, a prediction method for on-orbit TV camera images at a given orbit and light condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness and shadow of the target spacecraft in the TV camera image at a typical orbit are simulated. The result, obtained by comparing the simulated images with the real images captured by the TV camera on the Shenzhou manned spaceship, shows that the prediction method is reasonable.

  14. Hubble Space Telescope Planetary Camera Images of NGC 1316

    E-print Network

    Edward J. Shaya; Daniel M. Dowling; Douglas G. Currie; S. M. Faber; Edward A. Ajhar; Tod R. Lauer; Edward J. Groth; Carl J. Grillmair; C. Roger Lynds; Earl J. O'Neil, Jr

    1996-03-13

    We present HST Planetary Camera V and I~band images of the central region of the peculiar giant elliptical galaxy NGC 1316. The inner profile is well fit by a nonisothermal core model with a core radius of 0.41" +/- 0.02" (34 pc). At an assumed distance of 16.9 Mpc, the deprojected luminosity density reaches $\sim 2.0 \times 10^3 L_{\sun}$ pc$^{-3}$. Outside the inner two or three arcseconds, a constant mass-to-light ratio of $\sim 2.2 \pm 0.2$ is found to fit the observed line width measurements. The line width measurements of the center indicate the existence of either a central dark object of mass $2 \times 10^9 M_{\sun}$, an increase in the stellar mass-to-light ratio by at least a factor of two for the inner few arcseconds, or perhaps increasing radial orbit anisotropy towards the center. The mass-to-light ratio run in the center of NGC 1316 resembles that of many other giant ellipticals, some of which are known from other evidence to harbor central massive dark objects (MDO's). We also examine twenty globular clusters associated with NGC 1316 and report their brightnesses, colors, and limits on tidal radii. The brightest cluster has a luminosity of $9.9 \times 10^6 L_{\sun}$ ($M_V = -12.7$), and the faintest detectable cluster has a luminosity of $2.4 \times 10^5 L_{\sun}$ ($M_V = -8.6$). The globular clusters are just barely resolved, but their core radii are too small to be measured. The tidal radii in this region appear to be $\le$ 35 pc. Although this galaxy seems to have undergone a substantial merger in the recent past, young globular clusters are not detected.

  15. No-reference sharpness assessment of camera-shaken images by analysis of spectral structure.

    PubMed

    Oh, Taegeun; Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad

    2014-12-01

    The tremendous explosion of image-, video-, and audio-enabled mobile devices, such as tablets and smartphones in recent years, has led to an associated dramatic increase in the volume of captured and distributed multimedia content. In particular, the number of digital photographs being captured annually is approaching 100 billion in the U.S. alone. These pictures are increasingly being acquired by inexperienced, casual users under highly diverse conditions, leading to a plethora of distortions, including blur induced by camera shake. In order to be able to automatically detect, correct, or cull images impaired by shake-induced blur, it is necessary to develop distortion models specific to and suitable for assessing the sharpness of camera-shaken images. Toward this goal, we have developed a no-reference framework for automatically predicting the perceptual quality of camera-shaken images based on their spectral statistics. Two kinds of features are defined that capture blur induced by camera shake. One is a directional feature, which measures the variation of the image spectrum across orientations. The second feature captures the shape, area, and orientation of the spectral contours of camera-shaken images. We demonstrate the performance of an algorithm derived from these features on new and existing databases of images distorted by camera shake. PMID:25350928
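
    The directional feature described above can be illustrated with a toy computation: pool log-spectral energy over orientation sectors of the 2-D Fourier spectrum and take the variance across sectors. This is a sketch of the general idea only; the paper's actual feature definitions differ in detail.

```python
import numpy as np

def directional_spectral_variation(img, n_orient=8):
    """Toy directional feature: variance of mean log-spectral energy across
    orientation sectors. Camera-shake blur concentrates spectral energy
    along one direction, inflating this variance. (Illustrative only.)"""
    F = np.fft.fftshift(np.fft.fft2(img))
    mag = np.log1p(np.abs(F))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.arctan2(yy - h / 2, xx - w / 2) % np.pi  # orientation of each bin
    energies = [mag[(theta >= i * np.pi / n_orient) &
                    (theta < (i + 1) * np.pi / n_orient)].mean()
                for i in range(n_orient)]
    return float(np.var(energies))

rng = np.random.default_rng(0)
iso = rng.standard_normal((64, 64))                       # isotropic spectrum
motion = np.tile(rng.standard_normal((1, 64)), (64, 1))   # 1-D, blur-like structure
print(directional_spectral_variation(iso) < directional_spectral_variation(motion))
```

    The strongly oriented image yields a much larger cross-orientation variance than the isotropic noise image, which is the cue the directional feature exploits.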

  16. The high resolution gamma imager (HRGI): a CCD based camera for medical imaging

    NASA Astrophysics Data System (ADS)

    Lees, John. E.; Fraser, George. W.; Keay, Adam; Bassford, David; Ott, Robert; Ryder, William

    2003-11-01

    We describe the High Resolution Gamma Imager (HRGI): a Charge Coupled Device (CCD) based camera for imaging small volumes of radionuclide uptake in tissues. The HRGI is a collimated, scintillator-coated, low cost, high performance imager using low noise CCDs that will complement whole-body imaging Gamma Cameras in nuclear medicine. Using 59.5 keV radiation from a 241Am source we have measured the spatial resolution and relative efficiency of test CCDs from E2V Technologies (formerly EEV Ltd.) coated with Gadox (Gd2O2S(Tb)) layers of varying thicknesses. The spatial resolution degrades from 0.44 to 0.6 mm and the detection efficiency increases (×3) as the scintillator thickness increases from 100 to 500 µm. We also describe our first image using the clinically important isotope 99mTc. The final HRGI will have intrinsic sub-mm spatial resolution (~0.7 mm) and good energy resolution over the energy range 30-160 keV.

  17. 2D Hyperspectral Frame Imager Camera Data in Photogrammetric Mosaicking

    NASA Astrophysics Data System (ADS)

    Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J.; Soukkamäki, J.

    2013-08-01

    A new 2D hyperspectral frame camera system has been developed by VTT (Technical Research Center of Finland) and Rikola Ltd. It is a frame-based, very light camera with an RGB-NIR sensor, suitable for lightweight, cost-effective UAV aircraft. MosaicMill Ltd. has converted the camera data into a format suitable for photogrammetric processing, and the camera's geometric accuracy and stability have been evaluated to guarantee the accuracies required for end-user applications. MosaicMill Ltd. has also applied its EnsoMOSAIC technology to process the hyperspectral data into orthomosaics. This article describes the main steps and results of applying the hyperspectral sensor to orthomosaicking. The most promising results, as well as challenges, in agriculture and forestry are also described.

  18. Development of gamma ray imaging cameras. Progress report for second year

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general-purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image displayed would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general-purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third-year effort. 8 refs.

  19. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument it is also used for the complex navigation of the ion drive spacecraft. The CCD detector with 1024 by 1024 pixels provides the stability for a multiyear mission and its high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will be only completed during the exit phase when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross calibration with the VIR spectrometer that extends into the near IR will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface.
Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  20. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application to our augmented reality visualization system for laparoscopic surgery.
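
    Target registration error, the accuracy measure quoted above, is simply the mean Euclidean distance between corresponding predicted and ground-truth target points. A hedged sketch (point values invented for illustration; the study's actual tracked-tool protocol is not reproduced):

```python
import math

def target_registration_error(pred_pts, true_pts):
    """Mean Euclidean distance (mm) between predicted and ground-truth
    target points: a simplified illustration of the TRE statistic."""
    dists = [math.dist(p, q) for p, q in zip(pred_pts, true_pts)]
    return sum(dists) / len(dists)

# Invented example points in mm, not data from the study.
pred = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
true = [(1.0, 0.0, 0.0), (10.0, 1.0, 0.0)]
print(target_registration_error(pred, true))  # 1.0
```

    A smaller TRE means the calibrated camera model maps tracked 3-D points onto the image more faithfully.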

  1. Low-Light AutoFocus Enhancement for Digital and CellPhone Camera Image Pipelines

    Microsoft Academic Search

    Mark Gamadia; Nasser Kehtarnavaz; Katie Roberts-Hoffman

    2007-01-01

    Images captured by a digital or cell-phone camera in low-light environments usually suffer from a lack of sharpness due to the failure of the camera's passive auto-focus (AF) system to locate the peak in-focus position of a sharpness function that is extracted from the image. In low light, the sharpness function becomes flat, making it quite difficult to locate the peak. In
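
    The passive-AF failure mode described above can be illustrated with a toy argmax search over lens positions. The sharpness curves below are synthetic assumptions, not the authors' data:

```python
def best_focus(positions, sharpness):
    """Passive AF as a simple argmax over a sampled sharpness function."""
    return max(positions, key=sharpness)

positions = list(range(11))  # candidate lens positions

# Well-lit scene: a strongly peaked sharpness curve (assumed shape).
bright = lambda p: 100.0 - (p - 5) ** 2
# Low light: the curve flattens; its curvature (0.01) is now comparable to
# sensor noise, so the argmax becomes unreliable in practice.
dim = lambda p: 10.0 - 0.01 * (p - 5) ** 2

print(best_focus(positions, bright))  # 5
```

    With noise added to the nearly flat `dim` curve, the selected position wanders far from 5, which is the failure the paper's enhancement targets.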

  2. Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging

    Microsoft Academic Search

    G. J. Bluth; J. M. Shannon; I. M. Watson; F. J. Prata; V. J. Realmuto

    2006-01-01

    In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images

  3. Development of an ultra-violet digital camera for volcanic SO 2 imaging

    Microsoft Academic Search

    G. J. S. Bluth; J. M. Shannon; I. M. Watson; A. J. Prata; V. J. Realmuto

    2007-01-01

    In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of

  4. Development of an ultra-violet digital camera for volcanic SO2 imaging

    Microsoft Academic Search

    G. J. S. Bluth; J. M. Shannon; I. M. Watson; A. J. Prata; V. J. Realmuto

    2007-01-01

    In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images
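
    Quantifying SO2 from calibration cells, as the three records above describe, amounts to a linear calibration: fit apparent absorbance against the known cell column amounts, then invert the fit for plume pixels. A minimal sketch with invented numbers (not values from the papers):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Known SO2 cell column amounts (ppm·m) vs measured apparent absorbance.
# All values are invented for illustration.
cells = [0.0, 500.0, 1000.0]
absorb = [0.00, 0.11, 0.22]

slope, intercept = fit_line(cells, absorb)

# Invert the calibration for a plume pixel with measured absorbance 0.165.
plume_column = (0.165 - intercept) / slope
print(round(plume_column, 1))  # 750.0
```

    In practice the camera images the cells in the same frame as the plume, so the calibration tracks changing illumination.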

  5. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component.
Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.

  6. Influence of Non-Linear Image Processing on Spatial Frequency Response of Digital Still Cameras

    Microsoft Academic Search

    Yukio Okano

    1998-01-01

    The spatial frequency response (SFR) for a digital still camera is affected by non-linear image processing. We analyze the influence of image enhancement processing and gamma correction on SFR characteristics by varying the edge chart contrast. SFR expressions depending on the chart image contrast are proposed, as is a measurement method of resolving power based on the slanted edge.
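
    The contrast dependence described above can be seen directly: applying a non-linear gamma curve changes the modulation of an edge by a factor that depends on the edge's original contrast, so a single SFR number no longer characterizes the camera. A small sketch (the gamma value is an assumption, not from the paper):

```python
def gamma_correct(v, g=1 / 2.2):
    """Simple display-gamma transfer curve (assumed exponent)."""
    return v ** g

def michelson(lo, hi):
    """Michelson modulation of an edge between luminances lo and hi."""
    return (hi - lo) / (hi + lo)

# Modulation before vs after gamma correction, for a low-contrast and a
# high-contrast edge chart. Because the curve is non-linear, the ratio of
# output to input modulation depends on the chart contrast, which is why
# contrast-dependent SFR expressions are needed.
for lo, hi in [(0.4, 0.6), (0.1, 0.9)]:
    before = michelson(lo, hi)
    after = michelson(gamma_correct(lo), gamma_correct(hi))
    print(round(before, 3), round(after, 3))
```

    A linear processing chain would scale both charts' modulation by the same factor; the differing ratios here are the signature of the non-linearity.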

  7. Development of Slow Scan Digital CCD Camera for Low light level Image

    Microsoft Academic Search

    Yaoyu Cheng; Yan Hu; Yonghong Li

    This paper studies the development of a low-cost, high-resolution scientific-grade camera for low-light-level imaging whose images can be received by a computer. The main performance parameters and readout driving signals are introduced, and the overall image-acquisition scheme is designed. Using the computer's Enhanced Parallel Port and a pipelined readout method,

  8. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
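
    The lossless/lossy distinction mentioned above hinges on one property: a lossless codec must round-trip the raw image data exactly. A quick sketch using Python's zlib as a stand-in codec (not the algorithms studied for the ESC):

```python
import zlib

# Stand-in for raw image data: a repetitive byte pattern, so it compresses well.
raw = bytes(range(256)) * 64

packed = zlib.compress(raw, level=9)

# Lossless: decompression restores every byte, so the image can be
# reconstructed at viewing time with no quality penalty.
assert zlib.decompress(packed) == raw

# Lossy codecs give up this guarantee to buy a higher compression ratio.
print(len(raw), len(packed))
```

    The trade the ESC study weighs is exactly this: lossless schemes cap the achievable ratio, while lossy schemes trade reconstruction fidelity for storage.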

  9. Validating a texture metric for camera phone images using a texture-based softcopy attribute ruler

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Christoffel, Douglas

    2010-01-01

    Imaging systems in camera phones have image quality limitations attributed to optics, size, and cost constraints. These limitations generally result in unwanted system noise. In order to minimize the image quality degradation, nonlinear noise cleaning algorithms are often applied to the images. However, as the strength of the noise cleaning increases, this often leads to texture degradation. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) has been developing metrics to quantify texture appearance in camera phone images. Initial research established high correlation levels between the metrics and psychophysical data from sets of images that had noise cleaning filtering applied to simulate capture in actual camera phone systems. This paper describes the subsequent work to develop a texture-based softcopy attribute ruler in order to assess the texture appearance of eight camera phone units from four different manufacturers and to assess the efficacy of the texture metrics. Multiple companies participating in the initiative have been using the softcopy ruler approach in order to pool observers and increase statistical significance. Results and conclusions based on three captured scenes and two texture metrics will be presented.

  10. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

    As is well known, the passive THz camera is a very promising tool for security applications: it allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. The efficiency of a passive THz camera depends on its temperature resolution, which determines what can be detected: the minimal size of a concealed object, the maximal detection distance, and the image detail. One possible way to enhance image quality is computer processing of the image. By computer processing of THz images of objects concealed on the human body, the image can be improved many times over; consequently, the effective resolution of the device may be increased without additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by THz cameras do not reveal. We achieve good image quality by applying various spatial filters, with the aim of demonstrating that the processed images do not depend on the particular mathematical operations used. This result demonstrates the feasibility of detecting such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
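
    A median filter is one of the simple spatial filters of the kind the authors apply; a minimal pure-Python sketch (illustrative only, not the authors' exact processing chain):

```python
import statistics

def median_filter(img, k=3):
    """k×k median filter over a 2-D list-of-lists image. Border pixels
    (within k//2 of the edge) are left unchanged for simplicity."""
    r = k // 2
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = [img[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = statistics.median(window)
    return out

# A lone hot pixel (impulse noise) is removed by the median.
frame = [[0] * 5 for _ in range(5)]
frame[2][2] = 100
print(median_filter(frame)[2][2])  # 0
```

    Median filtering suppresses impulse-like sensor noise while preserving edges better than linear smoothing, which is why it is a common first step on noisy raw frames.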

  11. Fast image acquisition and processing on a TV camera-based portal imaging system.

    PubMed

    Baier, Kurt; Meyer, Jürgen

    2005-01-01

    The present paper describes the fast acquisition and processing of portal images directly from a TV camera-based portal imaging device (Siemens Beamview Plus). This approach employs not only the hardware and software included in the standard package installed by the manufacturer (in particular the frame grabber card and the Matrox Intellicam interpreter software), but also a software tool developed in-house for further processing and analysis of the images. The technical details are presented, including the source code for the Matrox interpreter script that enables the image capturing process. With this method it is possible to obtain raw images directly from the frame grabber card at an acquisition rate of 15 images per second. The original configuration by the manufacturer allows the acquisition of only a few images over the course of a treatment session. The approach has a wide range of applications, such as quality assurance (QA) of the radiation beam, real-time imaging, real-time verification of intensity-modulated radiation therapy (IMRT) fields, and generation of movies of the radiation field (fluoroscopy mode). PMID:16008082

  12. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  13. A Low-Cost Imaging Method to Avoid Hand Shake Blur for Cell Phone Cameras

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; Chong, Jong-Wha

    In this letter, a novel imaging method to reduce the hand shake blur of a cell phone camera without using frame memory is proposed. The method improves the captured image in real time through the use of two additional preview images whose parameters can be calculated in advance and stored in a look-up table. The method does not require frame memory, and thus it can significantly reduce the chip size. The scheme is suitable for integration into a low-cost image sensor of a cell phone camera.

  14. Proposal for real-time terahertz imaging system with palm-size terahertz camera and compact quantum cascade laser

    E-print Network

    Oda, Naoki

    This paper describes a real-time terahertz (THz) imaging system, using the combination of a palm-size THz camera with a compact quantum cascade laser (QCL). The THz camera contains a 320x240 microbolometer focal plane array ...

  15. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T. [LLNL; Palmer, N. E. [LLNL; Schneider, M. B. [LLNL; Bell, P. M. [LLNL; Sims, G. [Spectral Instruments; Toerne, K. [Spectral Instruments; Rodenburg, K. [Spectral Instruments; Croft, M. [Spectral Instruments; Haugh, M. J. [NSTec; Charest, M. R. [NSTec; Romano, E. D. [NSTec; Jacoby, K. D. [NSTec

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  16. Evaluation of detector material and radiation source position on Compton camera's ability for multitracer imaging.

    PubMed

    Uche, C Z; Round, W H; Cree, M J

    2012-09-01

    We present a study on the effects of detector material, radionuclide source and source position on the Compton camera aimed at realistic characterization of the camera's performance in multitracer imaging as it relates to brain imaging. The GEANT4 Monte Carlo simulation software was used to model the physics of radiation transport and interactions with matter. Silicon (Si) and germanium (Ge) detectors were evaluated for the scatterer, and cadmium zinc telluride (CZT) and cerium-doped lanthanum bromide (LaBr(3):Ce) were considered for the absorber. Image quality analyses suggest that the use of Si as the scatterer and CZT as the absorber would be preferred. Nevertheless, two simulated Compton camera models (Si/CZT and Si/LaBr(3):Ce Compton cameras) that are considered in this study demonstrated good capabilities for multitracer imaging in that four radiotracers within the nuclear medicine energy range are clearly visualized by the cameras. It is found however that beyond a range difference of about 2 cm for 113mIn and 18F radiotracers in a brain phantom, there may be a need to rotate the Compton camera for efficient brain imaging. PMID:22829298

  17. X-ray imaging using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Winch, N. M.; Edgar, A.

    2011-10-01

    The recent advancements in consumer-grade digital camera technology and the introduction of high-resolution, high sensitivity CsBr:Eu2+ storage phosphor imaging plates make possible a new cost-effective technique for X-ray imaging. The imaging plate is bathed with red stimulating light by high-intensity light-emitting diodes, and the photostimulated image is captured with a digital single-lens reflex (SLR) camera. A blue band-pass optical filter blocks the stimulating red light but transmits the blue photostimulated luminescence. Using a Canon D5 Mk II camera and an f1.4 wide-angle lens, the optical image of a 240×180 mm² Konica CsBr:Eu2+ imaging plate from a position 230 mm in front of the camera lens can be focussed so as to laterally fill the 35×23.3 mm² camera sensor, and recorded in 2808×1872 pixel elements, corresponding to an equivalent pixel size on the plate of 88 µm. The analogue-to-digital conversion from the camera electronics is 13 bits, but the dynamic range of the imaging system as a whole is limited in practice by noise to about 2.5 orders of magnitude. The modulation transfer function falls to 0.2 at a spatial frequency of 2.2 line pairs/mm. The limiting factor of the spatial resolution is light scattering in the plate rather than the camera optics. The limiting factors for signal-to-noise ratio are shot noise in the light, and dark noise in the CMOS sensor. Good quality images of high-contrast objects can be recorded with doses of approximately 1 mGy. The CsBr:Eu2+ plate has approximately three times the readout sensitivity of a similar BaFBr:Eu2+ plate.

  18. Camera model compensation for image integration of time-of-flight depth video and color video

    NASA Astrophysics Data System (ADS)

    Yamashita, Hiromu; Tokai, Shogo; Uchino, Shunpei

    2015-03-01

    In this paper, we consider a camera calibration method for a TOF depth camera used together with a color video camera to combine their images into colored 3D models of a scene. There are two main problems with calibrating such a combination: one is the stability of the TOF measurements, and the other is the deviation between the measured depth values and the actual distances implied by a geometric camera model. To solve them, we propose a calibration method. First, we estimate an optimal offset distance and intrinsic parameters for the depth camera so that the measured depth values match their ideal values. By applying the estimated offset to consecutive frames and compensating the measured values to actual distances in each frame, we remove the discrepancy between the camera models and suppress the noise appearing as temporal variation. For the estimation, we use Zhang's calibration method with the intensity image from the depth camera and the color video image of a chessboard pattern. With this method, we obtain 3D models in which depth and color information are matched correctly and stably. We also demonstrate the effectiveness of our approach with several experimental results.
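
    The offset estimation step can be sketched as a least-squares fit of a constant depth bias between TOF readings and model-derived distances. This is a simplified stand-in for the calibration the authors describe; all numbers are invented:

```python
def estimate_depth_offset(measured, actual):
    """Least-squares constant offset d minimizing sum((m + d - a)^2),
    which reduces to the mean residual between actual and measured depths."""
    return sum(a - m for m, a in zip(measured, actual)) / len(measured)

# Invented chessboard-corner depths (metres): TOF readings vs distances
# implied by the geometric camera model.
measured = [1.02, 1.53, 2.04, 2.51]
actual = [1.00, 1.50, 2.00, 2.50]

d = estimate_depth_offset(measured, actual)
compensated = [m + d for m in measured]  # offset-corrected depths per frame
print(round(d, 3))  # -0.025
```

    Applying the fitted offset frame by frame, as the abstract describes, both aligns the depth values with the color camera's geometry and damps the frame-to-frame variation of the bias.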

  19. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built using a low-noise, 4-Mpixel CCD circuit by STA. The electronic circuit of the camera is highly parameterized, reconfigurable and modular in comparison with the first-generation solution, due to the use of open software solutions and an FPGA circuit, an Altera Cyclone EP1C6. New algorithms were implemented in the FPGA chip. The camera system uses the following electronic circuits: a CY7C68013A microcontroller (8051 core) by Cypress, an AD9826 image processor by Analog Devices, an RTL8169s GigEth interface by Realtek, AT45DB642 memory by Atmel, and an ARM926EJ-S AT91SAM9260 microprocessor by ARM and Atmel. Software for the camera, its remote control and image data acquisition is based entirely on open-source platforms, using the ISI and V4L2 image interfaces, the AMBA AHB data bus, and the INDI protocol. The camera will be replicated in 20 units and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  20. Astrometric calibration for INT Wide Field Camera images

    E-print Network

    Taylor, Mark

    The Wide Field Camera instrument on the Isaac Newton Telescope contains four CCD chips of 2048 × 4096 pixels positioned roughly; it is necessary to correct for the exact orientation and position of each CCD in relation to the others, as well

  1. Color enhancement of image of earth photographed by UV camera

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A color enhancement of an ultraviolet photograph of the geocorona, a halo of low density hydrogen around the Earth. Sunlight is shining from the left, and the geocorona is brighter on that side. The UV camera was operated by Astronaut John W. Young on the Apollo 16 lunar landing mission.

  2. Characterization of Imaging Phone Cameras Using Minimum Description Length Principle

    Microsoft Academic Search

    ADRIAN BURIAN; AKI HAPPONEN; MIHAELA CIRLUGEA

    In this paper, a new Minimum Description Length (MDL) approach to the characterization of a mobile phone's color camera is presented. The use of high-order polynomials, Fourier sine series, and artificial neural networks (ANN) for solving this problem is compared and contrasted. The MDL formalism is used to determine the stochastic complexity of polynomial and Fourier sine models for the

  3. DEFINITION OF AIRWAY COMPOSITION WITHIN GAMMA CAMERA IMAGES

    EPA Science Inventory

    The efficacies of inhaled pharmacologic drugs in the prophylaxis and treatment of airway diseases could be improved if particles were selectively directed to appropriate sites. In the medical arena, planar gamma scintillation cameras may be employed to study factors affecting such...

  4. Camera Animation

    NSDL National Science Digital Library

    A general discussion of the use of cameras in computer animation. This section covers principles of traditional film techniques and suggestions for the use of a camera during an architectural walkthrough, and includes HTML pages, images and one video.

  5. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    SciTech Connect

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image-intensifier sensors and fast-readout, fast-shuttered cameras is discussed. Their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  6. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.

  7. Design considerations of color image processing pipeline for digital cameras

    Microsoft Academic Search

    Wen-Chung Kao; Sheng-Hong Wang; Lien-Yang Chen; Sheng-Yuan Lin

    2006-01-01

    Although many individual image processing steps have been well addressed in the field, very few good image pipeline designs have been proposed to integrate these processing stages. In this paper, a new color image processing pipeline (IPP) is presented, which processes the raw image data captured from CCD/CMOS sensors and converts it to the final color with corrected exposure; it bridges the

  8. Error Resilient Image Communication with Chaotic Pixel Interleaving for Wireless Camera

    E-print Network

    Paris-Sud XI, Université de

    require vision capabilities. Considering the high loss rates found in sensor networks and the limited hardware resources of current sensor nodes, low-complexity robust image transmission must be implemented

  9. Acoustic Mine Imaging (AMI) project: An underwater acoustic camera for use in mine warfare

    Microsoft Academic Search

    Colin Ellis; Ed Murphy

    2001-01-01

    This paper details the advances in sonar and imaging techniques and synthetic apertures being made in Australia by Thales Underwater Systems within an Australian Defence Acquisition Project termed Acoustic Mine Imaging (AMI). It describes the development of the AMI underwater acoustic camera for the detection, classification and characterization of mines and other underwater objects in

  10. Application of spatial frequency response as a criterion for evaluating thermal imaging camera performance

    Microsoft Academic Search

    Andrew Lock; Francine Amon

    2008-01-01

    Police, firefighters, and emergency medical personnel are examples of first responders who use thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating spatial resolution using an application of Spatial Frequency Response

  11. Evaluating image sensor sensitivity by measuring camera signal-to-noise ratio

    Microsoft Academic Search

    Bradley S. Carlson

    2002-01-01

    Image sensor sensitivity is critical for machine vision applications where illumination is limited and large depth of field is required. In this paper a method is presented for evaluating image sensor sensitivity by measuring camera signal-to-noise ratio (SNR). The method is simple to implement and produces accurate results. The method measures SNR as a function of target illumination, and relates
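
    The SNR measurement the abstract describes can be sketched as below, under the common assumption that the signal is the temporal mean of a uniform target patch and the noise is its temporal standard deviation; the paper's exact procedure may differ.

```python
import math
import statistics

def camera_snr_db(samples):
    """SNR in dB of a nominally uniform target: temporal mean of the patch
    values over several frames divided by their temporal standard deviation."""
    mean = statistics.mean(samples)
    noise = statistics.stdev(samples)
    return 20.0 * math.log10(mean / noise)

# Patch-mean values of the same uniform target over five frames:
snr = camera_snr_db([100.0, 102.0, 98.0, 101.0, 99.0])  # about 36 dB
```

    Repeating this at several illumination levels gives SNR as a function of target illumination, which is how the abstract relates sensitivity to measurement.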

  12. Development of the ProSPECTus semiconductor Compton camera for medical imaging

    Microsoft Academic Search

    Laura Harkness; Andrew Boston; Helen Boston; John Cresswell; Fay Filmer; Janet Groves; Jon Headspith; Graham Kemp; Ian Lazarus; Martin Jones; Daniel Judson; Paul Nolan; Janet Sampson; David Scraggs; John Simpson

    2009-01-01

    The ProSPECTus project is the development of a prototype semiconductor Compton camera for use in nuclear medical imaging applications. The proposed system has the potential to improve the sensitivity of conventional mechanically collimated Single Photon Emission Computed Tomography (SPECT) systems through the use of electronic collimation techniques. In addition, the use of compatible semiconductor technology within a Magnetic Resonance Imaging

  13. Uncalibrated multiple image stereo system with arbitrarily movable camera and projector for wide range scanning

    E-print Network

    Tokyo, University of

    and the projector. As compared to the conventional coded structured light method, this system needs no calibration with the uncalibrated stereo 3D reconstruction that uses multiple images. As compared to a feature-based, multi

  14. BLOCK-BASED DEPTH ESTIMATION FROM IMAGE TRIPLES WITH UNRESTRICTED CAMERA SETUP

    E-print Network

    captured with an unrestricted calibrated camera setup. From each image triple, three image pairs are formed. Compared to binocular depth estimation, the presented trinocular approach reduces the depth error. The interactive visualisation requires a 3-D reconstruction of the scene that is realised using stereo vision

  15. Design and control of a thermal stabilizing system for a MEMS optomechanical uncooled infrared imaging camera

    Microsoft Academic Search

    Jongeun Choi; Joji Yamaguchi; Simon Morales; Roberto Horowitz; Yang Zhao; Arunava Majumdar

    2003-01-01

    In this paper, the design and control of a thermal stabilizing system for an optomechanical uncooled infrared (IR) imaging camera are presented; the camera uses an array of MEMS bimaterial cantilever beams to sense an IR image source. A one-dimensional lumped-parameter model of the thermal stabilization system was derived and experimentally validated. A model-based discrete-time linear quadratic Gaussian regulator

  16. Design and control of a thermal stabilizing system for a MEMS optomechanical uncooled infrared imaging camera

    E-print Network

    Horowitz, Roberto

    for an optomechanical uncooled infrared (IR) imaging camera is presented, which uses an array of MEMS bimaterial cantilever beams. The control system design presented in this paper is part of a novel optomechanical uncooled infrared imaging camera with an uncooled infrared receiver and optical readout system. Each pixel in the FPA array consists

  17. Inter-Camera Model Image Source Identification with Conditional Probability Features

    E-print Network

    Doran, Simon J.

    headers. This is where digital forensics becomes important: to ensure that the integrity of the digital evidence is guaranteed. Digital forensics helps by extracting essential information about an image and how the image was produced. This digital forensics problem is known as "camera identification".

  18. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    SciTech Connect

    Earnhart, J.R.D.

    1998-12-31

    A review of spectroscopic imaging issues, applications, and technology is presented. Compton cameras based on solid-state semiconductor detectors stand out as the best system for the nondestructive assay of special nuclear materials. A camera for this application has been designed based on an efficient special-purpose Monte Carlo code developed for this project. Preliminary experiments have been performed which demonstrate the validity of the Compton camera concept and the accuracy of the code. Based on these results, a portable prototype system is in development. Proposed future work is also addressed.
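
    The geometry underlying any Compton camera is the cone defined by the scattering kinematics. A minimal sketch, assuming a known incident energy and full absorption of the scattered photon (the names are illustrative, not from the paper):

```python
import math

ELECTRON_REST_KEV = 511.0  # electron rest energy, keV

def compton_cone_angle(e_deposited, e_incident):
    """Half-angle (radians) of the Compton cone, from the energy deposited in
    the scatter detector and the known incident photon energy (both in keV)."""
    e_scattered = e_incident - e_deposited
    cos_theta = 1.0 - ELECTRON_REST_KEV * (1.0 / e_scattered - 1.0 / e_incident)
    return math.acos(cos_theta)

# A 662 keV (Cs-137) photon depositing 200 keV scatters by roughly 48 degrees.
angle = compton_cone_angle(200.0, 662.0)
```

    Intersecting many such cones from recorded events is what yields the image; the energy resolution of the semiconductor detectors directly controls the cone-angle uncertainty.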

  19. LRO Camera Imaging of the Moon: Apollo 17 and other Sites for Ground Truth

    Microsoft Academic Search

    B. L. Jolliff; S. M. Wiseman; M. S. Robinson; S. Lawrence; B. W. Denevi; J. F. Bell

    2009-01-01

    One of the fundamental goals of the Lunar Reconnaissance Orbiter (LRO) is the determination of mineralogic and compositional distributions and their relation to geologic features on the Moon's surface. Through a combination of imaging with the LRO narrow-angle cameras and wide-angle camera (NAC, WAC), very fine-scale geologic features are resolved with better than meter-per-pixel resolution (NAC) and correlated to spectral

  20. Spectroscopic Imaging Observation of Break Arcs using a High-speed Camera

    Microsoft Academic Search

    J. Sekikawa; T. Kubono

    2007-01-01

    Break arcs occurring between electrical contacts are observed in a DC 42 V resistive circuit using a high-speed camera. The contact pair materials are Ag or Ag/ZnO. The break current is 10 A. Spectroscopic images are obtained using the high-speed camera with optical band-pass filters. The filters are attached to the lens to observe only the Ag I 521 nm or Zn I 481 nm

  1. Imaging high-dimensional spatial entanglement with a camera

    E-print Network

    Matthew P. Edgar; Daniel S. Tasca; Frauke Izdebski; Ryan E. Warburton; Jonathan Leach; Megan Agnew; Gerald S. Buller; Robert W. Boyd; Miles J. Padgett

    2012-08-09

    The light produced by parametric down-conversion shows strong spatial entanglement that leads to violations of EPR criteria for separability. Historically, such studies have been performed by scanning a single-element, single-photon detector across a detection plane. Here we show that modern electron-multiplying charge-coupled device cameras can measure correlations in both position and momentum across a multi-pixel field of view. This capability allows us to observe entanglement of around 2,500 spatial states and demonstrate Einstein-Podolsky-Rosen type correlations by more than two orders of magnitude. More generally, our work shows that cameras can lead to important new capabilities in quantum optics and quantum information science.
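
    As an illustration of the frame-based analysis (a simplification of the paper's position/momentum measurement, which accumulates coincidences between the signal and idler arms over many sparse frames), spatially correlated photon pairs show up as a peak in the histogram of pairwise separations:

```python
from collections import Counter

def separation_histogram(frames):
    """Histogram of pairwise photon separations within each EMCCD frame.
    Correlated pairs pile up at a preferred separation; uncorrelated
    background spreads across many bins."""
    hist = Counter()
    for photons in frames:  # each frame: list of (x, y) detection coordinates
        for i in range(len(photons)):
            for j in range(i + 1, len(photons)):
                dx = photons[j][0] - photons[i][0]
                dy = photons[j][1] - photons[i][1]
                hist[(dx, dy)] += 1
    return hist

frames = [[(1, 1), (1, 2)], [(3, 3), (3, 4)]]
hist = separation_histogram(frames)  # separation (0, 1) occurs in both frames
```

    Running the same analysis on far-field (momentum-plane) frames, where correlated pairs appear at opposite positions, is what allows the EPR-type comparison of position and momentum correlations.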

  2. Imaging high-dimensional spatial entanglement with a camera

    PubMed Central

    Edgar, M.P.; Tasca, D.S.; Izdebski, F.; Warburton, R.E.; Leach, J.; Agnew, M.; Buller, G.S.; Boyd, R.W.; Padgett, M.J.

    2012-01-01

    The light produced by parametric down-conversion shows strong spatial entanglement that leads to violations of EPR criteria for separability. Historically, such studies have been performed by scanning a single-element, single-photon detector across a detection plane. Here we show that modern electron-multiplying charge-coupled device cameras can measure correlations in both position and momentum across a multi-pixel field of view. This capability allows us to observe entanglement of around 2,500 spatial states and demonstrate Einstein–Podolsky–Rosen type correlations by more than two orders of magnitude. More generally, our work shows that cameras can lead to important new capabilities in quantum optics and quantum information science. PMID:22871804

  3. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    SciTech Connect

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  4. Robust extraction of image correspondences exploiting the image scene geometry and approximate camera orientation

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Remondino, F.; Menna, F.; Gerke, M.; Vosselman, G.

    2013-02-01

    Image-based modeling techniques are an important tool for producing 3D models in a practical and cost effective manner. Accurate image-based models can be created as long as one can retrieve precise image calibration and orientation information which is nowadays performed automatically in computer vision and photogrammetry. The first step for orientation is to have sufficient correspondences across the captured images. Keypoint descriptors like SIFT or SURF are a successful approach for finding these correspondences. The extraction of precise image correspondences is crucial for the subsequent image orientation and image matching steps. Indeed there are still many challenges especially with wide-baseline image configuration. After the extraction of a sufficient and reliable set of image correspondences, a bundle adjustment is used to retrieve the image orientation parameters. In this paper, a brief description of our previous work on automatic camera network design is initially reported. This semi-automatic procedure results in wide-baseline high resolution images covering an object of interest, and including approximations of image orientations, a rough 3D object geometry and a matching matrix indicating for each image its matching mates. The main part of this paper will describe the subsequent image matching where the pre-knowledge on the image orientations and the pre-created rough 3D model of the study object is exploited. Ultimately the matching information retrieved during that step will be used for a precise bundle block adjustment. Since we defined the initial image orientation in the design of the network, we can compute the matching matrix prior to image matching of high resolution images. For each image involved in several pairs that is defined in the matching matrix, we detect the corners or keypoints and then transform them into the matching images by using the designed orientation and initial 3D model. 
Moreover, a window is defined for each corner and its initial correspondence in the matching images. A SIFT or SURF matching is implemented between every pair of matching windows to find the homologous points. This is followed by Least Squares Matching (LSM) to refine the correspondences for sub-pixel localization and to avoid inaccurate matches. Image matching is followed by a bundle adjustment to orient the images automatically and finally obtain a sparse 3D model. We used the commercial software Photomodeler Scanner 2010 for the bundle adjustment since it reports a number of accuracy indices which are necessary for evaluation purposes. The experimental test of comparing the automated image matching of four pre-designed stereopairs shows that our approach can provide high accuracy and effective orientation when compared with the results of commercial and open-source software that does not exploit the pre-knowledge about the scene.
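
    The window-constrained matching step can be sketched as follows. This is a simplified stand-in using normalized cross-correlation rather than SIFT/SURF descriptors; the predicted center stands for the correspondence projected via the approximate orientations and the rough 3D model, and all names are illustrative.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (lists of rows)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    if da == 0.0 or db == 0.0:
        return -1.0  # constant patch: correlation undefined
    return num / (da * db)

def match_in_window(template, image, center, radius):
    """Search a small window around the predicted position for the best match."""
    h, w = len(template), len(template[0])
    best_score, best_pos = -2.0, None
    cy, cx = center
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            if y < 0 or x < 0:
                continue  # window ran off the top/left of the image
            patch = [row[x:x + w] for row in image[y:y + h]]
            if len(patch) < h or any(len(r) < w for r in patch):
                continue  # window ran off the bottom/right of the image
            score = ncc(template, patch)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

    A Least Squares Matching refinement would then be run on the returned integer position to reach sub-pixel accuracy, as the abstract describes.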

  5. In-flight calibration of the Cassini imaging science sub-system cameras (Robert West et al.)

    E-print Network

    We describe the in-flight calibration of the Cassini Imaging Science Sub-system (ISS) narrow- and wide-angle cameras. The ISS consists of two cameras on the Cassini spacecraft

  6. Methods for a fusion of optical coherence tomography and stereo camera image data

    NASA Astrophysics Data System (ADS)

    Bergmeier, Jan; Kundrat, Dennis; Schoob, Andreas; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-03-01

    This work investigates the combination of Optical Coherence Tomography and two cameras observing a microscopic scene. Stereo vision provides realistic images but is limited in terms of penetration depth. Optical Coherence Tomography (OCT) enables access to subcutaneous structures, but 3D-OCT volume data do not give the surgeon a familiar view. Extending the stereo camera setup with OCT imaging combines the benefits of both modalities. In order to provide the surgeon with a convenient integration of OCT into the vision interface, we present automated image processing of the OCT and stereo camera data as well as combined imaging as an augmented reality visualization. To this end, we address OCT image noise, perform segmentation, and develop suitable registration objects and methods. The registration between stereo camera and OCT yields a root mean square error of 284 μm, averaged over five measurements. The presented methods are fundamental for the fusion of both imaging modalities. Augmented reality is shown as an application of the results. Further developments lead to fused visualization of subcutaneous structures, as information from OCT images, within stereo vision.

  7. Mathematical Problems of Thermoacoustic and Compton Camera Imaging 

    E-print Network

    Georgieva-Hristova, Yulia Nekova

    2011-10-21

    The results presented in this dissertation concern two different types of tomographic imaging. The first part of the dissertation is devoted to the time reversal method for approximate reconstruction of images in thermoacoustic tomography. A...

  8. Validation of 3D surface imaging in breath-hold radiotherapy for breast cancer: one central camera unit versus three camera units

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Betgen, Anja; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

    2013-03-01

    In this work we investigated the benefit of using two lateral camera units in addition to a central camera unit for 3D surface imaging for image guidance in deep-inspiration breath-hold (DIBH) radiotherapy, by comparison with cone-beam computed tomography (CBCT). Ten patients who received DIBH radiotherapy after breast-conserving surgery were included. The performance of surface imaging using one and three camera units was compared to using CBCT for setup verification. Breast-surface registrations were performed for CBCT as well as for 3D surfaces, captured concurrently with CBCT, to the planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, the group mean, systematic error, random error, and 95% limits of agreement were assessed. Correlations between derived surface-imaging [one camera unit; three camera units] and CBCT setup errors were R2 = [0.67; 0.75], [0.76; 0.87], and [0.88; 0.91] in the left-right, cranio-caudal, and anterior-posterior directions, respectively. Group mean, systematic and random errors were slightly smaller (sub-millimeter differences) and the limits of agreement were 0.10 to 0.25 cm tighter when using three camera units compared with one. For the majority of the data, the use of three camera units compared with one resulted in setup errors more similar to the CBCT-derived setup errors for the cranio-caudal and anterior-posterior directions (p < 0.01, Wilcoxon signed-ranks test). This study shows a better correlation and agreement between 3D surface imaging and CBCT when three camera units are used instead of one, and further outlines the conditions under which the benefit of using three camera units is significant.
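
    The agreement statistics reported above can be computed as in the following sketch (group mean and 95% limits of agreement of the paired differences; the systematic/random error decomposition additionally requires per-patient means, which this simplified version omits):

```python
import statistics

def agreement_stats(setup_a, setup_b):
    """Group mean, SD and 95% limits of agreement (mean +/- 1.96 SD)
    of the differences between two paired setup-error series."""
    diffs = [a - b for a, b in zip(setup_a, setup_b)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return {"group_mean": mean, "sd": sd,
            "loa": (mean - 1.96 * sd, mean + 1.96 * sd)}
```

    Applied per direction (left-right, cranio-caudal, anterior-posterior) to surface-imaging versus CBCT setup errors, this yields exactly the kind of limits-of-agreement comparison the abstract reports.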

  9. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing and imaging different layers. Four subjects with varying degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and the imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.

  10. Removal of parasitic image due to metal specularity based on digital micromirror device camera

    NASA Astrophysics Data System (ADS)

    Zhao, Shou-Bo; Zhang, Fu-Min; Qu, Xing-Hua; Chen, Zhe; Zheng, Shi-Wei

    2014-06-01

    Visual inspection of a highly reflective surface commonly faces a serious limitation: useful information on geometric construction and textural defects is covered by a parasitic image due to specular highlights. In order to solve this problem, we propose an effective method for removing the parasitic image. Specifically, a digital micromirror device (DMD) camera for programmable imaging is first described. The strength of this optical system is its ability to process scene rays before image formation. Based on the DMD camera, an iterative algorithm of modulated-region selection, precise region mapping, and multi-modulation removes the parasitic image and reconstructs a corrected image. Finally, experimental results show the performance of the proposed approach.

  11. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto-tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
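
    The count-to-temperature mapping at the heart of such a characterization can be sketched as a two-point blackbody calibration. This is a deliberate simplification with illustrative names: the paper's method additionally compensates for camera operating temperature, which a linear two-point fit does not capture.

```python
def radiometric_gain_offset(counts_cold, t_cold, counts_hot, t_hot):
    """Two-point (blackbody) calibration: linear map from sensor counts
    to scene temperature."""
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)
    offset = t_cold - gain * counts_cold
    return gain, offset

def counts_to_temperature(counts, gain, offset):
    """Estimate scene temperature from raw (non-tonemapped) sensor counts."""
    return gain * counts + offset

# Blackbody references at 20 C and 100 C producing 1000 and 3000 counts:
gain, offset = radiometric_gain_offset(1000.0, 20.0, 3000.0, 100.0)
temperature = counts_to_temperature(2000.0, gain, offset)  # 60 C
```

    Note that the calibration must use the raw sensor counts: once the image has been auto-tonemapped for display, the monotonic but scene-dependent mapping destroys the radiometric relationship.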

  12. A novel IR polarization imaging system designed by a four-camera array

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Shao, Xiaopeng; Han, Pingli

    2014-05-01

    A novel IR polarization staring imaging system employing a four-camera array is designed for target detection and recognition, especially of man-made targets hidden in a complex battlefield. The design is based on the difference in infrared radiation's polarization characteristics, which is particularly remarkable between artificial objects and the natural environment. The system employs four cameras simultaneously to capture the polarization difference, replacing the commonly used systems engaging only one camera. Since both types of systems have to obtain intensity images in four different directions (I0, I45, I90, I-45), the four-camera design allows better real-time capability and lower error without the mechanical rotating parts essential to one-camera systems. Information extraction and detailed analysis demonstrate that the captured polarization images include valuable polarization information which can effectively increase the images' contrast and make it easier to segment the target, even a hidden target, from various scenes.

  13. An image resolution enhancing technique using adaptive sub-pixel interpolation for digital still camera system

    Microsoft Academic Search

    Yido Koo; Wonchan Kim

    1999-01-01

    An image resolution enhancing technique is described. It is based on extracting 1-dimensional characteristic curves from subsequent frames and sub-pixel displacement values. Through sub-pixel mapping and adaptive interpolation, a high-resolution image can be obtained from several low-resolution image frames. This 1-dimensional algorithm is simple and cost-effective, and can be easily applied in real-time processing for digital still camera application

  14. Image processing for three-dimensional scans generated by time-of-flight range cameras

    NASA Astrophysics Data System (ADS)

    Schöner, Holger; Bauer, Frank; Dorrington, Adrian; Heise, Bettina; Wieser, Volkmar; Payne, Andrew; Cree, Michael J.; Moser, Bernhard

    2012-04-01

    Time-of-flight (TOF) full-field range cameras use a correlative imaging technique to generate three-dimensional measurements of the environment. Though reliable and cheap, they have the disadvantage of high measurement noise and errors that limit the practical use of these cameras in industrial applications. We show how some of these limitations can be overcome with standard image processing techniques specially adapted to TOF camera data. Additional information in the multimodal images recorded in this setting, and not available in standard image processing settings, can be used to improve reduction of measurement noise. Three extensions of standard techniques, wavelet thresholding, adaptive smoothing on a clustering based image segmentation, and an extended anisotropic diffusion filtering, make use of this information and are compared on synthetic data and on data acquired from two different off-the-shelf TOF cameras. Of these methods, the adapted anisotropic diffusion technique gives best results, and is implementable to perform in real time using current graphics processing unit (GPU) hardware. Like traditional anisotropic diffusion, it requires some parameter adaptation to the scene characteristics, but allows for low visualization delay and improved visualization of moving objects by avoiding long averaging periods when compared to traditional TOF image denoising.
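
    The anisotropic diffusion the authors extend is, in its classic Perona-Malik form, only a few lines. The sketch below is the standard scheme in plain Python for clarity; the paper's extended version additionally exploits the TOF amplitude channel, which this version does not.

```python
import math

def perona_malik(img, iters=10, kappa=2.0, lam=0.2):
    """Classic Perona-Malik anisotropic diffusion on a 2D depth image
    (list of lists of floats): flat regions are smoothed while strong
    edges (large gradients) diffuse almost nothing."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                flux = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    g = u[y + dy][x + dx] - u[y][x]  # gradient to neighbour
                    c = math.exp(-(g / kappa) ** 2)  # edge-stopping conductance
                    flux += c * g
                nxt[y][x] = u[y][x] + lam * flux     # lam <= 0.25 for stability
        u = nxt
    return u
```

    A small depth spike (noise) shrinks toward its neighbours, while a large depth discontinuity (an object edge) is left essentially untouched, which is exactly the edge-preserving behaviour needed for TOF denoising.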

  15. Full-field 3D measurement using multi-camera digital image correlation system

    NASA Astrophysics Data System (ADS)

    Chen, Fanxiu; Chen, Xu; Xie, Xin; Feng, Xiu; Yang, Lianxiang

    2013-09-01

    A novel non-contact, full-field, three-dimensional, multi-camera (N cameras, N = 2, 3, …) Digital Image Correlation (DIC) measurement system is proposed in this work. In the proposed system, multiple cameras are calibrated as a single system, and any two cameras can be grouped into a pair, with each pair measuring a part of a 3D object based on the fundamentals of triangulation. The measured data from different pairs of cameras can be mapped into a universal coordinate system based on the calibration data, and a 3D contour of the object can be extracted. Further data, such as deformation, can be obtained from the contours of the object at different times. The methodology of the proposed system is introduced. Four synchronized Charge-Coupled Device (CCD) cameras are employed in the experimental setup, and the performance of the setup is tested in both static and dynamic cases to show the potential of the system.
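
    The pairwise triangulation underlying such a system can be sketched with the standard linear (DLT) method, assuming each camera's 3x4 projection matrix is known from the joint calibration; this is a generic sketch, not the paper's implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by a calibrated camera
    pair.  P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    Returns the 3D point in the common (calibration) coordinate system."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

    Because all N cameras share one calibration, points triangulated by different pairs land directly in the same universal coordinate system, which is what lets the partial surfaces be merged into one contour.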

  16. Camera Sensor Arrangement for Crop/Weed Detection Accuracy in Agronomic Images

    PubMed Central

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-01-01

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. The accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images: (a) extrinsic, related to the sensor's positioning on the tractor; and (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination of outdoor environments is also an important factor affecting image accuracy. This paper is focused on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, making two main contributions: (a) a camera sensor arrangement to adjust the extrinsic parameters, and (b) strategies for controlling adverse illumination effects. PMID:23549361

  18. A mobile phone-based retinal camera for portable wide field imaging.

    PubMed

    Maamari, Robi N; Keenan, Jeremy D; Fletcher, Daniel A; Margolis, Todd P

    2014-04-01

    Digital fundus imaging is used extensively in the diagnosis, monitoring and management of many retinal diseases. Access to fundus photography is often limited by patient morbidity, high equipment cost and shortage of trained personnel. Advancements in telemedicine methods and the development of portable fundus cameras have increased the accessibility of retinal imaging, but most of these approaches rely on separate computers for viewing and transmission of fundus images. We describe a novel portable handheld smartphone-based retinal camera capable of capturing high-quality, wide field fundus images. The use of the mobile phone platform creates a fully embedded system capable of acquisition, storage and analysis of fundus images that can be directly transmitted from the phone via the wireless telecommunication system for remote evaluation. PMID:24344230

  19. Optimal Camera Trajectory with Image-Based Control

    Microsoft Academic Search

    Youcef Mezouar; François Chaumette

    2003-01-01

    Image-based servo is a local control solution. Thanks to the feedback loop closed in the image space, local convergence and stability in the presence of modeling errors and noise perturbations are ensured when the error is small. The principal deficiency of this approach is that the induced (3D) trajectories are not optimal and sometimes, especially when the displacement to

  20. Preprocessing of Fingerprint Images Captured with a Digital Camera

    Microsoft Academic Search

    Bee Yan Hiew; Andrew Beng Jin Teoh; David Chek Ling Ngo

    2006-01-01

    Reliable touch-less fingerprint recognition still remains a challenge, as the conventional techniques used to preprocess fingerprint images acquired with optical or capacitance sensors, for segmentation, enhancement and core point detection, are inadequate to serve the purpose. The problems of touch-less fingerprint recognition are the low contrast between the ridges and the valleys in fingerprint images, defocus and

  1. Towards Dynamic Camera Calibration for Constrained Flexible Mirror Imaging

    E-print Network

    Paris-Sud XI, Université de

    in the literature on the recovery of surface shape from images of a diffuse surface, using either structured light attention from the vision community. This situation differs from the diffuse case since 'features' seen reflection in a single image. They used a bespoke conical calibration object and concentr

  2. Driving micro-optical imaging systems towards miniature camera applications

    NASA Astrophysics Data System (ADS)

    Brückner, Andreas; Duparré, Jacques; Dannberg, Peter; Leitel, Robert; Bräuer, Andreas

    2010-05-01

    Up to now, multi-channel imaging systems have been increasingly studied and approached from various directions in the academic domain due to their promising large field of view at small system thickness. However, specific drawbacks of each of the solutions have so far prevented their diffusion into the corresponding markets. The most severe problems are a low image resolution and a low sensitivity compared to a conventional single-aperture lens, besides the lack of a cost-efficient method of fabrication and assembly. We propose a micro-optical approach to ultra-compact optics for real-time vision systems that is inspired by the compound eyes of insects. The demonstrated modules achieve a VGA resolution with 700x550 pixels within an optical package of 6.8 mm x 5.2 mm and a total track length of 1.4 mm. The partial images that are separately recorded within different optical channels are stitched together to form a final image of the whole field of view by means of image processing. These software tools also correct the distortion of the individual partial images, so that the final image is free of distortion. The so-called electronic cluster eyes are realized by state-of-the-art micro-optical fabrication techniques and offer a resolution and sensitivity potential that makes them suitable for consumer, machine vision and medical imaging applications.

  3. Blur detection in image sequences recorded by a wearable camera

    Microsoft Academic Search

    Zhen Li; Zhiqiang Wei; Robert J. Sclabassi; Wenyan Jia; Mingui Sun

    2011-01-01

    A new method based on the Discrete Cosine Transform (DCT) and the Otsu method for blur detection in image sequences is proposed in this paper. In the first step, the standard deviation (STD) and the DCT coefficients are utilized to detect blurred and homogeneous areas in each image. Then, the Otsu method is used to calculate an adaptive threshold in
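    The adaptive-threshold step the abstract mentions can be sketched with a standard Otsu implementation (this is the textbook algorithm, not the authors' code; here applied to a 1-D array of per-block sharpness scores):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability of class 0 up to each bin
    mu = np.cumsum(p * centers)       # cumulative mean up to each bin
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty-class bins contribute nothing
    return centers[np.argmax(sigma_b)]
```

    Applied to DCT-based sharpness scores, blocks below the returned threshold would be labeled blurred or homogeneous.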

  4. Shading correction of camera captured document image with depth map information

    NASA Astrophysics Data System (ADS)

    Wu, Chyuan-Tyng; Allebach, Jan P.

    2015-01-01

    Camera modules have become more popular in consumer electronics and office products. As a consequence, people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives. However, it is easy for undesired shading to enter the captured document image through the camera, and this non-uniformity can degrade the readability of the contents. Some solutions have been developed to mitigate this artifact, but most of them are only suitable for particular types of documents. In this paper, we introduce a content-independent and shape-independent method that lessens the shading effects in captured document images. We want to reconstruct the image such that the result looks like a document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the document surface and a look-up table strategy. We first discuss the model and the assumptions used for the approach. Then, the process of creating and utilizing the look-up table is described. We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a 2D image of the object. Experimental results are presented to show the effectiveness of our method, including both flat and curved-surface document examples.
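    The core correction can be sketched as a per-pixel gain derived from a shading estimate (here taken as given; in the paper it comes from the depth map and look-up table, and the function name and target level are illustrative):

```python
import numpy as np

def correct_shading(img, shading, target=200.0):
    """Shading correction sketch: divide the captured image by a per-pixel
    shading estimate and rescale to a uniform target illumination level."""
    gain = target / np.maximum(shading, 1e-6)  # avoid division by zero
    return np.clip(img * gain, 0.0, 255.0)
```

    If the shading estimate is accurate, a blank page imaged under non-uniform light maps to a flat field at the target level, which is the "uniform lighting source" appearance the method aims for.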

  5. GNSS Carrier Phase Integer Ambiguity Resolution with Camera and Satellite images

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick

    2015-04-01

    Ambiguity resolution is the key to high-precision position and attitude determination with GNSS. However, ambiguity resolution for kinematic receivers becomes challenging in environments with substantial multipath, limited satellite availability and erroneous cycle slip corrections. There is a need for other sensors, e.g. inertial sensors, that allow an independent prediction of the position. The change of the predicted position over time can then be used for cycle slip detection and correction. In this paper, we provide a method to improve the initial ambiguity resolution for RTK and PPP with vision-based position information. Camera images are correlated with geo-referenced aerial/satellite images to obtain independent absolute position information. This absolute position information is then coupled with the GNSS and INS measurements in an extended Kalman filter to estimate the position, velocity, acceleration, attitude, angular rates, code multipath and biases of the accelerometers and gyroscopes. The camera and satellite images are matched based on characteristic image points (e.g. corners of street markers). We extract these characteristic image points from the camera images by performing the following steps: an inverse mapping (homogeneous projection) is applied to transform the camera images from the driver's perspective to a bird's-eye view. Subsequently, we detect the street markers by performing (a) a color transformation and reduction with adaptive brightness correction to focus on relevant features, (b) a subsequent morphological operation to enhance the structure recognition, (c) an edge and corner detection to extract feature points, and (d) a point matching of the corner points with a template to recognize the street markers. We verified the proposed method with two low-cost u-blox LEA-6T GPS receivers, the MPU9150 from Invensense, the ASCOS RTK corrections and a Point Grey camera. The results show very precise and seamless position and attitude estimates in an urban environment with substantial multipath.
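    The inverse perspective mapping step amounts to applying a 3x3 homography to image coordinates; a minimal sketch (the matrix in the usage example is illustrative, not a calibrated driver-to-bird's-eye mapping):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H (e.g. an inverse perspective mapping from the
    driver's view to a bird's-eye view) to an array of (x, y) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # perspective division
```

    In a real pipeline the same H (or its inverse) would be used to resample the whole image before the marker-detection stages (a)-(d).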

  6. Efficient Stereo Image Geometrical Reconstruction at Arbitrary Camera Settings from a Single Calibration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Paulsen, Keith D.

    2015-01-01

    Camera calibration is central to obtaining a quantitative image-to-physical-space mapping from stereo images acquired in the operating room (OR). A practical challenge for cameras mounted to the operating microscope is maintenance of image calibration as the surgeon’s field-of-view is repeatedly changed (in terms of zoom and focal settings) throughout a procedure. Here, we present an efficient method for sustaining a quantitative image-to-physical space relationship for arbitrary image acquisition settings (S) without the need for camera re-calibration. Essentially, we warp images acquired at S into the equivalent data acquired at a reference setting, S0, using deformation fields obtained with optical flow by successively imaging a simple phantom. Closed-form expressions for the distortions were derived from which 3D surface reconstruction was performed based on the single calibration at S0. The accuracy of the reconstructed surface was 1.05 mm and 0.59 mm along and perpendicular to the optical axis of the operating microscope on average, respectively, for six phantom image pairs, and was 1.26 mm and 0.71 mm for images acquired with a total of 47 arbitrary settings during three clinical cases. The technique is presented in the context of stereovision; however, it may also be applicable to other types of video image acquisitions (e.g., endoscope) because it does not rely on any a priori knowledge about the camera system itself, suggesting the method is likely of considerable significance. PMID:25333148
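    The central warping step (resampling an image acquired at setting S into its equivalent at the reference setting S0 through a dense deformation field) can be sketched with nearest-neighbour sampling; the function name and field convention are our illustration, not the authors' optical-flow implementation:

```python
import numpy as np

def warp_image(img, flow):
    """Warp a grayscale image by a dense deformation field:
    output(r, c) = img(r + flow[r, c, 0], c + flow[r, c, 1]),
    with nearest-neighbour sampling and border clamping."""
    rows, cols = img.shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    rs = np.clip(np.rint(r + flow[..., 0]).astype(int), 0, rows - 1)
    cs = np.clip(np.rint(c + flow[..., 1]).astype(int), 0, cols - 1)
    return img[rs, cs]
```

    A production version would interpolate bilinearly rather than snapping to the nearest pixel, but the indexing structure is the same.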

  7. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
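    One of the DSP stages listed, white balance, can be sketched with the simple gray-world assumption (a common low-complexity choice; the abstract does not say which white-balance algorithm the authors use):

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: scale each color channel so its mean
    matches the overall image mean (assumes the scene averages to gray)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(rgb * gains, 0.0, 255.0)
```

    The per-channel gains are cheap to compute and apply, which is consistent with the paper's goal of low computation cost and memory requirements.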

  8. High-quality virus images obtained by transmission electron microscopy and charge coupled device digital camera technology

    Microsoft Academic Search

    Kenneth L. Tiekotter; Hans-W. Ackermann

    2009-01-01

    The introduction of digital cameras has led to the publication of numerous virus electron micrographs of low magnification, poor contrast, and low resolution. Described herein is the methodology for obtaining highly contrasted virus images in the magnification range of approximately 250,000–300,000×. Based on recent advances in charge-coupled device (CCD) digital camera technology, methodology is described for optimal imaging parameters

  9. Visible Light Digital Camera --Up to 2.3MP resolution with LED lamps provides sharp images

    E-print Network

    Short, Daniel

    · Visible Light Digital Camera -- Up to 2.3MP resolution with LED lamps provides sharp images-imposed over a digital image · 0.1°C Thermal Sensitivity -- Provides the resolution needed to find problems case FLIR i40 Additional Features · 0.6MP Visible Light Camera resolution · Picture in Picture (PIP

  10. Automatic orthorectification and mosaicking of oblique images from a zoom lens aerial camera

    NASA Astrophysics Data System (ADS)

    Zhou, Qianfei; Liu, Jinghong

    2015-01-01

    To correct the image distortion caused by the oblique photography of a zoom lens aerial camera, a fast and accurate image auto-rectification and mosaicking method for a ground control point (GCP)-free environment was proposed. With integrated global positioning system (GPS) and inertial measurement units available, the camera's exterior orientation parameters (EOPs) were solved through direct georeferencing. The one-parameter division model was adopted to estimate the distortion coefficient and the distortion center coordinates of the zoom lens and to correct the lens distortion. Using the camera's EOPs and the lens distortion parameters, the oblique aerial images specified in the camera frame were geo-orthorectified into the mapping frame and then mosaicked together based on the mapping coordinates to produce a larger-field, high-resolution georeferenced image. Experimental results showed that the orthorectification error was less than 1.80 m at a flight height of 1100 m above ground level, when compared with 14 presurveyed ground checkpoints measured by differential GPS. The mosaic error was about 1.57 m compared with 18 checkpoints. This accuracy is considered sufficient for urgent-response applications such as military reconnaissance and disaster monitoring, where GCPs are not available.
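    The one-parameter division model mentioned above maps a distorted point x_d, taken relative to the distortion center, to an undistorted point x_u = x_d / (1 + lambda * r_d^2); a minimal sketch (the coefficient and center values in the usage are illustrative, not calibration results from the paper):

```python
import numpy as np

def undistort_division(pts, lam, center):
    """One-parameter division model: undistorted = center + d / (1 + lam*r^2),
    where d is the offset of each distorted point from the distortion center."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)  # squared distorted radius
    return center + d / (1.0 + lam * r2)
```

    With lam = 0 the mapping is the identity; a positive lam pulls points toward the center, the usual correction for barrel distortion.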

  11. QUALIFICATION OF CLOSE RANGE PHOTOGRAMMETRY CAMERAS BY AVERAGE IMAGE COORDINATES RMS ERROR VS. OBJECT DISTANCE FUNCTION

    Microsoft Academic Search

    K. Fekete; P. Schrott

    In this publication, the concept of image-coordinate RMS error derived from the average object-side RMS error is introduced. In the course of the derivation, data on network geometry and redundancy were taken into consideration; thereby, camera output for a given object distance is characterized by this quantity independently of the shooting arrangement. If this value is determined for several object distances,

  12. A software architecture for image acquisition and camera control in an active computer vision system

    Microsoft Academic Search

    Siniša Šegvić; Vladimir Stanisavljević; Zoran Kalafatić

    2000-01-01

    In a system for active computer vision, there is an intrinsic need for interfacing two different kinds of specific hardware devices: image digitizer and controllable camera. The architecture of software components responsible for these interfaces must be carefully designed in order to achieve their scalability, testability, maintainability, reusability and portability. Hardware abstraction is a major concern in this context since

  13. Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging

    E-print Network

    California at Santa Barbara, University of

    Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging Ramesh Raskar Kar-Han Tan Mitsubishi Electric Research Labs (MERL) Rogerio Feris UC Santa Barbara Jingyi Yu MIT Matthew Turk UC Santa Barbara Figure 1: (a) A photo of a car engine (b) Stylized rendering

  14. On the sensitivity analysis of camera calibration from images of spheres

    Microsoft Academic Search

    Yan Lu; Shahram Payandeh

    2010-01-01

    This paper presents a novel sensitivity analysis of camera calibration from images of spheres. We improve the accuracy of a conic matrix and hence the accuracy of calibration results by eliminating the ambiguity of the conic orientation that arises from the nature of a circular-ellipse. In addition, relationships between the difference in length of the long and short axes of

  15. A survey of Martian dust devil activity using Mars Global Surveyor Mars Orbiter Camera images

    Microsoft Academic Search

    Jenny A. Fisher; Mark I. Richardson; Claire E. Newman; Mark A. Szwast; Chelsea Graf; Shabari Basu; Shawn P. Ewald; Anthony D. Toigo; R. John Wilson

    2005-01-01

    A survey of dust devils using the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide- and narrow-angle (WA and NA) images has been undertaken. The survey comprises two parts: (1) sampling of nine broad regions from September 1997 to July 2001 and (2) a focused seasonal monitoring of variability in the Amazonis region, an active dust devil site, from

  16. Model-Based Pose Estimation of 3-D Objects from Camera Images Using Neural Networks

    Microsoft Academic Search

    Stefan Winkler

    1996-01-01

    Machines need the ability to determine the pose of objects in their environment in order to be able to reliably and intelligently interact with them. This thesis investigates neural network approaches to model-based object pose estimation from camera images. Kohonen maps and some of their variations are studied for this purpose. It is shown that the performance of these networks depends heavily on the

  17. Real-time pose estimation of 3D objects from camera images using neural networks

    Microsoft Academic Search

    P. Wunsch; S. Winkler; G. Hirzinger

    1997-01-01

    This paper deals with the problem of obtaining a rough estimate of three dimensional object position and orientation from a single two dimensional camera image. Such an estimate is required by most 3-D to 2-D registration and tracking methods that can efficiently refine an initial value by numerical optimization to precisely recover 3-D pose. However the analytic computation of an

  18. Cameras for Stereo Panoramic Imaging Shmuel Peleg Yael Pritch Moshe Ben-Ezra

    E-print Network

    Peleg, Shmuel

    as those used with the rotating cameras. Such a mirror enables the capture of stereo panoramic movies desired direction; (iii) allow free movement. Stereo Panoramas [6, 5, 10, 14] use a new scene to image stereo panoramic images, it was impossible to generate video-rate stereo panoramic movies. In this paper

  19. Camera Calibration and 3D Scene Reconstruction from image sequence and rotation sensor data

    E-print Network

    Frahm, Jan-Michael

    a metric reconstruction. During the last decade we have seen a lot of progress in camera self-calibration methods have been developed. Methods for the calibration of rotating cameras with unknown and varying image analysis and external rotation information for self-calibration. There is a lot to be done in this

  20. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    NASA Astrophysics Data System (ADS)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies as an input device in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most of the conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed using a single monocular video camera. The proposed algorithm works under the condition of fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center, which depends on the gazing direction, by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface and verify its performance through key-typing experiments with a visual keyboard on a display.
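    Once the iris centers are found, the gaze computation reduces to a Euclidean distance and a simple eyeball model; a hedged sketch (the arcsin form and all names are our illustration of the idea, not the authors' exact formula):

```python
import numpy as np

def gaze_angle(iris_center, ref_center, eyeball_radius_px):
    """Gaze angle (degrees) from the Euclidean displacement of the iris center
    relative to the calibration reference frame, assuming the iris moves on a
    sphere of the given radius (in pixels)."""
    d = np.linalg.norm(np.asarray(iris_center) - np.asarray(ref_center))
    return np.degrees(np.arcsin(min(d / eyeball_radius_px, 1.0)))
```

    The per-user calibration described in the abstract would amount to adjusting the effective eyeball radius as a function of gaze direction.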

  1. A Three-dimensional Camera: Development and Applications of a Three-dimensional Image Measurement System

    NASA Astrophysics Data System (ADS)

    Lu, Cunwei; Kamitomo, Hiroya; Sun, Ke; Tsujino, Kazuhiro; Cho, Genki

    Three-dimensional (3-D) image measurement is a technique that uses a digital camera to determine the shape and dimensions of the surface of an object. Although it has been studied for a long time, various problems still remain to be solved for practical applications. The goal of our research is to solve these problems and to develop a 3-D camera that can be used for practical 3-D image measurements. This paper analyzes the problems associated with the conventional technology and introduces development goals for the new 3-D camera. The key techniques of this 3-D camera are explained, including techniques for optimizing the intensity-modulation pattern projection, controlling the projection pattern intensity, determining the projection position, and controlling the stripe period. The system is evaluated and some examples of applications are given. The proposed 3-D camera can automatically adjust for variations in an object's size, form, surface color, and reflection characteristics and it can measure non-stationary objects. Consequently, it has the potential to be used in a wide range of applications including product quality control, human measurement, and face recognition.

  2. Chandra High Resolution Camera Imaging of GRS 1758-258

    E-print Network

    W. A. Heindl; D. M. Smith

    2002-08-19

    We observed the "micro-quasar" GRS 1758-258 four times with Chandra. Two HRC-I observations were made in 2000 September-October spanning an intermediate-to-hard spectral transition (identified with RXTE). Another HRC-I and an ACIS/HETG observation were made in 2001 March following a hard-to-soft transition to a very low flux state. Based on the three HRC images and the HETG zero order image, the accurate position (J2000) of the X-ray source is RA = 18h 01m 12.39s, Dec = -25d 44m 36.1s (90% confidence radius = 0".45), consistent with the purported variable radio counterpart. All three HRC images are consistent with GRS 1758-258 being a point source, indicating that any bright jet is less than ~1 light-month in projected length, assuming a distance of 8.5 kpc.

  3. High frame rate CCD cameras with fast optical shutters for military and medical imaging applications

    SciTech Connect

    King, N.S.P.; Albright, K.; Jaramillo, S.A.; McDonald, T.E.; Yates, G.J. [Los Alamos National Lab., NM (United States); Turko, B.T. [Lawrence Berkeley Lab., CA (United States)

    1994-09-01

    Los Alamos National Laboratory has designed and prototyped high-frame rate intensified/shuttered Charge-Coupled-Device (CCD) cameras capable of operating at kilohertz frame rates (non-interlaced mode) with optical shutters capable of acquiring nanosecond-to-microsecond exposures each frame. These cameras utilize an Interline Transfer CCD, Loral Fairchild CCD-222 with 244 × 380 pixels operated at pixel rates approaching 100 MHz. Initial prototype designs demonstrated single-port serial readout rates exceeding 3.97 kHz with greater than 5 lp/mm spatial resolution at shutter speeds as short as 5 ns. Readout was achieved by using a truncated format of 128 × 128 pixels by partial masking of the CCD and then subclocking the array at approximately 65 MHz pixel rate. Shuttering was accomplished with a proximity focused microchannel plate (MCP) image intensifier (MCPII) that incorporated a high strip current MCP and a design modification for high-speed stripline gating geometry to provide both fast shuttering and high repetition rate capabilities. Later camera designs use a close-packed quadruple head geometry fabricated using an array of four separate CCDs (pseudo 4-port device). This design provides four video outputs with optional parallel or time-phased sequential readout modes. The quad head format was designed with flexibility for coupling to various image intensifier configurations, including individual intensifiers for each CCD imager, a single intensifier with fiber optic or lens/prism coupled fanout of the input image to be shared by the four CCD imagers, or a large diameter phosphor screen of a gateable framing type intensifier for time sequential relaying of a complete new input image to each CCD imager. Camera designs and their potential use in ongoing military and medical time-resolved imaging applications are discussed.

  4. The iQID camera: An ionizing-radiation quantum imaging detector

    PubMed Central

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector’s response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, its energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.
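    The event-by-event position estimate can be sketched with intensity-weighted centroiding of a single scintillation flash (one standard estimator; the function name and threshold are illustrative, not the iQID processing chain):

```python
import numpy as np

def event_centroid(frame, threshold):
    """Estimate the spatial location of one scintillation event as the
    intensity-weighted centroid of the above-threshold pixels in the frame."""
    w = np.where(frame > threshold, frame, 0.0).astype(float)
    total = w.sum()
    r, c = np.meshgrid(np.arange(frame.shape[0]),
                       np.arange(frame.shape[1]), indexing="ij")
    return (r * w).sum() / total, (c * w).sum() / total
```

    Summing the same above-threshold weights also gives a crude per-event energy estimate, which is how modest energy discrimination can come from the same frame data.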

  5. Camera Committee 

    E-print Network

    Unknown

    2011-08-17

    objects on stereoscopic still video images. Digital picture elements (pixels) were used as units of measurement. A scale model was devised to emulate low altitude videography. Camera distance was set at 1500 cm to simulate a flight altitude of 1500 feet... above ground level. Accordingly, the model was designed for rods 40 to 100 cm long to represent poles measuring 40 to 100 feet in height. Absolute orientation of each stereoscopic image was obtained by surveying each nadir, camera location and camera...

  6. Real-time full-field photoacoustic imaging using an ultrasonic camera.

    PubMed

    Balogun, Oluwaseyi; Regez, Brad; Zhang, Hao F; Krishnaswamy, Sridhar

    2010-01-01

    A photoacoustic imaging system that incorporates a commercial ultrasonic camera for real-time imaging of two-dimensional (2-D) projection planes in tissue at video rate (30 Hz) is presented. The system uses a Q-switched frequency-doubled Nd:YAG pulsed laser for photoacoustic generation. The ultrasonic camera consists of a 2-D 12 x 12 mm CCD chip with 120 x 120 piezoelectric sensing elements used for detecting the photoacoustic pressure distribution radiated from the target. An ultrasonic lens system is placed in front of the chip to collect the incoming photoacoustic waves, providing the ability for focusing and imaging at different depths. Compared with other existing photoacoustic imaging techniques, the camera-based system is attractive because it is relatively inexpensive and compact, and it can be tailored for real-time clinical imaging applications. Experimental results detailing the real-time photoacoustic imaging of rubber strings and buried absorbing targets in chicken breast tissue are presented, and the spatial resolution of the system is quantified. PMID:20459240

  7. High-speed camera with real time processing for frequency domain imaging

    PubMed Central

    Shia, Victor; Watt, David; Faris, Gregory W.

    2011-01-01

    We describe a high-speed camera system for frequency domain imaging suitable for applications such as in vivo diffuse optical imaging and fluorescence lifetime imaging. 14-bit images are acquired at 2 gigapixels per second and analyzed with real-time pipeline processing using field programmable gate arrays (FPGAs). Performance of the camera system has been tested both for RF-modulated laser imaging in combination with a gain-modulated image intensifier and a simpler system based upon an LED light source. System amplitude and phase noise are measured and compared against theoretical expressions in the shot noise limit presented for different frequency domain configurations. We show the camera itself is capable of shot noise limited performance for amplitude and phase in as little as 3 ms, and when used in combination with the intensifier the noise levels are nearly shot noise limited. The best phase noise in a single pixel is 0.04 degrees for a 1 s integration time. PMID:21750770
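    The amplitude and phase estimates underlying the noise analysis can be sketched with the standard homodyne four-bucket formula (four samples taken 90 degrees apart; this is the textbook estimator, not the camera's FPGA pipeline):

```python
import numpy as np

def amp_phase(I0, I1, I2, I3):
    """Amplitude and phase of a modulated signal from four samples spaced
    90 degrees apart: I_k = B + A*cos(phi + k*pi/2)."""
    phase = np.arctan2(I3 - I1, I0 - I2)          # 2A*sin(phi) vs 2A*cos(phi)
    amp = 0.5 * np.hypot(I3 - I1, I0 - I2)        # recovers A
    return amp, phase
```

    The DC offset B cancels in both differences, which is why the estimator is insensitive to background light level.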

  8. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    Microsoft Academic Search

    A. Schenk; B. M. Csatho; S. Nagarajan

    2010-01-01

    The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images

  9. Single-Quantum Dot Imaging with a Photon Counting Camera

    Microsoft Academic Search

    X. Michalet; R. A. Colyer; J. Antelman; O. H. W. Siegmund; A. Tremsin; J. V. Vallerga; S. Weiss

    2009-01-01

    The expanding spectrum of applications of single-molecule fluorescence imaging ranges from fundamental in vitro studies of biomolecular activity to tracking of receptors in live cells. The success of these assays has relied on pro- gress in organic and non-organic fluorescent probe developments as well as improvements in the sensitivity of light detec- tors. We describe a new type of detector

  10. Engineering performance of IRIS2 infrared imaging camera and spectrograph

    Microsoft Academic Search

    Vladimir Churilov; John Dawson; Greg A. Smith; Lew Waller; John D. Whittard; Roger Haynes; Allan Lankshear; Stuart D. Ryder; Chris G. Tinney

    2004-01-01

    IRIS2, the infrared imager and spectrograph for the Cassegrain focus of the Anglo Australian Telescope, has been in service since October 2001. IRIS2 incorporated many novel features, including multiple cryogenic multislit masks, a dual chambered vacuum vessel (the smaller chamber used to reduce thermal cycle time required to change sets of multislit masks), encoded cryogenic wheel drives with controlled backlash,

  11. High etendue UV camera for simultaneous four-color imaging on a single detector.

    PubMed

    Hicks, Brian A; Danowski, Meredith E; Martel, Jason F; Cook, Timothy A

    2013-07-20

    We describe a high etendue (0.12 cm(2) sr) camera that, without moving parts, simultaneously images four ultraviolet bands centered at 140, 175, 215, and 255 nm on a single detector into a minimum of ~7500 resolution elements. In addition to being an efficient way to make color photometric measurements of a static scene, the camera described here enables detection of spatial and temporal information that can be used to reveal energy dependent physical phenomena to complement the capability of other instruments ranging in complexity from filter wheels to integral field spectrographs. PMID:23872766

  12. MONICA: a compact, portable dual gamma camera system for mouse whole-body imaging

    SciTech Connect

    Choyke, Peter L.; Xia, Wenze; Seidel, Jurgen; Kakareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.

    2010-04-01

Introduction We describe a compact, portable dual-gamma camera system (named "MONICA" for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed "looking up" through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1 cm), 2.2 mm; sensitivity, 149 cps (counts per second)/MBq (5.5 cps/μCi); energy resolution (FWHM, full width at half maximum), 10.8%; count rate linearity (count rate vs. activity), r2=0.99 for 0–185 MBq (0–5 mCi) in the field of view (FOV); spatial uniformity, <3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-min images acquired throughout the 168-h study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle, where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g., limited imaging space, portability and, potentially, cost are important.

  13. High image quality sub 100 picosecond gated framing camera development

    SciTech Connect

    Price, R.H.; Wiedwald, J.D.

    1983-11-17

    A major challenge for laser fusion is the study of the symmetry and hydrodynamic stability of imploding fuel capsules. Framed x-radiographs of 10-100 ps duration, excellent image quality, minimum geometrical distortion (< 1%), dynamic range greater than 1000, and more than 200 x 200 pixels are required for this application. Recent progress on a gated proximity focused intensifier which meets these requirements is presented.

  14. Accurate quantification of 131I distribution by gamma camera imaging.

    PubMed

    Green, A J; Dewhurst, S E; Begent, R H; Bagshawe, K D; Riggs, S J

    1990-01-01

The development of targeted therapy requires that the concentration of the therapeutic agent can be estimated in target and normal tissues. Single photon emission tomography (SPET), with and without scatter correction, and planar imaging using 131I have been compared to develop a method for investigation of targeted therapy. Compton scatter was investigated using line spread functions in air and water; these data were used to set a second peak, adjacent to the photopeak, for scatter correction. The system was calibrated with an elliptical phantom containing sources in background activity of various intensities. Scatter-corrected reconstructions gave accurate estimates of activity in the sources regardless of background activity. For planar scanning and SPET without scatter correction there was an overestimate of activity in the source of 290% and 40%, respectively. The validity of this method was confirmed in patients by comparing activity in the cardiac ventricles measured by SPET with scatter correction with that in a simultaneous blood sample. A coefficient of correlation of 0.955 was achieved with 25 data points. SPET with scatter correction was compared with planar imaging in measuring activity in the liver and spleen of patients receiving 75 mCi 131I-antibody to CEA intravenously. Planar imaging gave significantly higher values than SPET for the spleen (t = 5.4, P less than 0.001 by the paired t-test) but no significant difference for the liver. SPET with scatter correction forms a basis for an improved technique of quantifying the targeting efficiency. PMID:2351184
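The dual-energy-window correction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is hypothetical, and the scaling constant `k` stands in for the calibration derived from the line-spread-function measurements.

```python
def scatter_corrected_counts(photopeak_counts: float, scatter_counts: float,
                             k: float = 1.0) -> float:
    # Counts in a second window adjacent to the photopeak estimate the
    # Compton-scatter contamination inside the photopeak window; subtracting
    # a scaled copy of that estimate leaves (approximately) unscattered events.
    # k is an assumed calibration constant, not a value from the paper.
    return max(photopeak_counts - k * scatter_counts, 0.0)

# e.g., 1000 photopeak counts with 290 scatter-window counts
corrected = scatter_corrected_counts(1000.0, 290.0, k=1.0)
```

Without such a correction, scatter inflates the apparent source activity, which is consistent with the 290% planar and 40% uncorrected-SPET overestimates reported above.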

  15. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continue to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.
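The authentication idea above can be sketched with Python's standard library. Note the hedge: the paper describes public-key digital signatures, where the camera signs with an embedded private key and anyone can verify with the public key; the HMAC below is only a stand-in to show the tamper-detection property, and the function names are illustrative.

```python
import hashlib
import hmac

def sign_image(image_bytes: bytes, key: bytes) -> str:
    # Tag computed over the raw image data at capture time. An HMAC stands in
    # for the asymmetric signature in the paper; with a public-key scheme,
    # verification would not require sharing a secret.
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, key: bytes, tag: str) -> bool:
    # Verification fails if even a single bit of the image was altered.
    return hmac.compare_digest(sign_image(image_bytes, key), tag)
```

Flipping a single bit of the image bytes changes the tag completely, so any post-capture manipulation is detectable even when it is visually undetectable.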

  16. Fast image super-resolution for a dual-resolution camera

    NASA Astrophysics Data System (ADS)

    Chen, Kuo; Chen, Yueting; Feng, Huajun; Xu, Zhihai

    2015-06-01

High spatial resolution and wide field of view (FOV) can be satisfied simultaneously with a dual-sensor camera. A special kind of dual-sensor camera named a dual-resolution camera has been designed and manufactured; therefore, a high-resolution image with narrow FOV and another low-resolution image with wide FOV are captured in one shot. To generate a high-resolution image with wide FOV, a fast super-resolution reconstruction is proposed, which is composed of wavelet-based super-resolution and back projection. During wavelet-based super-resolution, the captured high-resolution image is used to learn the co-occurrence prior by a linear regression function. Finally, the low-resolution image is reconstructed based on the learnt co-occurrence prior. Simulation and real experiments are carried out, and three other common super-resolution algorithms are compared. The experimental results show that the proposed method reduces time cost significantly and achieves excellent performance with high PSNR and SSIM.
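The back-projection stage of such a pipeline can be sketched as iterative back projection: refine a high-resolution estimate until it re-projects to the observed low-resolution image. This is a generic sketch, not the paper's method; the box-average forward model and function names are assumptions.

```python
import numpy as np

def downsample(img, factor):
    # box-average downsampling: the assumed (hypothetical) forward model
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    # nearest-neighbour upsampling, used to lift residuals back to high resolution
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def iterative_back_projection(lr, factor=2, iters=10):
    hr = upsample(lr, factor).astype(float)      # initial high-resolution estimate
    for _ in range(iters):
        residual = lr - downsample(hr, factor)   # error in the low-resolution domain
        hr += upsample(residual, factor)         # back-project the error
    return hr
```

In the dual-resolution setting, the wavelet/co-occurrence prior would supply a better initial `hr` than plain upsampling; back projection then enforces consistency with the wide-FOV observation.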

  17. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
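The closed-form reconstruction for two orthogonally aligned cameras can be illustrated with an idealized sketch. All names and the pinhole-style geometry are assumptions for illustration; the actual technique also includes the 3D camera calibration, overlap decomposition, tracking, and stereo matching phases described above.

```python
import numpy as np

def reconstruct_3d(front_xy, side_zy):
    # Front camera observes (x, y); the orthogonally mounted side camera
    # observes (z, y). The shared y coordinate is seen by both views, so the
    # two estimates are averaged.
    x, y_front = front_xy
    z, y_side = side_zy
    return np.array([x, 0.5 * (y_front + y_side), z])

def velocity(p0, p1, dt):
    # finite-difference velocity of one tracked particle between two frames
    return (p1 - p0) / dt

p0 = reconstruct_3d((1.0, 2.0), (3.0, 2.0))   # matched particle at frame t
p1 = reconstruct_3d((1.5, 2.2), (2.8, 2.2))   # same particle one frame later
v = velocity(p0, p1, dt=0.1)                  # ≈ [5.0, 2.0, -2.0]
```

Repeating this for every matched tracer particle yields the full-field 3D velocity profile.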

  18. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. 
The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.
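The polynomial regression idea above can be sketched as follows. This is a minimal illustration, not the authors' code: the second-order polynomial expansion, the function names, and the use of plain `lstsq` (in place of the paper's QR-stabilized solve and PLSR variant) are all assumptions.

```python
import numpy as np

def poly_features(rgb):
    # second-order polynomial expansion of an (N, 3) array of RGB reflectances:
    # [1, R, G, B, R^2, G^2, B^2, RG, RB, GB] (the order is a free parameter)
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, r * b, g * b], axis=1)

def fit_recovery_model(rgb_train, spectra_train):
    # least-squares fit mapping polynomial RGB features to full spectra
    # (spectra_train has shape (N, n_bands), one row per registered pixel)
    X = poly_features(rgb_train)
    W, *_ = np.linalg.lstsq(X, spectra_train, rcond=None)
    return W

def recover_spectra(rgb, W):
    # predict an n_bands spectrum for each RGB pixel
    return poly_features(rgb) @ W
```

Training pairs come from the spatially registered color and hyperspectral pixels; at inference, only the color image is needed, which is the cost advantage the study targets.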

  19. Automatic Generation of Passer-by Record Images using Internet Camera

    NASA Astrophysics Data System (ADS)

    Terada, Kenji; Atsuta, Koji

Recently, many brutal crimes have shocked us, while the proportion of crimes solved has declined. The importance of security and self-defense has therefore increased more and more. As one form of self-defense, many surveillance cameras are installed in buildings, homes and offices. But even when we want to detect a suspicious person, we cannot check the surveillance videos immediately, because a huge number of image sequences is stored in each video system. In this paper, we propose an automatic method of generating passer-by record images using an internet camera. In the first step, passers-by are recognized in an image sequence obtained from the internet camera. Our method classifies the subject region into individual persons by using the space-time image, from which we also obtain the time, direction and number of passers-by. Next, the method extracts five characteristics: the center of gravity, the position of the person's head, the brightness, the size, and the shape of the person. Finally, an image of each person is selected from the image sequence by integrating the five characteristics and is added to the passer-by record image. Experimental results using a simple experimental system are also reported, which indicate the effectiveness of the proposed method: in most scenes, every person was detected and the passer-by record image was generated.

  20. Engineering performance of IRIS2 infrared imaging camera and spectrograph

    NASA Astrophysics Data System (ADS)

    Churilov, Vladimir; Dawson, John; Smith, Greg A.; Waller, Lew; Whittard, John D.; Haynes, Roger; Lankshear, Allan; Ryder, Stuart D.; Tinney, Chris G.

    2004-09-01

    IRIS2, the infrared imager and spectrograph for the Cassegrain focus of the Anglo Australian Telescope, has been in service since October 2001. IRIS2 incorporated many novel features, including multiple cryogenic multislit masks, a dual chambered vacuum vessel (the smaller chamber used to reduce thermal cycle time required to change sets of multislit masks), encoded cryogenic wheel drives with controlled backlash, a deflection compensating structure, and use of teflon impregnated hard anodizing for gear lubrication at low temperatures. Other noteworthy features were: swaged foil thermal link terminations, the pupil imager, the detector focus mechanism, phased getter cycling to prevent detector contamination, and a flow-through LN2 precooling system. The instrument control electronics was designed to allow accurate positioning of the internal mechanisms with minimal generation of heat. The detector controller was based on the AAO2 CCD controller, adapted for use on the HAWAII1 detector (1024 x 1024 pixels) and is achieving low noise and high performance. We describe features of the instrument design, the problems encountered and the development work required to bring them into operation, and their performance in service.

  1. Real-time analysis of laser beams by simultaneous imaging on a single camera chip

    NASA Astrophysics Data System (ADS)

    Piehler, S.; Boley, M.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

The fundamental parameters of a laser beam, such as the exact position and size of the focus or the beam quality factor M², yield vital information both for laser developers and end-users. However, each of these parameters can change significantly on a short time scale due to thermally induced effects in the processing optics or in the laser source itself, leading to process instabilities and non-reproducible results. In order to monitor the transient behavior of these effects, we have developed a camera-based measurement system which enables full laser beam characterization online. A novel monolithic beam splitter has been designed which generates a 2D array of images on a single camera chip, each of which corresponds to an intensity cross section of the beam along the propagation axis, separated by a well-defined spacing. Thus, using the full area of the camera chip, a large number of measurement planes is achieved, leading to a measurement range sufficient for a full beam characterization conforming to ISO 11146 for a broad range of beam parameters of the incoming beam. The exact beam diameters in each plane are derived by calculation of the 2nd-order intensity moments of the individual intensity slices. The processing time needed to carry out both the background filtering and the image processing operations for the full analysis of a single camera image is in the range of a few milliseconds. Hence, the measurement frequency of our system is mainly limited by the frame rate of the camera.
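The second-moment beam-width calculation used in each plane can be sketched directly from the ISO 11146 definition (D4σ = 4 × the square root of the second central moment of the intensity distribution). The function name and pixel-pitch parameter are illustrative assumptions.

```python
import numpy as np

def beam_widths_4sigma(intensity, pixel_pitch=1.0):
    # ISO 11146 second-moment beam widths from one intensity cross section
    y, x = np.indices(intensity.shape)
    total = intensity.sum()
    cx = (x * intensity).sum() / total                    # first moments: centroid
    cy = (y * intensity).sum() / total
    var_x = (((x - cx) ** 2) * intensity).sum() / total   # second central moments
    var_y = (((y - cy) ** 2) * intensity).sum() / total
    return 4 * pixel_pitch * np.sqrt(var_x), 4 * pixel_pitch * np.sqrt(var_y)
```

Applying this to every sub-image in the 2D array gives the diameter along the propagation axis; ISO 11146 then obtains the focus position and M² from a hyperbolic fit to those diameters.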

  2. Optimal Design of Anger Camera for Bremsstrahlung Imaging: Monte Carlo Evaluation

    PubMed Central

    Walrand, Stephan; Hesse, Michel; Wojcik, Randy; Lhommel, Renaud; Jamar, François

    2014-01-01

A conventional Anger camera is not adapted to bremsstrahlung imaging and, as a result, even using a reduced energy acquisition window, geometric x-rays represent <15% of the recorded events. This increases noise, limits the contrast, and reduces the quantification accuracy. Monte Carlo (MC) simulations of energy spectra showed that a camera based on a 30-mm-thick BGO crystal and equipped with a high-energy pinhole collimator is well adapted to bremsstrahlung imaging. The total scatter contamination is reduced by a factor of 10 versus a conventional NaI camera equipped with a high-energy parallel-hole collimator, enabling acquisition using an extended energy window ranging from 50 to 350 keV. By using the recorded event energy in the reconstruction method, a shorter acquisition time and reduced orbit range will be usable, allowing the design of a simplified mobile gantry. This is more convenient for use in a busy catheterization room. After injecting a safe activity, a fast single photon emission computed tomography could be performed without moving the catheter tip in order to assess the liver dosimetry and estimate the additional safe activity that could still be injected. Further long-running MC simulations of realistic acquisitions will allow assessing the quantification capability of such a system. Simultaneously, a dedicated bremsstrahlung prototype camera reusing PMT–BGO blocks coming from a retired PET system is currently under design for further evaluation. PMID:24982849

  3. Influence of electron dose rate on electron counting images recorded with the K2 camera

    PubMed Central

    Li, Xueming; Zheng, Shawn Q.; Egami, Kiyoshi; Agard, David A.; Cheng, Yifan

    2013-01-01

    A recent technological breakthrough in electron cryomicroscopy (cryoEM) is the development of direct electron detection cameras for data acquisition. By bypassing the traditional phosphor scintillator and fiber optic coupling, these cameras have greatly enhanced sensitivity and detective quantum efficiency (DQE). Of the three currently available commercial cameras, the Gatan K2 Summit was designed specifically for counting individual electron events. Counting further enhances the DQE, allows for practical doubling of detector resolution and eliminates noise arising from the variable deposition of energy by each primary electron. While counting has many advantages, undercounting of electrons happens when more than one electron strikes the same area of the detector within the analog readout period (coincidence loss), which influences image quality. In this work, we characterized the K2 Summit in electron counting mode, and studied the relationship of dose rate and coincidence loss and its influence on the quality of counted images. We found that coincidence loss reduces low frequency amplitudes but has no significant influence on the signal-to-noise ratio of the recorded image. It also has little influence on high frequency signals. Images of frozen hydrated archaeal 20S proteasome (~700 kDa, D7 symmetry) recorded at the optimal dose rate retained both high-resolution signal and low-resolution contrast and enabled calculating a 3.6 Å three-dimensional reconstruction from only 10,000 particles. PMID:23968652
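The coincidence-loss effect described above can be quantified with an idealized per-pixel Poisson model (an illustrative sketch, not the K2's exact behaviour): if a counting pixel registers at most one event per readout frame, the counted fraction of electrons is (1 − e^−λ)/λ for an arrival rate of λ electrons per pixel per frame.

```python
import math

def coincidence_loss(electrons_per_pixel_per_frame):
    # With Poisson arrivals at rate lam per pixel per readout frame, a pixel
    # struck k >= 1 times still registers a single count, so the counted rate
    # is P(k >= 1) = 1 - exp(-lam) and the lost fraction is
    # 1 - (1 - exp(-lam)) / lam. (Idealized model; the real detector also
    # loses events across neighbouring pixels.)
    lam = electrons_per_pixel_per_frame
    return 1.0 - (1.0 - math.exp(-lam)) / lam
```

At low rates the loss is roughly λ/2 and grows steeply with dose rate, which is why an optimal (moderate) dose rate exists: high enough for throughput, low enough that coincidence loss does not distort low-frequency amplitudes.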

  4. New design of a gamma camera detector with reduced edge effect for breast imaging

    NASA Astrophysics Data System (ADS)

    Yeon Hwang, Ji; Lee, Seung-Jae; Baek, Cheol-Ha; Hyun Kim, Kwang; Hyun Chung, Yong

    2011-05-01

    In recent years, there has been a growing interest in developing small gamma cameras dedicated to breast imaging. We designed a new detector with trapezoidal shape to expand the field of view (FOV) of camera without increasing its dimensions. To find optimal parameters, images of point sources at the edge area as functions of the angle and optical treatment of crystal side surface were simulated by using a DETECT2000. Our detector employs monolithic CsI(Tl) with dimensions of 48.0×48.0×6.0 mm coupled to an array of photo-sensors. Side surfaces of crystal were treated with three different surface finishes: black absorber, metal reflector and white reflector. The trapezoidal angle varied from 45° to 90° in steps of 15°. Gamma events were generated on 15 evenly spaced points with 1.0 mm spacing in the X-axis starting 1.0 mm away from the side surface. Ten thousand gamma events were simulated at each location and images were formed by calculating the Anger-logic. The results demonstrated that all the 15 points could be identified only for the crystal with trapezoidal shape having 45° angle and white reflector on the side surface. In conclusion, our new detector proved to be a reliable design to expand the FOV of small gamma camera for breast imaging.
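The Anger-logic positioning used to form the simulated images can be sketched as a signal-weighted centroid over the photo-sensor array (the function name and the tiny 2×2 example are illustrative). Distortion of this centroid near the crystal edge is precisely the edge effect the trapezoidal crystal is designed to reduce.

```python
import numpy as np

def anger_position(signals, sensor_x, sensor_y):
    # classic Anger logic: the event position is the centroid of the
    # photo-sensor outputs, weighted by the light signal each sensor collects
    s = signals.sum()
    return (signals * sensor_x).sum() / s, (signals * sensor_y).sum() / s

# 2x2 sensor patch, scintillation light shared mostly by the right-hand sensors
sig = np.array([1.0, 3.0, 1.0, 3.0])
sx = np.array([0.0, 1.0, 0.0, 1.0])
sy = np.array([0.0, 0.0, 1.0, 1.0])
x, y = anger_position(sig, sx, sy)   # x = 0.75, y = 0.5
```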

  5. Influence of electron dose rate on electron counting images recorded with the K2 camera.

    PubMed

    Li, Xueming; Zheng, Shawn Q; Egami, Kiyoshi; Agard, David A; Cheng, Yifan

    2013-11-01

    A recent technological breakthrough in electron cryomicroscopy (cryoEM) is the development of direct electron detection cameras for data acquisition. By bypassing the traditional phosphor scintillator and fiber optic coupling, these cameras have greatly enhanced sensitivity and detective quantum efficiency (DQE). Of the three currently available commercial cameras, the Gatan K2 Summit was designed specifically for counting individual electron events. Counting further enhances the DQE, allows for practical doubling of detector resolution and eliminates noise arising from the variable deposition of energy by each primary electron. While counting has many advantages, undercounting of electrons happens when more than one electron strikes the same area of the detector within the analog readout period (coincidence loss), which influences image quality. In this work, we characterized the K2 Summit in electron counting mode, and studied the relationship of dose rate and coincidence loss and its influence on the quality of counted images. We found that coincidence loss reduces low frequency amplitudes but has no significant influence on the signal-to-noise ratio of the recorded image. It also has little influence on high frequency signals. Images of frozen hydrated archaeal 20S proteasome (~700 kDa, D7 symmetry) recorded at the optimal dose rate retained both high-resolution signal and low-resolution contrast and enabled calculating a 3.6 Å three-dimensional reconstruction from only 10,000 particles. PMID:23968652

  6. Images of Community/Community of Images: Environmental Knowing Through Camera Phones

    Microsoft Academic Search

    Fumitoshi Kato

    There is always a mobile phone within an individual's reach, and information necessary in everyday life is now being stored inside the phone. We are now communicating through exchanging photos over camera phones. From the standpoint of developing a qualitative research method, a camera phone can be understood as a new

  7. Autofocusing technique based on image processing for remote-sensing camera

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Sun, Rong-chun; Xu, Shu-yan

    2008-03-01

    The key to the auto-focusing technique based on image processing is the selection of a focus measure that reflects image definition. Usually such measures assume that the images being compared show the same scene. For a remote-sensing camera working in linear CCD push-broom imaging mode, this assumption does not hold, because the scene changes from moment to moment, which complicates the selection of the focus measure. To evaluate image definition, a focus measure based on blur estimation is proposed for rough adjustment: it estimates the focused position from only two lens positions, which greatly shortens the auto-focusing time. Another evaluation function, based on edge sharpness, is then used to find the best imaging position within the remaining narrow range. Simulations show that the combination of the two measures offers both rapid response and high accuracy.
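An edge-sharpness evaluation function of the kind used for the fine-adjustment stage can be sketched with a gradient-energy (Tenengrad-style) metric. This is a common stand-in, not the paper's specific function; sharper focus produces stronger local gradients and hence a higher score.

```python
import numpy as np

def gradient_sharpness(img):
    # gradient-energy focus measure: mean squared horizontal and vertical
    # finite differences; larger values indicate sharper edges
    img = img.astype(float)
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()
```

In a focus sweep, the lens position maximizing this score is taken as best focus; the blur-estimation stage narrows the sweep range first so that only a few evaluations are needed.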

  8. 1024 1024 HgCdTe CMOS Camera for Infrared Imaging Magnetograph of Big Bear Solar Observatory

    E-print Network

1024 × 1024 HgCdTe CMOS Camera for Infrared Imaging Magnetograph of Big Bear Solar Observatory, with high spatial resolution, high spectral resolving power, and high magnetic sensitivity. As the detector of IRIM, the 1024 × 1024 HgCdTe TCM8600 CMOS camera manufactured by the Rockwell Scientific Company plays a very

  9. IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 2, NO. 4, OCTOBER 1993 481 Edge-Based 3-D Camera Motion Estimation

    E-print Network

    Zakhor, Avideh

    algorithms use the block matching algorithm (BMA) to model both the camera motion and local motion due that unlike local motion estimation, edge matching can be sufficient in estimating camera motion parameters conferencing, video telephony, medical imaging, and CD-ROM storage. The basic idea is to take advantage

  10. Streak camera coupled with a high-resolution x-ray crystal imager

    Microsoft Academic Search

    V. Serlin; M. Karasik; C. J. Pawley; A. N. Mostovych; S. P. Obenschain; Y. Aglitskiy

    2001-01-01

    An addition of a streak camera to the Nike monochromatic x-ray imaging system makes it possible to analyze continuous time behavior of mass variation, which is necessary to reveal the non-monotonic evolution of the processes under study. Backlighter energy of ~500 J is delivered to a silicon target, producing x-rays that backlight the main target for about 5 ns. The

  11. Update and image quality error budget for the LSST camera optical design

    Microsoft Academic Search

    Brian J. Bauman; Gordon Bowden; John Ku; Martin Nordby; Scot Olivier; Vincent Riot; Andrew Rasmussen; Lynn Seppala; Hong Xiao; Nadine Nurita; David Gilmore; Steven Kahn

    2010-01-01

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a refractive camera design with 3 lenses (0.69-1.55m) and a set of broadband filters/corrector lenses. Performance is excellent over a 9.6 square degree field and ultraviolet to near infrared wavelengths. We describe the image

  12. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  13. Star-field identification algorithm. [for implementation on CCD-based imaging camera

    NASA Technical Reports Server (NTRS)

    Scholl, M. S.

    1993-01-01

    A description of a new star-field identification algorithm that is suitable for implementation on CCD-based imaging cameras is presented. The minimum identifiable star pattern element consists of an oriented star triplet defined by three stars, their celestial coordinates, and their visual magnitudes. The algorithm incorporates tolerance to faulty input data, errors in the reference catalog, and instrument-induced systematic errors.

  14. Noise estimation from a single image taken by specific digital camera using a priori information

    NASA Astrophysics Data System (ADS)

    Ito, Hitomi; Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Motomura, Hideto; Miyake, Yoichi

    2008-01-01

    It is important to estimate the noise of a digital image quantitatively and efficiently for many applications such as noise removal, compression, feature extraction, pattern recognition, and image quality assessment. For these applications, it is necessary to estimate the noise accurately from a single image. Ce et al. proposed a method that uses a Bayesian MAP estimator for noise estimation. In this method, the noise level function (NLF), the standard deviation of the noise as a function of image intensity, is estimated from the input image itself. Many NLFs were generated by computer simulation to construct the prior information for the Bayesian MAP estimator. This prior was effective for accurate noise estimation but not sufficient for practical applications, since it did not reflect the variable characteristics of the individual camera, which depend on exposure and shutter speed. In this paper, we therefore propose a new method to construct prior information for a specific camera in order to improve the accuracy of noise estimation. To construct the noise prior, NLFs were measured and calculated from images captured under various conditions. We compared the accuracy of noise estimation between the proposed method and Ce's model. The results showed that our model improved the accuracy of noise estimation.
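An empirical noise level function can be sketched by binning pixels by local mean intensity and measuring the residual spread per bin. This is a simple stand-in for the Bayesian MAP estimation described above, with all names and the 3×3 local-mean choice being illustrative assumptions.

```python
import numpy as np

def noise_level_function(image, n_bins=16):
    # Empirical NLF: per-intensity-bin standard deviation of the residual
    # (pixel minus its 3x3 local mean). The residual is uncorrelated with
    # the local mean for additive noise, so each bin's spread estimates the
    # noise level at that intensity.
    img = image.astype(float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    residual = img - local_mean
    edges = np.linspace(local_mean.min(), local_mean.max() + 1e-9, n_bins + 1)
    bins = np.digitize(local_mean, edges) - 1
    return np.array([residual[bins == b].std() if np.any(bins == b) else 0.0
                     for b in range(n_bins)])
```

Collecting such NLF curves from images taken at each exposure and shutter-speed setting is one way to build the camera-specific prior the paper argues for.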

  15. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g., PET and SPECT, can functionally image the organs and potentially find cancerous tissues at early stages, but their application in diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few millimeters of CZT provide adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular in several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it can potentially detect prostate cancers at their early stages. The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  16. Feasibility of monitoring patient motion with opposed stereo infrared cameras during supine medical imaging

    NASA Astrophysics Data System (ADS)

    Beach, Richard D.; McNamara, Joseph E.; Terlecki, George; King, Michael A.

    2006-10-01

    Patient motion during single photon emission computed tomographic (SPECT) acquisition causes inconsistent projection data and reconstruction artifacts which can significantly affect diagnostic accuracy. We have investigated use of the Polaris stereo infrared motion-tracking system to track 6-Degrees-of-Freedom (6-DOF) motion of spherical reflectors (markers) on stretchy bands about the patient's chest and abdomen during cardiac SPECT imaging. The marker position information, obtained by opposed stereo infrared-camera systems, requires processing to correctly record tracked markers, and map Polaris co-ordinate data into the SPECT co-ordinate system. One stereo camera views the markers from the patient's head direction, and the other from the patient's foot direction. The need for opposed cameras is to overcome anatomical and geometrical limitations which sometimes prevent all markers from being seen by a single stereo camera. Both sets of marker data are required to compute rotational and translational 6-DOF motion of the patient which ultimately will be used for SPECT patient-motion corrections. The processing utilizes an algorithm involving least-squares fitting, to each other, of two 3-D point sets using singular value decomposition (SVD) resulting in the rotation matrix and translation of the rigid body centroid. We have previously demonstrated the ability to monitor multiple markers for twelve patients viewing from the foot end, and employed a neural network to separate the periodic respiratory motion component of marker motion from aperiodic body motion. We plan to initiate routine 6-DOF tracking of patient motion during SPECT imaging in the future, and are herein evaluating the feasibility of employing opposed stereo cameras.
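
    The SVD-based least-squares fit of two 3-D point sets mentioned above (often called the Arun/Kabsch method) can be sketched as follows; the function name and synthetic marker data are illustrative, not the authors' code.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q
    via SVD of the cross-covariance, as used for 6-DOF marker tracking."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp                              # translation of the centroid
    return R, t

# Check: recover a known rotation about z and a translation from 6 "markers".
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(1).normal(size=(6, 3))
Q = P @ R_true.T + t_true
R, t = rigid_fit(P, Q)
```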

  17. Monochromatic imaging camera for spectrally and spatially resolved optical emission spectroscopy

    SciTech Connect

    Hareland, W.A. [Sandia National Laboratories, Albuquerque, NM (United States)

    1994-12-31

    Spectrally and spatially resolved emissions have been measured from argon plasmas in an experimental radio-frequency plasma reactor. The monochromatic imaging camera records 2-dimensional images at a single wavelength of light, and the 2-dimensional images are treated by Abel inversion to produce 3-dimensional maps of single excited species in radio-frequency plasmas. Monochromatic images of argon were measured at a spectral bandwidth of 2.4 nm over the wavelength range from 394 to 912 nm. The spatial distribution of excited argon varies with excitation state: lower-energy argon (< 13 eV) is found throughout the plasma, whereas higher-energy argon is observed in and directly above the sheath in capacitively coupled discharges. Monochromatic imaging provides new optical diagnostics for measuring and monitoring plasmas.
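
    The Abel inversion step can be sketched with a simple onion-peeling discretization: the axisymmetric source is split into annuli, the projection is written as chord lengths times per-annulus emissivities, and the resulting triangular system is solved. This is one standard approach, not necessarily the authors' exact implementation.

```python
import numpy as np

def abel_invert(projection, dr=1.0):
    """Onion-peeling Abel inversion: recover a radial profile f(r) from
    its line-of-sight projection P(y), assuming an axisymmetric source."""
    n = len(projection)
    y = (np.arange(n) + 0.5) * dr                # line-of-sight offsets
    A = np.zeros((n, n))
    for i in range(n):                           # annulus i spans [i*dr, (i+1)*dr]
        r_in, r_out = i * dr, (i + 1) * dr
        outer = np.sqrt(np.maximum(r_out**2 - y**2, 0.0))
        inner = np.sqrt(np.maximum(r_in**2 - y**2, 0.0))
        A[:, i] = 2.0 * (outer - inner)          # chord length inside annulus
    return np.linalg.solve(A, np.asarray(projection, float))

# Round-trip check: a uniform disk of radius 5 projects to P(y) = 2*sqrt(25 - y^2).
n, dr = 20, 0.5
y = (np.arange(n) + 0.5) * dr
proj = 2.0 * np.sqrt(np.maximum(25.0 - y**2, 0.0))
f = abel_invert(proj, dr)   # ~1 inside r < 5, ~0 outside
```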

  18. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    Microsoft Academic Search

    Mark Robinson; Harald Hiesinger; Alfred McEwen; Brad Jolliff; Peter C. Thomas; Elizabeth Turtle; Eric Eliason; Mike Malin; A. Ravine; Ernest Bowman-Cisneros

    2010-01-01

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping

  19. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    SciTech Connect

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R; Smolkin, Mark E; Rehm, Patrice K; Majewski, Stan; Williams, Mark B

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc-99m sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe, followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. Mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma.

  20. The measurement of astronomical parallaxes with CCD imaging cameras on small telescopes

    SciTech Connect

    Ratcliff, S.J. (Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States)); Balonek, T.J. (Department of Physics and Astronomy, Colgate University, 13 Oak Dr., Hamilton, New York 13346 (United States)); Marschall, L.A. (Department of Physics, Gettysburg College, Gettysburg, Pennsylvania 17325 (United States)); DuPuy, D.L. (Department of Physics and Astronomy, Virginia Military Institute, Lexington, Virginia 24450 (United States)); Pennypacker, C.R. (Space Sciences Laboratory, University of California, Berkeley, California 94720 (United States)); Verma, R. (Department of Physics, Middlebury College, Middlebury, Vermont 05753 (United States)); Alexov, A. (Department of Astronomy, Wesleyan University, Middletown, Connecticut 06457 (United States)); Bonney, V. (Space Sciences Laboratory, University of California, Berkeley, California 94720 (United States))

    1993-03-01

    Small telescopes equipped with charge-coupled device (CCD) imaging cameras are well suited to introductory laboratory exercises in positional astronomy (astrometry). An elegant example is the determination of the parallax of extraterrestrial objects, such as asteroids. For laboratory exercises suitable for introductory students, the astronomical hardware needs are relatively modest, and, under the best circumstances, the analysis requires little more than arithmetic and a microcomputer with image display capabilities. Results from the first such coordinated parallax observations of asteroids ever made are presented. In addition, procedures for several related experiments, involving single-site observations and/or parallaxes of earth-orbiting artificial satellites, are outlined.
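
    The arithmetic behind such an exercise is simple triangulation: the distance is the baseline between the two observing sites divided by the tangent of the measured parallax angle. The numbers below are hypothetical, chosen only to show the scale of the calculation.

```python
import math

def parallax_distance_km(baseline_km, parallax_arcsec):
    """Distance from the apparent angular shift (parallax) of an object
    observed simultaneously from two sites a known baseline apart."""
    theta = math.radians(parallax_arcsec / 3600.0)   # arcsec -> radians
    return baseline_km / math.tan(theta)

# Hypothetical example: two sites 1000 km apart see an asteroid displaced
# by 4 arcseconds against the background stars.
distance = parallax_distance_km(1000.0, 4.0)   # roughly 5.2e7 km
```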

  1. Fast calculation of bokeh image structure in camera lenses with multiple aspheric surfaces

    NASA Astrophysics Data System (ADS)

    Sivokon, V. P.; Thorpe, M. D.

    2014-12-01

    Three different approaches to the calculation of the internal structure of the bokeh image in camera lenses with two aspheric surfaces are analyzed and compared: the transfer function approach, the beam propagation approach, and direct ray tracing in optical design software. The transfer function approach is the fastest and provides accurate results when the peak-to-valley of the mid-spatial-frequency phase modulation induced at the lens exit pupil is below λ/10. The aspheric surfaces are shown to contribute differently to the bokeh structure, increasing the complexity of the bokeh image, especially for off-axis bokeh.

  2. Preliminary results from a single-photon imaging X-ray charge coupled device /CCD/ camera

    NASA Technical Reports Server (NTRS)

    Griffiths, R. E.; Polucci, G.; Mak, A.; Murray, S. S.; Schwartz, D. A.; Zombeck, M. V.

    1981-01-01

    A CCD camera is described which has been designed for single-photon X-ray imaging in the 1-10 keV energy range. Preliminary results are presented from the front-side illuminated Fairchild CCD 211, which has been shown to image well at 3 keV. The problem of charge-spreading above 4 keV is discussed by analogy with a similar problem at infrared wavelengths. The total system noise is discussed and compared with values obtained by other CCD users.

  3. Optics & Photonics News - Ultrafast Camera Could Detect Wandering Cancer Cells

    E-print Network

    Jalali, Bahram

    Co-authors of the "imaging flow analyzer" paper (left to right): Dino Di Carlo, Daniel R. Gossett, Bahram Jalali, Coleman ... The analyzer incorporates a unique camera invented by OSA Fellow Bahram Jalali's group at UCLA three years ago

  4. Linearisation of RGB Camera Responses for Quantitative Image Analysis of Visible and UV Photography: A Comparison of Two Techniques

    PubMed Central

    Garcia, Jair E.; Dyer, Adrian G.; Greentree, Andrew D.; Spring, Gale; Wilksch, Philip A.

    2013-01-01

    Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses and characterisation of spectral sensitivity curves. Two commercially available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless, the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses. PMID:24260244

  5. Linearisation of RGB camera responses for quantitative image analysis of visible and UV photography: a comparison of two techniques.

    PubMed

    Garcia, Jair E; Dyer, Adrian G; Greentree, Andrew D; Spring, Gale; Wilksch, Philip A

    2013-01-01

    Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses and characterisation of spectral sensitivity curves. Two commercially available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless, the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses. PMID:24260244

  6. EPR-based ghost imaging using a single-photon-sensitive camera

    E-print Network

    Reuben S. Aspden; Daniel S. Tasca; Robert W. Boyd; Miles J. Padgett

    2013-08-05

    Correlated-photon imaging, popularly known as ghost imaging, is a technique whereby an image is formed from light that has never interacted with the object. In ghost imaging experiments, two correlated light fields are produced. One of these fields illuminates the object, and the other is measured by a spatially resolving detector. In the quantum regime, these correlated light fields are produced by entangled photons created by spontaneous parametric down-conversion. To date, all correlated-photon ghost-imaging experiments have scanned a single-pixel detector through the field of view to obtain the spatial information. However, scanning leads to a poor sampling efficiency, which scales inversely with the number of pixels, N, in the image. In this work we overcome this limitation by using a time-gated camera to record the single-photon events across the full scene. We obtain high-contrast (90%) images in either the image plane or the far field of the photon-pair source, taking advantage of the EPR-like correlations in position and momentum of the photon pairs. Our images contain a large number of modes, >500, creating opportunities in low-light-level imaging and in quantum information processing.

  7. Theory of bokeh image structure in camera lenses with an aspheric surface

    NASA Astrophysics Data System (ADS)

    Sivokon, Viktor P.; Thorpe, Michael D.

    2014-06-01

    We present theoretical, numerical, and experimental analysis of the cause of internal structure in out-of-focus images of point light sources seen in shots taken with camera lenses that incorporate aspheric surfaces. This "bokeh" structure is found to be due to diffraction on the phase grating at the lens exit pupil induced by small-scale undulations (ripples) of aspheric surfaces. We develop a phase-to-intensity transfer function approach which leads to a simple formula for estimating the intensity modulation ratio in the resulting bokeh based on the out-of-focus distance, amplitude, and frequency of surface undulations. Numerical simulations of bokeh image formation are carried out for a parabolic mirror imager and a double Gauss objective. We find that modulation depth in the bokeh structure calculated by light propagation based simulation agrees with theory when the modulation depth is <30%. Bokeh images are shown to be more sensitive to manufacturing artifacts of an aspheric surface than corresponding degradation in the lens modulation transfer function for a sharp focused image. We apply the transfer function approach to the calculation of the bokeh produced by a measured aspheric surface in a built camera lens and find reasonable agreement between the calculated and measured bokeh structure.

  8. Performance of CID camera x-ray imagers at NIF in a harsh neutron environment

    NASA Astrophysics Data System (ADS)

    Palmer, Nathan E.; Schneider, Marilyn B.; Bell, Perry M.; Piston, Ken W.; Moody, James D.; James, D. L.; Ness, Ron A.; Haugh, Michael J.; Lee, Joshua J.; Romano, Edward D.

    2013-09-01

    Charge-injection devices (CIDs) are solid-state 2D imaging sensors similar to CCDs, but their distinct architecture makes CIDs more resistant to ionizing radiation. CID cameras have been used extensively for X-ray imaging at the OMEGA Laser Facility with neutron fluences at the sensor approaching 10^9 n/cm^2 (DT, 14 MeV). A CID Camera X-ray Imager (CCXI) system has been designed and implemented at NIF that can be used as a rad-hard electronic-readout alternative for time-integrated X-ray imaging. This paper describes the design and implementation of the system, calibration of the sensor for X-rays in the 3-14 keV energy range, and preliminary data acquired on NIF shots over a range of neutron yields. The upper limit of neutron fluence at which CCXI can acquire usable images is ~10^8 n/cm^2, and there are noise problems that need further improvement, but the sensor has proven to be very robust in surviving high-yield shots (~10^14 DT neutrons) with minimal damage.

  9. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map at every time instant, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
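
    The shift-and-average idea behind synthetic aperture refocusing can be sketched as follows for a linear camera array: each view is shifted by its parallax at a chosen depth and the views are averaged, so objects on that depth plane reinforce while off-plane objects smear out. The toy scene and the `depth_scale` parameterization are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def sa_refocus(images, offsets_px, depth_scale):
    """Synthetic-aperture refocusing for a linear camera array: shift each
    view by its parallax at the chosen depth, then average. Objects on the
    chosen focal plane align and stay sharp; off-plane objects blur out."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, dx in zip(images, offsets_px):
        acc += np.roll(img, int(round(dx * depth_scale)), axis=1)
    return acc / len(images)

# Toy scene: a bright vertical stripe whose apparent column shifts by the
# camera offset (1 px of parallax per unit offset at this depth).
offsets = [-2, -1, 0, 1, 2]
views = []
for dx in offsets:
    img = np.zeros((8, 32))
    img[:, 16 + dx] = 1.0
    views.append(img)
in_focus = sa_refocus(views, offsets, depth_scale=-1.0)   # stripe realigned
defocused = sa_refocus(views, offsets, depth_scale=0.0)   # stripe smeared
```

    Sweeping `depth_scale` over a range of values generates the 3D focal stack described in the abstract.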

  10. Imaging and radiometric performance simulation for a new high-performance dual-band airborne reconnaissance camera

    NASA Astrophysics Data System (ADS)

    Seong, Sehyun; Yu, Jinhee; Ryu, Dongok; Hong, Jinsuk; Yoon, Jee-Yeon; Kim, Sug-Whan; Lee, Jun-Ho; Shin, Myung-Jin

    2009-05-01

    In recent years, high-performance visible and IR cameras have been used widely for tactical airborne reconnaissance. Efficient discrimination and analysis of complex target information from active battlefields require simultaneous multi-band measurement from airborne platforms at various altitudes. We report a new dual-band airborne camera designed for simultaneous registration of both visible and IR imagery from mid-altitude ranges. The camera design uses a common front-end optical telescope of around 0.3 m entrance aperture and several relay optical sub-systems capable of delivering both high-spatial-resolution visible and IR images to the detectors. The design benefits from the use of several optical channels packaged in a compact space and the associated freedom to choose between wide (~3 degrees) and narrow (~1 degree) fields of view. In order to investigate both the imaging and radiometric performance of the camera, we generated an array of target scenes with optical properties such as reflection, refraction, scattering, transmission and emission. We then combined the target scenes and the camera optical system into an integrated ray-tracing simulation environment utilizing a Monte Carlo computation technique. Taking realistic atmospheric radiative-transfer characteristics into account, both imaging and radiometric performance were then investigated. The simulation results demonstrate that the camera design satisfies the NIIRS 7 detection criterion. The camera concept and the details and results of the performance simulation are discussed together with the future development plan.

  11. 200 ps FWHM and 100 MHz repetition rate ultrafast gated camera for optical medical functional imaging

    NASA Astrophysics Data System (ADS)

    Uhring, Wilfried; Poulet, Patrick; Hanselmann, Walter; Glazenborg, René; Zint, Virginie; Nouizi, Farouk; Dubois, Benoit; Hirschi, Werner

    2012-04-01

    The paper describes the realization of a complete optical imaging device for clinical applications such as brain functional imaging by time-resolved, spectroscopic diffuse optical tomography. The entire instrument is assembled in a single setup that includes a light source, an ultrafast time-gated intensified camera, and all the electronic control units. The light source is composed of four near-infrared laser diodes driven by a nanosecond electrical pulse generator working in a sequential mode at a repetition rate of 100 MHz. The resulting light pulses, at four wavelengths, are less than 80 ps FWHM. They are injected into a four-furcated optical fiber ended with a frontal light distributor to obtain a uniform illumination spot directed towards the head of the patient. Photons back-scattered by the subject are detected by the intensified CCD camera; they are resolved according to their time of flight inside the head. The very core of the intensified camera system is the image intensifier tube and its associated electrical pulse generator. The ultrafast generator produces 50 V pulses at a repetition rate of 100 MHz with a width corresponding to the requested 200 ps gate. The photocathode and the micro-channel plate of the intensifier have been specially designed to enhance electromagnetic wave propagation and reduce the power loss and heat that are prejudicial to image quality. The whole instrumentation system is controlled by an FPGA-based module. The timing of the light pulses and the photocathode gating is precisely adjustable in steps of 9 ps. All the acquisition parameters are configurable via software through a USB plug, and the image data are transferred to a PC via an Ethernet link. The compactness of the device makes it well suited for bedside clinical applications.

  12. Image and/or Movie Analyses of 100GHz Traveling Waves on the Basis of Real-Time Observation With a Live Electrooptic Imaging Camera

    Microsoft Academic Search

    Masahiro Tsuchiya; Atsushi Kanno; Kiyotaka Sasagawa; Takahiro Shiozawa

    2009-01-01

    A 100-GHz-class live electrooptic imaging camera (LEI-camera), a novel microwave and millimeter-wave instrument, has been implemented for a scheme of W-band wave analyses. The first real-time visualization of W-band waves as phase-resolved images and phase-evolving movies has been successfully demonstrated, and the relevant images and/or movies obtained have been applied to the analyses of 100-GHz traveling waves. In conjunction with

  13. Non-contact imaging of venous compliance in humans using an RGB camera

    NASA Astrophysics Data System (ADS)

    Nakano, Kazuya; Satoh, Ryota; Hoshi, Akira; Matsuda, Ryohei; Suzuki, Hiroyuki; Nishidate, Izumi

    2015-04-01

    We propose a technique for non-contact imaging of venous compliance that uses a red, green, and blue (RGB) camera. Changes in blood concentration are estimated from an RGB image of the skin, and a regression formula is calculated from those changes. Venous compliance is obtained from a differential form of the regression formula. In vivo experiments with human subjects confirmed that the proposed method differentiates venous compliance among individuals. In addition, an image of venous compliance is obtained by performing the above procedures for each pixel. Thus, we can measure venous compliance without physical contact with sensors and, from the resulting images, observe the spatial distribution of venous compliance, which correlates with the distribution of veins.

  14. A Gaseous Compton Camera using a 2D-sensitive gaseous photomultiplier for Nuclear Medical Imaging

    NASA Astrophysics Data System (ADS)

    Azevedo, C. D. R.; Pereira, F. A.; Lopes, T.; Correia, P. M. M.; Silva, A. L. M.; Carramate, L. F. N. D.; Covita, D. S.; Veloso, J. F. C. A.

    2013-12-01

    A new Compton Camera (CC) concept based on a High Pressure Scintillation Chamber coupled to a position-sensitive Gaseous PhotoMultiplier for Nuclear Medical Imaging applications is proposed. The main goal of this work is to describe the development of a ∅25×12 cm3 cylindrical prototype, which will be suitable for scintimammography and for small-animal imaging applications. The possibility of scaling it to a useful human-size device is also under study. The idea is to develop a device capable of competing with the standard Anger Camera. Despite its large success, the Anger Camera still presents some limitations, such as low position resolution and only fair energy resolution at 140 keV. The CC offers a different solution, as it provides information about the incoming photon direction, avoiding the use of a collimator, which is responsible for a huge reduction (10^-4) of the sensitivity. The main problem of CCs is related to Doppler broadening, which is responsible for the loss of angular resolution. In this work, calculations of the Doppler broadening in Xe, Ar, Ne and their mixtures are presented. Simulations of the detector performance, together with a discussion of the choice of gas, are also included.
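
    The directional information a Compton camera recovers comes from the Compton scattering relation: the measured energies of the incident and scattered photon fix the scattering angle, so each event constrains the source to a cone. A minimal sketch of that kinematic step (the 140 keV / 120 keV numbers are purely illustrative):

```python
import math

ME_C2_KEV = 511.0   # electron rest energy, keV

def compton_angle_deg(e_in_kev, e_scattered_kev):
    """Scattering angle from the Compton relation
    cos(theta) = 1 - (me*c^2) * (1/E' - 1/E). A Compton camera uses this
    cone angle, instead of a collimator, to constrain photon direction."""
    cos_t = 1.0 - ME_C2_KEV * (1.0 / e_scattered_kev - 1.0 / e_in_kev)
    return math.degrees(math.acos(cos_t))

# Illustrative event: a 140 keV photon (Tc-99m line) retains 120 keV
# after scattering, giving a cone half-angle of about 67 degrees.
angle = compton_angle_deg(140.0, 120.0)
```

    Doppler broadening, the limitation discussed in the abstract, smears the measured energies and hence the reconstructed cone angle.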

  15. Scene reconstruction from distorted images using self-calibration of camera parameters

    NASA Astrophysics Data System (ADS)

    Kawahara, Isao; Morimura, Atsushi

    1992-03-01

    In this paper, we propose a new method for reconstructing a scene from different views taken through a camera with a high-distortion lens. Unlike other approaches, no a priori calibration or specific test patterns are required. Several pairs of correspondences between input images are used to estimate intrinsic parameters such as the focal length and distortion coefficients. From these correspondences, the relative movement of the camera between input images is computed as rotation matrices. We assume radial lens distortion, modeled with a third-order polynomial with two distortion coefficients, which covers highly distorted zoom lenses. Since we allow the two distortion coefficients and the focal length to be unknown, it is not easy to obtain these three parameters explicitly from the correspondences alone. To avoid excessive computation and the problem of local minima, we take the following steps: uniform searching in a reduced dimension; fitting a function to get a better guess of the focal length; and polishing the solution by repeating the uniform search to get the final distortion coefficients. The total number of evaluations is remarkably reduced by this multistage optimization. Some experimental results are presented, showing that lens distortion of more than 5% is reduced and the rotation of the camera is recovered, and we show a registration of four outdoor pictures.
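
    A two-coefficient radial distortion model of this kind can be applied and inverted as sketched below. The even-order form used here is a common convention and may differ from the paper's exact third-order parameterization; the inversion by fixed-point iteration is likewise one standard choice.

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply radial distortion r_d = r*(1 + k1*r^2 + k2*r^4) to
    normalized image coordinates (a common two-coefficient model)."""
    xy = np.asarray(xy, float)
    r2 = (xy**2).sum(axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(xy_d, k1, k2, iters=25):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted coordinates by the radial factor evaluated at the current
    estimate of the undistorted point."""
    xy_d = np.asarray(xy_d, float)
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = (xy**2).sum(axis=-1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2**2)
    return xy

# Round-trip check with moderate barrel distortion (k1 = -0.2, k2 = 0.05).
pts = np.array([[0.1, 0.2], [-0.4, 0.3], [0.5, -0.5]])
round_trip = undistort(distort(pts, -0.2, 0.05), -0.2, 0.05)
```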

  16. Camera for Thermal Imaging of Semiconductor Devices Based on Thermoreflectance (20th IEEE SEMI-THERM Symposium, 2004)

    E-print Network

    Using a thermoreflectance technique, this camera achieves 50 mK temperature sensitivity and sub-micron spatial resolution when imaging, well below the 5 micron sensitivity of infrared `blackbody' cameras. However, the challenge of the thermoreflectance

  17. An evaluation of a fluorescent screen-isocon camera system for x-ray imaging in radiology.

    PubMed

    Nelson, R S; Barbaric, Z L; Gomes, A S; Moler, C L; Deckard, M E

    1982-01-01

    A large field format imaging system which utilizes a flat fluorescent screen rather than the traditional image intensifier as the primary x-ray detector is discussed in this paper. A low light level image isocon camera is optically coupled to a rare-earth screen. An overview of the isocon camera and its return beam mode of operation is included. Theoretical limitations to this approach to real time radiographic imaging are discussed in terms of system efficiency and signal-to-noise ratio (SNR) requirements. The viability of this concept for radiographic imaging is considered with respect to the image intensifier-camera unit. The merits of employing a CsI:Na screen as the primary detector and an isocon tube with an improved SNR are presented as potential areas for further investigation. PMID:7155082

  18. Applications of high-resolution still video cameras to ballistic imaging

    NASA Astrophysics Data System (ADS)

    Snyder, Donald R.; Kosel, Frank M.

    1991-01-01

    The Aeroballistic Research Facility is one of several free-flight aerodynamic research facilities in the world and has gained a reputation as one of the most accurately instrumented. This facility was developed for the exterior ballistic testing of gyroscopically stabilized, fin-stabilized, or mass-stabilized projectiles. Such testing includes bullets, missiles, and subscale aircraft configurations. The primary source of data for this type of facility is the trajectory information derived from orthogonal pairs of shadowgraph film cameras. The loading, unloading, processing, digitizing, and off-line analysis of the film data are extremely costly and time consuming. The unavailability of even unreduced images for subjective evaluation often means delays of days between experiments. In this paper we describe the evaluation of an advanced still-video system as the baseline for development of an integrated real-time electronic shadowgraph to replace the film cameras for normal range operations.

  19. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation

    PubMed Central

    Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.

    2014-01-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with accuracy comparable to that of the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets, which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, the LOO process identified 6 exposures that provided comparable accuracy, defined as deviating by no more than 10% from the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956
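
    The looping leave-one-out idea can be sketched as a greedy subset search: starting from the full exposure set, repeatedly drop the exposure whose removal perturbs the flow estimate the least, stopping once any further removal would exceed the 10% deviation criterion. The sketch below uses a hypothetical stand-in estimator (a simple mean over toy per-exposure readings), not the actual MESI flow model:

```python
# Greedy leave-one-out (LOO) exposure-subset reduction, a simplified sketch of
# the selection idea described in the abstract. The `estimate` callable is a
# hypothetical stand-in; a real MESI pipeline would fit the speckle model.

def loo_reduce(exposures, estimate, tol=0.10, min_size=1):
    """Drop exposures one at a time, keeping the subset whose estimate stays
    within `tol` (relative) of the full-set estimate."""
    reference = estimate(exposures)
    subset = list(exposures)
    while len(subset) > min_size:
        best = None
        for i in range(len(subset)):
            trial = subset[:i] + subset[i + 1:]
            dev = abs(estimate(trial) - reference) / abs(reference)
            if dev <= tol and (best is None or dev < best[0]):
                best = (dev, trial)
        if best is None:          # every further removal exceeds tolerance
            break
        subset = best[1]
    return subset

# Toy usage: 15 exposure durations with invented per-exposure flow readings;
# the subset "flow estimate" is just the mean of the retained readings.
readings = {t: 100.0 + 0.5 * i for i, t in enumerate(range(1, 16))}
est = lambda subset: sum(readings[t] for t in subset) / len(subset)
reduced = loo_reduce(list(readings), est, tol=0.10)
```

    With real data the tolerance check would compare fitted flows region by region rather than a single scalar; the greedy structure is the same.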

  20. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition, we evaluated algorithm efficacy on field test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  1. Megapixel imaging camera for expanded H{sup {minus}} beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H. [Los Alamos National Lab., NM (United States); McCurnin, T.W.; Sanchez, P.G. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States). Los Alamos Operations

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm² and with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10⁵ electron charges.
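
    The double-correlated sampling step mentioned above can be illustrated in a few lines: each pixel's video waveform is sampled once at the reset (reference) level and once at the signal level, and subtracting the pair cancels the reset switching (kTC) noise common to both samples. The array size and noise figures below are illustrative, not the instrument's:

```python
# Double-correlated sampling (CDS) sketch: subtracting the reset-level sample
# from the signal-level sample removes the reset noise shared by both.
import random

def cds(reset_samples, signal_samples):
    """Per-pixel difference of the signal and reset (reference) samples."""
    return [s - r for r, s in zip(reset_samples, signal_samples)]

random.seed(0)
n = 1024
reset_noise = [random.gauss(0.0, 5.0) for _ in range(n)]   # kTC noise, shared
true_signal = [50.0] * n                                   # invented flat signal
resets  = [1000.0 + rn for rn in reset_noise]              # reference level
signals = [1000.0 + rn + s for rn, s in zip(reset_noise, true_signal)]
out = cds(resets, signals)   # reset noise cancels exactly in this model
```

    In the real camera the subtraction is done in analog hardware with delay elements before digitization; the cancellation principle is the same.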

  2. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Comparison of image quality was made at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison to that at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  3. MR-i: high-speed dual-cameras hyperspectral imaging FTS

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Roy, Claude; Vallières, Christian; Lévesque, Luc

    2011-11-01

    From scientific research to deployable operational solutions, Fourier-Transform Infrared (FT-IR) spectroradiometry is widely used for the development and enhancement of military and research applications. These applications include characterization of target IR signatures, development of advanced camouflage techniques, monitoring of aircraft engine plumes, meteorological sounding, and atmospheric composition analysis such as the detection and identification of chemical threats. Imaging FT-IR spectrometers have the capability of generating 3D images composed of multiple spectra associated with every pixel of the mapped scene. These data allow for accurate spatial characterization of a target's signature by spatially resolving the spectral characteristics of the observed scene. MR-i is the most recent addition to the MR product line series and generates spectral data cubes in the MWIR and LWIR. The instrument is designed to acquire the spectral signature of various scenes with high temporal, spatial and spectral resolution. The four-port architecture of the interferometer brings modularity and upgradeability, since the two output ports of the instrument can be populated with different combinations of detectors (imaging or not). For instance, to measure over a broad spectral range from 1.3 to 13 μm, one output port can be equipped with a LWIR camera while the other port is equipped with a MWIR camera. Both ports can also be equipped with cameras serving the same spectral range but set at different sensitivity levels, in order to increase the measurement dynamic range and avoid saturation on bright parts of the scene while simultaneously obtaining good measurements of the faintest parts. Various telescope options are available for the input port. An overview of the instrument capabilities will be presented, as well as test results and results from field trials for a configuration with two MWIR cameras. That specific system is dedicated to the characterization of airborne targets. The expanded dynamic range allowed by the two MWIR cameras makes it possible to simultaneously measure the spectral signature of the cold background and of the warmest elements of the scene (flares, jet engine exhausts, etc.).

  4. Toward Real-time quantum imaging with a single pixel camera

    SciTech Connect

    Lawrie, Benjamin J [ORNL; Pooser, Raphael C [ORNL

    2013-01-01

    We present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively transmit macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. In low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  5. Development of a multispectral multiphoton fluorescence lifetime imaging microscopy system using a streak camera

    NASA Astrophysics Data System (ADS)

    Qu, Junle; Liu, Lixin; Guo, Baoping; Lin, Ziyang; Hu, Tao; Tian, Jindong; Wang, Shuyan; Zhang, Jikang; Niu, Hanben

    2005-01-01

    We report on the development of a multispectral multiphoton fluorescence lifetime imaging microscopy (MM-FLIM) system that combines a streak camera, a prism spectrophotometer, a femtosecond Ti:Sapphire laser, and a fluorescence microscope. This system is versatile, with multispectral capability and high temporal (10 ps) and spatial (0.36 μm) resolution, and can be used to make 3-dimensional (3D, x-y-z) multiphoton fluorescence intensity, spectrally resolved intensity, and lifetime measurements with a single detector. The system was calibrated with a F-P etalon and a standard fluorescent dye, and the lifetime value obtained was in good agreement with the value reported in the literature. Preliminary results suggest that this MM-FLIM system integrates high temporal, spatial, and spectral resolution fluorescence detection in one microscopy system. Potential applications of this system include multiwell imaging, tissue discrimination, intracellular physiology, and fluorescence resonance energy transfer imaging.

  6. A new tubeless nanosecond streak camera based on optical deflection and direct CCD imaging

    SciTech Connect

    Lai, C.C.

    1992-12-01

    A new optically deflected streak camera with nanosecond-range resolution, superior imaging quality, high signal detectability, and large format recording has been conceived and developed. Its construction is composed of an optomechanical deflector that deflects the line-shape image of spatially distributed, time-varying signals across the sensing surface of a cooled scientific two-dimensional CCD array with slow-readout driving electronics, a lens assembly, and a desk-top computer for prompt digital data acquisition and processing. Its development utilizes the synergism of modern technologies in sensors, optical deflectors, optics, and microcomputers. With laser light as the signal carrier, the deflecting optics produces near diffraction-limited streak images resolving to a single pixel size of 25 μm. A 1k × 1k-pixel array can thus provide a vast record of 1,000 digital data points along each spatial or temporal axis. Since only one photon-to-electron conversion exists in the entire signal recording path, the camera responds linearly to the incident light over a wide dynamic range in excess of 10⁴:1. Various image deflection techniques are assessed for imaging fidelity, deflection speed, and capacity for external triggering. Innovative multiple-pass deflection methods for the optomechanical deflector have been conceived and developed to attain multi-fold amplification of the optical scanning speed across the CCD surface at a given angular deflector speed. Without significantly compromising imaging quality or flux throughput efficiency, these optical methods enable a sub-10 ns/pixel streak speed with the deflector moving benignly at 500 radians per second, or equivalently 80 revolutions per second. Test results of the prototype performance are summarized, including a spatial resolution of 10 lp/mm at 65% CTF and a temporal resolution of 11.4 ns at 3.8 ns/pixel.

  8. Evaluation of a multistage CdZnTe Compton camera for prompt γ imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    McCleskey, M.; Kaye, W.; Mackin, D. S.; Beddar, S.; He, Z.; Polf, J. C.

    2015-06-01

    A new detector system, Polaris J from H3D, has been evaluated for its potential application as a Compton camera (CC) imaging device for prompt γ rays (PGs) emitted during proton radiation therapy (RT), for the purpose of dose range verification. This detector system consists of four independent CdZnTe detector stages and a coincidence module, allowing the user to construct a Compton camera in different geometrical configurations and to accept both double and triple scatter events. Energy resolution for the 662 keV line from 137Cs was found to be 9.7 keV FWHM. The raw absolute efficiencies for double and triple scatter events were 2.2×10⁻⁵ and 5.8×10⁻⁷, respectively, for γ rays from a 60Co source. The position resolution for the reconstruction of a point source from the measured CC data was about 2 mm. Overall, due to the low efficiency of the Polaris J CC, the current system was deemed not viable for imaging PGs emitted during proton RT treatment delivery. However, using a validated Monte Carlo model of the CC, we found that by increasing the size of the detectors and placing them in a two-stage configuration, the efficiency could be increased to a level that makes PG imaging possible during proton RT.
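
    Compton camera reconstruction relies on the scattering kinematics that relate the energy deposited in the first stage to a cone opening angle, cos θ = 1 − mec²(1/E′ − 1/E), where E is the incident and E′ the scattered photon energy. A minimal sketch, with energies in keV and example values invented for illustration:

```python
# Compton cone angle from scattering kinematics, as used in Compton-camera
# event reconstruction: cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_in).
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_angle(e_incident, e_deposited):
    """Scattering angle (radians) for a photon of e_incident keV that
    deposits e_deposited keV in the first scatter stage."""
    e_scattered = e_incident - e_deposited
    cos_t = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.acos(cos_t)

# Example: a 662 keV photon depositing 200 keV (illustrative values).
theta = compton_angle(662.0, 200.0)
```

    In a real CC pipeline this angle, together with the two interaction positions, defines the cone surface backprojected into image space for each event.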

  9. Near InfraRed IMager: A NICMOS3 near-IR camera

    NASA Astrophysics Data System (ADS)

    Meixner, M.; Owl, R. Young; Leach, R.

    1996-12-01

    We describe the characteristics and design of the Near-InfraRed IMager (NIRIM), a new near-IR camera with a spectral coverage of 0.8-2.5 μm. NIRIM serves the dual purpose of wide field imaging on the Mt. Laguna 1 m telescope and high angular resolution imaging as a science camera on a laser-guided adaptive optics system on the Mt. Wilson 100 inch telescope. NIRIM houses a 256x256 HgCdTe array (a.k.a. NICMOS3 array), reimaging optics to decrease background radiation, and 14 filters, all cryogenically cooled with liquid nitrogen to ~77 K. On the Mt. Laguna 1 m telescope, where it saw first light in July 1995, NIRIM has three plate scale options: 0.5″, 1″, and 2″ per pixel, with corresponding fields of view of 2.1′, 4.3′, and 8.5′. The University of Illinois Seeing Improvement System (UnISIS) is the laser-guided adaptive optics system being built for the Mt. Wilson 100 inch telescope. We expect NIRIM to see first light on UnISIS in summer 1997 when UnISIS is operational. Characterizations of NIRIM on the Mt. Laguna 1 m telescope are presented.
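
    As a quick consistency check, the quoted fields of view follow directly from the plate scales and the 256-pixel array width (FOV in arcmin = pixels × arcsec per pixel ÷ 60):

```python
# Field-of-view arithmetic for a 256x256 array at each quoted plate scale.
def fov_arcmin(n_pixels, scale_arcsec):
    """Linear field of view in arcminutes for n_pixels at scale_arcsec/pixel."""
    return n_pixels * scale_arcsec / 60.0

# Reproduces the 2.1', 4.3', 8.5' figures quoted in the abstract.
fovs = [round(fov_arcmin(256, s), 1) for s in (0.5, 1.0, 2.0)]
```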

  10. Mars Orbiter Camera High Resolution Images: Some Results From The First 6 Weeks In Orbit

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images acquired shortly after orbit insertion were relatively poor in both resolution and image quality. This poor performance was solely the result of low sunlight conditions and the relative distance to the planet, both of which have been progressively improving over the past six weeks. Some of the better images are used here (see PIA01021 through PIA01029) to illustrate how the MOC images provide substantially better views of the martian surface than have ever been recorded previously from orbit.

    This U.S. Geological Survey shaded relief map provides an overall context for the MGS MOC images of the Tithonium/Ius Chasma, Ganges Chasma, and Schiaparelli Crater. Closeup images of the Tithonium/Ius Chasma area are visible in PIA01021 through PIA01023. Closeups of Ganges Chasma are available as PIA01027 through PIA01029, and Schiaparelli Crater is shown in PIA01024 through PIA01026. The Mars Pathfinder landing site is shown to the north of the sites of the MGS images.

    Launched on November 7, 1996, Mars Global Surveyor entered Mars orbit on Thursday, September 11, 1997. The original mission plan called for using friction with the planet's atmosphere to reduce the orbital energy, leading to a two-year mapping mission from a close, circular orbit (beginning in March 1998). Owing to difficulties with one of the two solar panels, aerobraking was suspended in mid-October and resumed on November 8. Many of the original objectives of the mission, and in particular those of the camera, are likely to be accomplished as the mission progresses.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  11. Evaluation of a large format image tube camera for the shuttle sortie mission

    NASA Technical Reports Server (NTRS)

    Tifft, W. C.

    1976-01-01

    A large format image tube camera of a type under consideration for use on the Space Shuttle Sortie Missions is evaluated. The evaluation covers the following subjects: (1) resolving power of the system; (2) geometrical characteristics of the system (distortion, etc.); (3) shear characteristics of the fiber optic coupling; (4) background effects in the tube; (5) uniformity of response of the tube (as a function of wavelength); (6) detective quantum efficiency of the system; and (7) astronomical applications of the system. It must be noted that many of these characteristics are quantitatively unique to the particular tube under discussion and serve primarily to suggest what is possible with this type of tube.

  12. Digital imaging microscopy: the marriage of spectroscopy and the solid state CCD camera

    NASA Astrophysics Data System (ADS)

    Jovin, Thomas M.; Arndt-Jovin, Donna J.

    1991-12-01

    Biological samples have been imaged using microscopes equipped with slow-scan CCD cameras. Examples are presented of studies based on the detection of light emission signals in the form of fluorescence and phosphorescence. They include applications in the field of cell biology: (a) replication and topology of mammalian cell nuclei; (b) cytogenetic analysis of human metaphase chromosomes; and (c) time-resolved measurements of DNA-binding dyes in cells and on isolated chromosomes, as well as of mammalian cell surface antigens, using the phosphorescence of acridine orange and fluorescence resonance energy transfer of labeled lectins, respectively.

  13. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.

  14. Plume Imaging Using an IR Camera to Estimate Sulphur Dioxide Flux on Volcanoes of Northern Chile

    NASA Astrophysics Data System (ADS)

    Rosas Sotomayor, F.; Amigo, A.

    2014-12-01

    Remote sensing is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult or during volcanic crises. In recent years, a ground-based infrared camera (NicAir) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. NicAir cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. This contribution focuses on a series of measurements done in December 2013 on volcanoes of northern Chile, in particular Láscar, Irruputuncu and Ollagüe, which are characterized by persistent quiescent degassing. During fieldwork, plumes from all three volcanoes showed regular behavior and the atmospheric conditions were very favorable (cloud-free and dry air). Four, two and one sets of measurements, of up to 100 images each, were taken for Láscar, Irruputuncu and Ollagüe volcano, respectively. Matlab software was used for image visualization and processing of the raw data. For instance, data visualization is performed through the Matlab IPT functions imshow() and imcontrast(), and one algorithm was created for extracting the necessary metadata. Image processing considers radiation at the 8.6 and 10 μm wavelengths, due to differential SO2 and water vapor absorption. Calibration was performed in the laboratory through a detector correlation between digital numbers (raw data in image pixel values) and spectral radiance, and also in the field, considering the camera's self-emission of infrared radiation. A gradient between the plume core and plume rim is expected, due to the quick reaction of sulphur dioxide with water vapor; therefore a flux underestimate is also expected. Results will be compared with other SO2 remote sensing instruments such as DOAS and UV cameras. The implementation of this novel technique on Chilean volcanoes will be a major advance in our understanding of volcanic emissions and a strong complement to gas monitoring in active volcanoes such as Láscar, Villarrica, Lastarria and Cordón Caulle, among others, and in rough volcanic terrains, owing to its portability, easy operation, and fast data acquisition and processing.
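
    The laboratory calibration step (correlating digital numbers with spectral radiance) and the subsequent two-band differencing can be sketched as follows; the fit coefficients and pixel values are invented for illustration and do not come from the NicAir instrument:

```python
# Sketch of a DN-to-radiance calibration per filter band, followed by a
# per-pixel difference between the 8.6 um and 10 um bands, which carries the
# differential SO2 / water-vapor absorption signal. All numbers are made up.

def fit_linear(dns, radiances):
    """Least-squares gain and offset from lab pairs of (DN, radiance)."""
    n = len(dns)
    mx = sum(dns) / n
    my = sum(radiances) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(dns, radiances)) / \
           sum((x - mx) ** 2 for x in dns)
    return gain, my - gain * mx

def dn_to_radiance(dn, gain, offset):
    return gain * dn + offset

# Hypothetical lab calibration pairs for each band:
g86, o86 = fit_linear([100, 200, 300], [1.0, 2.0, 3.0])   # 8.6 um band
g10, o10 = fit_linear([100, 200, 300], [1.5, 3.0, 4.5])   # 10 um band

# One plume pixel, raw DN in each band (hypothetical):
pixel_dn_86, pixel_dn_10 = 250, 240
delta = dn_to_radiance(pixel_dn_10, g10, o10) - dn_to_radiance(pixel_dn_86, g86, o86)
```

    A field calibration would additionally subtract the camera's own self-emission before applying the lab-derived gain and offset.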

  15. New Mars Camera's First Image of Mars from Mapping Orbit (Full Frame)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The high resolution camera on NASA's Mars Reconnaissance Orbiter captured its first image of Mars in the mapping orbit, demonstrating the full resolution capability, on Sept. 29, 2006. The High Resolution Imaging Science Experiment (HiRISE) acquired this first image at 8:16 AM (Pacific Time). With the spacecraft at an altitude of 280 kilometers (174 miles), the image scale is 25 centimeters per pixel (10 inches per pixel). If a person were located on this part of Mars, he or she would just barely be visible in this image.

    The image covers a small portion of the floor of Ius Chasma, one branch of the giant Valles Marineris system of canyons. The image illustrates a variety of processes that have shaped the Martian surface. There are bedrock exposures of layered materials, which could be sedimentary rocks deposited in water or from the air. Some of the bedrock has been faulted and folded, perhaps the result of large-scale forces in the crust or from a giant landslide. The image resolves rocks as small as 90 centimeters (3 feet) in diameter. It includes many dunes or ridges of windblown sand.

    This image (TRA_000823_1720) was taken by the High Resolution Imaging Science Experiment camera onboard the Mars Reconnaissance Orbiter spacecraft on Sept. 29, 2006. Shown here is the full image, centered at minus 7.8 degrees latitude, 279.5 degrees east longitude. The image is oriented such that north is to the top. The range to the target site was 297 kilometers (185.6 miles). At this distance the image scale is 25 centimeters (10 inches) per pixel (with one-by-one binning) so objects about 75 centimeters (30 inches) across are resolved. The image was taken at a local Mars time of 3:30 PM and the scene is illuminated from the west with a solar incidence angle of 59.7 degrees, thus the sun was about 30.3 degrees above the horizon. The season on Mars is northern winter, southern summer.

    [Photojournal note: Due to the large sizes of the high-resolution TIFF and JPEG files, some systems may experience extremely slow downlink time while viewing or downloading these images; some systems may be incapable of handling the download entirely.]

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The HiRISE camera was built by Ball Aerospace & Technologies Corporation, Boulder, Colo., and is operated by the University of Arizona, Tucson.

  16. Lunar Reconnaissance Orbiter Camera Narrow Angle Cameras: Laboratory and Initial Flight Calibration

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Denevi, B. W.; Lawrence, S.; Mahanti, P.; Tran, T. N.; Thomas, P. C.; Eliason, E.; Robinson, M. S.

    2009-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) has two identical Narrow Angle Cameras (NACs). Each NAC is a monochrome pushbroom scanner, providing images with a pixel scale of 50 cm from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of scientific and resource merit, trafficability, and hazards. The North and South poles will be mapped at 1-meter scale poleward of 85.5 degrees latitude. Stereo coverage is achieved by pointing the NACs off-nadir, which requires planning in advance. Read noise is 91 and 93 e- and the full well capacity is 334,000 and 352,000 e- for NAC-L and NAC-R, respectively. Signal-to-noise ranges from 42 for low-reflectance material with 70 degree illumination to 230 for high-reflectance material with 0 degree illumination. Longer exposure times and 2x binning are available to further increase signal-to-noise at the cost of spatial resolution. Lossy data compression from 12 bits to 8 bits uses a companding table selected from a set optimized for different signal levels. A model of focal plane temperatures based on flight data is used to command dark levels for individual images, optimizing the performance of the companding tables and providing good matching of the NAC-L and NAC-R images even before calibration. The preliminary NAC calibration pipeline includes a correction for nonlinearity at low signal levels, with an offset applied for DN>600 and a logistic function for DN<600. Flight images taken on the limb of the Moon provide a measure of stray light performance. Averages over many lines of images provide a measure of flat field performance in flight. These are comparable with laboratory data taken with a diffusely reflecting uniform panel.
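
    The 12-to-8-bit companding step can be illustrated with a square-root encoding, a common choice because it allocates output codes roughly in proportion to photon shot noise; the actual LROC tables are selected per signal level and are not reproduced here:

```python
# Square-root companding sketch for 12-bit -> 8-bit compression. This is an
# illustrative scheme only, not one of the LROC-optimized tables.
import math

def build_table():
    """Map each 12-bit DN (0..4095) to an 8-bit code via sqrt companding."""
    return [round(255 * math.sqrt(dn / 4095.0)) for dn in range(4096)]

def expand(code):
    """Approximate inverse applied on the ground during decompression."""
    return round(4095 * (code / 255.0) ** 2)

table = build_table()
code = table[600]            # an arbitrary mid-range DN, for illustration
recovered = expand(code)     # close to, but not exactly, the original DN
```

    The round trip is lossy: low DNs keep fine quantization (where shot noise is small) while high DNs share coarser steps, which is the point of companding before downlink.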

  17. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond to ten seconds, but is nominally one second in duration. The system also provides a mapping of the geometric distortions, by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
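
    The flat-field correction described above reduces, per pixel, to dividing the (geometrically corrected) signal record by the normalized flat-field record. A toy sketch, with 2x2 frames standing in for full streak records:

```python
# Flat-field correction sketch: divide the signal frame by the flat-field
# frame normalized to its mean response, removing pixel-to-pixel sensitivity
# variations. Frame contents are invented for illustration.

def flat_field_correct(signal, flat):
    """Return signal divided by the mean-normalized flat field, per pixel."""
    vals = [v for row in flat for v in row]
    mean = sum(vals) / len(vals)
    return [[s / (f / mean) for s, f in zip(srow, frow)]
            for srow, frow in zip(signal, flat)]

flat   = [[0.9, 1.1], [1.0, 1.0]]         # relative sensitivity map
signal = [[90.0, 110.0], [100.0, 100.0]]  # a uniform scene seen through it
corrected = flat_field_correct(signal, flat)  # flattens back to ~100 everywhere
```

    In the described system both frames would first be resampled through the geometric-distortion map before this division is applied.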

  18. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  19. Real-time image processing and fusion for a new high-speed dual-band infrared camera

    NASA Astrophysics Data System (ADS)

    Müller, Markus; Schreer, Oliver; López Sáenz, Monica

    2007-04-01

A dual-band infrared camera system based on a dual-band quantum well infrared photodetector (QWIP) has been developed for acquiring images in both the mid-wavelength (MWIR) and long-wavelength (LWIR) infrared spectral bands. The system delivers exactly pixel-registered, simultaneously acquired images. Appropriate signal and image processing makes it possible to exploit differences in the characteristics of those bands, so the camera reveals more information than a single-band camera. It helps distinguish between targets and decoys and can defeat many IR countermeasures such as smoke, camouflage, and flares. Furthermore, the system makes it possible to identify materials (e.g. glass, asphalt, slate, etc.), to distinguish sun reflections from hot objects, and to visualize hot exhaust gases. Dedicated software for real-time processing and exploitation extends the application domain of the camera system. One component corrects the images and allows for overlays with complementary colors such that differences become apparent. Another software component provides a robust estimation of the transformation parameters of consecutive images in the image stream for registration purposes. This feature stabilizes the images even under rugged conditions and allows the image stream to be stitched automatically into large mosaic images. Mosaic images facilitate the inspection of large objects and scenarios and give human observers a better overview. In addition, image-based MTI (moving target indication), including the case of a moving camera, is under development. This component aims at surveillance applications and could also be used for camouflage assessment of moving targets.

  20. Final Manuscript for ICOM, The Hague, September 2005 Spectral imaging using a commercial color-filter array digital camera

    E-print Network

    Berns, Roy

Research is underway at Rochester Institute of Technology to develop an image acquisition system for spectral imaging using a commercial color-filter array digital camera. Roy S. Berns, Lawrence A. Taplin, Mahdi Nezamabadi, Mahnaz Mohammadi, Yonghui Zhao; Munsell Color Science Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology.

  1. These days most phones and tablets have cameras with resolutions measured in megapixels. This means the images they

    E-print Network

    Lucarini, Valerio

These days most phones and tablets have cameras with resolutions measured in megapixels. This means the images they produce are made out of millions of separate squares that form the picture. By shining a number of patterns and recording the level of returned light that is reflected to the sensor, an image can be reconstructed.

  2. Illuminance of the image in the ultraspeed cameras SFR, ZhLV-2 and ZhFR-3

    Microsoft Academic Search

    A. S. Dubovik; Iu. A. Zatsepin; A. O. Daragan; N. M. Sitsinskaia

    1976-01-01

    On the basis of the structural characteristics of their optical schemes, formulas are derived for the illuminance of the image in the streak camera (SFR), the high-speed slave-type camera ZhLV-2, and the slave photographic detector ZhFR-3, in both photographic detector and high-speed slow-motion variants. The SFR contains two lenses, the first of which is the Industar-51 lens with a focal

  3. Experimental Results of the Gamma-Ray Imaging Capability With a Si\\/CdTe Semiconductor Compton Camera

    Microsoft Academic Search

    Shin'ichiro Takeda; Hiroyuki Aono; Sho Okuyama; Shin-Nosuke Ishikawa; Hirokazu Odaka; Shin Watanabe; Motohide Kokubun; Tadayuki Takahashi; Kazuhiro Nakazawa; Hiroyasu Tajima; Naoki Kawachi

    2009-01-01

    A semiconductor Compton camera that combines silicon (Si) and cadmium telluride (CdTe) detectors was developed, and its imaging capability was examined with various kinds of gamma-ray targets such as a point source, arranged point sources and an extended source. The camera consists of one double-sided Si strip detector and four layers of CdTe pad detectors, and was designed to minimize

  4. Characterization of gravity waves at Venus cloud top from the Venus Monitoring Camera images

    NASA Astrophysics Data System (ADS)

    Piccialli, A.; Titov, D.; Svedhem, H.; Markiewicz, W. J.

    2012-04-01

Since 2006 the European mission Venus Express (VEx) has been studying the Venus atmosphere with a focus on atmospheric dynamics and circulation. Recently, several experiments on board Venus Express have detected waves in the Venus atmosphere, both as oscillations in the temperature and wind fields and as patterns on the cloud layer. Waves could play an important role in maintaining the atmospheric circulation of Venus, since they can transport energy and momentum. High-resolution images of Venus' Northern hemisphere obtained with the Venus Monitoring Camera (VMC/VEx) show distinct wave patterns at the cloud tops (~70 km altitude) interpreted as gravity waves. The Venus Monitoring Camera (VMC) is a CCD-based camera specifically designed to take images of Venus in four narrow-band filters in the UV (365 nm), visible (513 nm), and near-IR (965 and 1000 nm). A systematic visual search for waves in VMC images was performed; more than 1700 orbits were analyzed and wave patterns were observed in about 200 images. With the aim of characterizing the wave types and their possible origin, we retrieved wave properties such as location (latitude and longitude), local time, solar zenith angle, packet length and width, and orientation. A wavelet analysis was also applied to determine the wavelength and the region of dominance of each wave. Four types of waves were identified in VMC images: long, medium, short, and irregular. The long-type waves are characterized by long, narrow, straight features extending more than a few hundred kilometers, with wavelengths in the range of 7 to 48 km. Medium-type waves have irregular wavefronts extending more than 100 km, with wavelengths in the range of 8-21 km. Short wave packets have widths of several tens of kilometers, extend a few hundred kilometers, and are characterized by small wavelengths (3-16 km). Short wave trains are often observed at the edges of long features and seem connected to them. Irregular wave fields extend beyond the field of view of VMC and appear to be the result of wave breaking or wave interference. The waves are often identified in all channels and are mostly found at high latitudes (60-80°N) in the Northern hemisphere. They seem to be concentrated above Ishtar Terra, a continental-size highland that includes the highest mountain belts of the planet, suggesting a possible orographic origin of the waves. At the moment, however, it is not possible to rule out an observational bias due to the spacecraft orbit, which prevents waves from being seen at lower latitudes (because of lower resolution) and on the night side of the planet.

  5. A two camera video imaging system with application to parafoil angle of attack measurements

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Bennett, Mark S.

    1991-01-01

This paper describes the development of a two-camera video imaging system for the determination of three-dimensional spatial coordinates from stereo images. This system successfully measured angle of attack at several span-wise locations for large-scale parafoils tested in the NASA Ames 80- by 120-Foot Wind Tunnel. Measurement uncertainty for angle of attack was less than 0.6 deg. The stereo ranging system was the primary source for angle of attack measurements since inclinometers sewn into the fabric ribs of the parafoils had unknown angle offsets acquired during installation. This paper includes discussions of the basic theory and operation of the stereo ranging system, system measurement uncertainty, experimental set-up, calibration results, and test results. Planned improvements and enhancements to the system are also discussed.
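The core stereo-ranging relation behind such a two-camera system, shown here for the simplified parallel-axis case (the paper's full photogrammetric solution handles converging cameras and complete 3D coordinates), is depth = focal length × baseline / disparity. The numbers below are hypothetical:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Textbook parallel-axis stereo relation Z = f * B / d: a point seen
    at a pixel disparity d between two cameras separated by baseline B
    lies at range Z. A sketch of the ranging principle only."""
    return focal_px * baseline_m / disparity_px

# hypothetical numbers: 1000-px focal length, 0.5 m baseline, 10-px disparity
print(depth_from_disparity(1000.0, 0.5, 10.0))  # -> 50.0 (meters)
```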

  6. Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera

    NASA Astrophysics Data System (ADS)

    Peric, Dragana; Lukic, Vojislav; Spanovic, Milana; Sekulic, Radmila; Kocic, Jelena

    2014-10-01

A calibration platform for the geometric calibration of a multi-sensor image fusion system is presented in this paper. An accurate geometric calibration of the cameras' extrinsic parameters, based on a planar calibration pattern, is applied. Specific software was developed for the calibration procedure. The patterns used in the geometric calibration are prepared with the aim of obtaining maximum contrast in both the visible and infrared spectral ranges, using chessboards whose fields are made of materials with different emissivities. Experiments were performed in both indoor and outdoor scenarios. The key results of the geometric calibration of the multi-sensor image fusion system are the extrinsic parameters, in the form of homography matrices that transform the object plane to the image plane. For each camera a corresponding homography matrix is calculated. These matrices can be used for registration of images from the thermal and low-light cameras. We implemented such an image registration algorithm to confirm the accuracy of the geometric calibration procedure in the multi-sensor image fusion system. Results are given for the selected patterns - chessboards with fields made of different-emissivity materials. For the final image registration algorithm in the surveillance system for object tracking, we chose a multi-resolution image registration algorithm, which combines naturally with a pyramidal fusion scheme. The image pyramids generated at each time step of the image registration algorithm may be reused at the fusion stage, so that the overall number of calculations that must be performed is greatly reduced.
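As a sketch of how such a homography matrix maps one camera's pixel coordinates onto the other's: the 3x3 matrix would come from the chessboard-based calibration, while the implementation below is a generic illustration, not the authors' software:

```python
import numpy as np

def warp_points(H, pts):
    """Map N x 2 pixel coordinates through a 3x3 homography H.

    Points are lifted to homogeneous coordinates, multiplied by H, and
    projected back by dividing out the third coordinate."""
    pts = np.asarray(pts, dtype=np.float64)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # back to Cartesian
```

For a pure-translation homography the mapping reduces to a pixel shift, which makes the function easy to sanity-check.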

  7. The use of vision-based image quality metrics to predict low-light performance of camera phones

    NASA Astrophysics Data System (ADS)

    Hultgren, B.; Hertel, D.

    2010-01-01

    Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.

  8. Two Years of Digital Terrain Model Production Using the Lunar Reconnaissance Orbiter Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Burns, K.; Robinson, M. S.; Speyerer, E.; LROC Science Team

    2011-12-01

One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to gather stereo observations with the Narrow Angle Camera (NAC). These stereo observations are used to generate digital terrain models (DTMs). The NAC has a pixel scale of 0.5 to 2.0 meters but was not designed for stereo observations and thus requires the spacecraft to roll off-nadir to acquire these images. Slews interfere with the data collection of the other instruments, so opportunities are currently limited to four per day. Arizona State University has produced DTMs from 95 stereo pairs for 11 Constellation Project (CxP) sites (Aristarchus, Copernicus crater, Gruithuisen domes, Hortensius domes, Ina D-caldera, Lichtenberg crater, Mare Ingenii, Marius hills, Reiner Gamma, South Pole-Aitken Rim, Sulpicius Gallus) as well as 30 other regions of scientific interest (including: Bhabha crater, highest and lowest elevation points, Highland Ponds, Kugler Anuchin, Linne Crater, Planck Crater, Slipher crater, Sears Crater, Mandel'shtam Crater, Virtanen Graben, Compton/Belkovich, Rumker Domes, King Crater, Luna 16/20/23/24 landing sites, Ranger 6 landing site, Wiener F Crater, Apollo 11/14/15/17, fresh craters, impact melt flows, Larmor Q crater, Mare Tranquillitatis pit, Hansteen Alpha, Moore F Crater, and Lassell Massif). To generate DTMs, the USGS ISIS software and SOCET SET® from BAE Systems are used. To increase the absolute accuracy of the DTMs, data obtained from the Lunar Orbiter Laser Altimeter (LOLA) are used to coregister the NAC images and define the geodetic reference frame. NAC DTMs have been used in examination of several sites, e.g. Compton-Belkovich, Marius Hills and Ina D-caldera [1-3]. LROC will continue to acquire high-resolution stereo images throughout the science phase of the mission and any extended mission opportunities, thus providing a vital dataset for scientific research as well as future human and robotic exploration. [1] B.L. Jolliff (2011) Nature Geoscience, in press. [2] Lawrence et al. (2011) LPSC XLII, Abst 2228. [3] Garry et al. (2011) LPSC XLII, Abst 2605.

  9. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw (Morgantown, VA); Umeno, Marc M. (Woodinville, WA)

    2011-09-13

A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding-pixel to increase the signal for any cardiac region viewed in the two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies, blood-pool studies including planar, gated, and non-gated EKG studies, planar EKG perfusion studies, and planar hot-spot imaging.
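A minimal sketch of pixel-by-pixel fusion of two co-registered opposed views for one time bin. The patent's exact normalization is not specified in this record, so mean normalization is an assumed placeholder:

```python
import numpy as np

def fuse_opposed_views(img_a, img_b):
    """Fuse two co-registered opposed-head images for the same time bin:
    each head is normalized to its mean count level (an illustrative
    choice), then the normalized counts are summed pixel by pixel."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    return a / a.mean() + b / b.mean()
```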

  10. Vehicle's Orientation Measurement Method by Single-Camera Image Using Known-Shaped Planar Object

    Microsoft Academic Search

    Nozomu Araki; Takao Sato; Yasuo Konishi; Hiroyuki Ishigaki

    2009-01-01

    This paper describes the development of a vehicle's orientation measurement system using a single camera. We applied a single camera calibration method using a planar pattern to measure the orientation from the camera to a planar object. With this technique, the orientation from camera to planar object can be obtained when using feature coordinates of more than 4 points on

  11. Localizing the Imaged-Object Position by a Stationary Position-Sensitive Scintillation Camera Using Tilted-Collimator Technique

    Microsoft Academic Search

    Nikolay M. Uzunov; Michele Bello; Pasquale Boccaccio; Giuliano Moschini; Davide Camporese; Dante Bollini; Giuseppe Baldazzi

    2006-01-01

    A method to measure the detector-to-object distance from the images obtained with a stationary high-spatial-resolution gamma-ray camera for in-vivo small-object studies has been developed. It exploits the shift of the imaged object in the image plane, obtained for tilted positions of a parallel-hole collimator. In this way three-dimensional information about the object position can be obtained without moving either the

  12. Imaging early demineralization on tooth occlusional surfaces with a high definition InGaAs camera

    NASA Astrophysics Data System (ADS)

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

In vivo and in vitro studies have shown that high contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with those in the visible, the near-IR at 1300-nm, and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1- and 2-day exposures to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at three wavelengths (1310, 1460, and 1600-nm) using a high definition InGaAs camera. Visible light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm, were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than it was for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions.

  13. Imaging Early Demineralization on Tooth Occlusal Surfaces with a High Definition InGaAs Camera

    PubMed Central

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

    2013-01-01

In vivo and in vitro studies have shown that high contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with those in the visible, the near-IR at 1300-nm, and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1- and 2-day exposures to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at three wavelengths (1310, 1460, and 1600-nm) using a high definition InGaAs camera. Visible light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm, were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than it was for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions. PMID:24357911

  14. A six-camera digital video imaging system sensitive to visible, red edge, near-infrared, and mid-infrared wavelengths

    Microsoft Academic Search

    R. S. Fletcher; J. H. Everitt

    2007-01-01

    This paper describes a six-camera multispectral digital video imaging system designed for natural resource assessment and shows its potential as a research tool. It has five visible to near-infrared light sensitive cameras, one near-infrared to mid-infrared light sensitive camera, a monitor, a computer with a multichannel digitizing board, a keyboard, a power distributor, an amplifier, and a mouse. Each camera

  15. A Novel Method of Object Detection from a Moving Camera Based on Image Matching and Frame Coupling

    PubMed Central

    Chen, Yong; Zhang, Rong hua; Shang, Lei

    2014-01-01

    A new method based on image matching and frame coupling to handle the problems of object detection caused by a moving camera and object motion is presented in this paper. First, feature points are extracted from each frame. Then, motion parameters can be obtained. Sub-images are extracted from the corresponding frame via these motion parameters. Furthermore, a novel searching method for potential orientations improves efficiency and accuracy. Finally, a method based on frame coupling is adopted, which improves the accuracy of object detection. The results demonstrate the effectiveness and feasibility of our proposed method for a moving object with changing posture and with a moving camera. PMID:25354301

  16. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a stray light feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. 
However, as this feature is well understood it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.

  17. 3D displacement measurement with a single camera based on digital image correlation technique

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Dong, E.-liang; He, Xiaoyuan

    2007-12-01

In this paper, a simple speckle technique utilizing a charge-coupled device (CCD) camera and combined frequency- and spatial-domain correlation is developed to measure the 3D rigid-body displacement of an object. Rigid-body displacement in an arbitrary direction in space consists of both in-plane and out-of-plane components. On the basis of the pinhole perspective camera model, the in-plane component of displacement results in a shift of the displacement vector center, while the constant slope of the displacement vector is related to the out-of-plane component. The proposed method thus facilitates the separation of in-plane and out-of-plane displacements. Based on the linear distribution features of the displacement vector, digital image correlation (DIC) in global coordinates is employed to locate the displaced position of each point on the object, which has been validated to be more accurate than conventional means. Correlation in the frequency domain is combined with the spatial-domain technique to improve the speed and automation of the initial searching process. Simulation and experimental results demonstrate that both in-plane and out-of-plane displacements can be accurately measured with the proposed method.
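The frequency-domain step that seeds a spatial-domain DIC search is typically phase correlation, which recovers the integer-pixel in-plane shift between two speckle images. This is a generic sketch of the standard technique, not the authors' exact implementation:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Return the circular shift (dy, dx) such that np.roll(img_b, (dy, dx))
    aligns with img_a. The cross-power spectrum is whitened so that only
    the phase (i.e. the translation) contributes, giving a sharp peak."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                          # wrap into signed range
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```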

  18. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G. [Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly parallel to the image plane. This effect decreases the sum of the image, thereby also affecting the mean, standard deviation, and SNR of the image. All back-projected events associated with a simulated point source intersected the voxel containing the source and the FWHM of the back-projected image was similar to that obtained from the marching method. Conclusions: The slight deficit to image quality observed with the threshold-based back-projection algorithm described here is outweighed by the 75% reduction in computation time. The implementation of this method requires the development of an optimum threshold function, which determines the overall accuracy of the method. This makes the algorithm well-suited to applications involving the reconstruction of many large images, where the time invested in threshold development is offset by the decreased image reconstruction time. Implemented in a parallel-computing environment, the threshold-based algorithm has the potential to provide real-time dose verification for radiation therapy.
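The per-slice step described above can be sketched as follows: for every voxel in one image plane, a value proportional to its distance from the detected Compton cone is computed and thresholded into a binary intersection curve. The cone geometry and threshold here are illustrative, not the authors' optimized threshold function:

```python
import numpy as np

def backproject_cone_slice(apex, axis, half_angle, z, grid_xy, threshold):
    """One slice of a threshold-based cone back-projection.

    apex: cone apex (scatter position), axis: unit cone axis,
    half_angle: Compton opening angle (rad), z: height of the image plane,
    grid_xy: (X, Y) voxel coordinate arrays, threshold: distance cutoff.
    Returns a boolean mask of voxels near the cone-plane intersection."""
    X, Y = grid_xy
    P = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z - apex[2])],
                 axis=-1)
    r = np.linalg.norm(P, axis=-1) + 1e-12
    cos_angle = (P @ axis) / r               # angle between voxel ray and axis
    # angular distance from the cone surface, scaled by range -> ~distance
    dist = np.abs(np.arccos(np.clip(cos_angle, -1.0, 1.0)) - half_angle) * r
    return dist < threshold
```

Repeating this over all slices and all detector events, then summing the binary masks, yields the three-dimensional back-projection image the abstract describes.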

  19. Mars Orbiter Camera Acquires High Resolution Stereoscopic Images of the Viking One Landing Site

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Two MOC images of the vicinity of the Viking Lander 1 (MOC 23503 and 25403), acquired separately on 12 April 1998 at 08:32 PDT and 21 April 1998 at 13:54 PDT (respectively), are combined here in a stereoscopic anaglyph. The more recent, slightly better quality image is in the red channel, while the earlier image is shown in the blue and green channels. Only the overlap portion of the images is included in the composite.

Image 23503 was taken at a viewing angle of 31.6° from vertical; 25403 was taken at an angle of 22.4°, for a difference of 9.4°. Although this is not as large a difference as is typically used in stereo mapping, it is sufficient to provide some indication of relief, at least in locations of high relief.

    The image shows the raised rims and deep interiors of the larger impact craters in the area (the largest crater is about 650 m/2100 feet across). It shows that the relief on the ridges is very subtle, and that, in general, the Viking landing site is very flat. This result is, of course, expected: the VL-1 site was chosen specifically because it was likely to have low to very low slopes that represented potential hazards to the spacecraft.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  20. Use of the Digirad 2020tc Imager, a multi-crystal scintillation camera with solid-state detectors, in one case for the imaging of autografts of parathyroid glands.

    PubMed

    Fukumitsu, N; Tsuchida, D; Ogi, S; Uchiyama, M; Mori, Y; Ooshita, T; Narrita, H; Yamamoto, H; Takeyama, H

    2001-12-01

    99mTc-methoxy-isobutyl-isonitrile (99mTc-MIBI) scintigraphy with the Digirad 2020tc Imager (2020tc), a multi-crystal scintillation camera with solid-state detectors, was performed in patients with secondary hyperparathyroidism having autografts of parathyroid glands in the right arm. With the 2020tc camera, three abnormal accumulations were found in the right arm. The images obtained with this camera were superior in resolution to those obtained with a conventional NaI crystal gamma camera (ZLC7500, Siemens, Germany). The next day, resection of the autografts of the parathyroid glands was performed. Four parathyroid glands were resected, and all were hyperplastic on pathological examination. PMID:11831402

  1. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo [Dipartimento di Scienze Fisiche, Universita di Napoli Federico II, I-80126 Napoli (Italy) and Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, I-80126 Napoli (Italy); Medicina Nucleare, Istituto Nazionale per lo Studio e la Cura dei Tumori, Fondazione G. Pascale, I-80131 Napoli (Italy)

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 µm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s, with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient skin.
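
    The quoted resolution figures follow from standard pinhole geometry. A sketch, assuming the pinhole-to-detector distance is the one implied by the 70 mm FOV at 50 mm (it is not stated explicitly above):

```python
def pinhole_geometry(detector_mm, fov_mm, z_mm, d_eff_mm):
    """Pinhole collimator geometry: focal (pinhole-to-detector) distance f
    inferred from the desired field of view, the resulting minification,
    and the geometric resolution projected to the object plane."""
    f = detector_mm * z_mm / fov_mm
    minification = f / z_mm
    r_geom_mm = d_eff_mm * (z_mm + f) / f
    return f, minification, r_geom_mm

for d_eff in (0.94, 2.1):
    f, m, r = pinhole_geometry(14.08, 70.0, 50.0, d_eff)
    print(f"pinhole {d_eff} mm: f = {f:.1f} mm, minification 1:{1 / m:.1f}, "
          f"geometric resolution ~{r:.1f} mm FWHM")
```

    This simple formula gives ~5.6 mm and ~12.5 mm at 50 mm for the two pinholes, close to the 5.5 mm and 12.6 mm FWHM system resolutions reported above.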

  2. Results of shuttle EMU thermal vacuum tests incorporating an infrared imaging camera data acquisition system

    NASA Technical Reports Server (NTRS)

    Anderson, James E.; Tepper, Edward H.; Trevino, Louis A.

    1991-01-01

    Manned tests in Chamber B at NASA JSC were conducted in May and June of 1990 to better quantify the Space Shuttle Extravehicular Mobility Unit's (EMU) thermal performance in the cold environmental extremes of space. Use of an infrared imaging camera with real-time video monitoring of the output significantly added to the scope, quality and interpretation of the test conduct and data acquisition. Results of this test program have been effective in the thermal certification of a new insulation configuration and the '5000 Series' glove. In addition, the acceptable thermal performance of flight garments with visually deteriorated insulation was successfully demonstrated, thereby saving significant inspection and garment replacement cost. This test program also established a new method for collecting data vital to improving crew thermal comfort in a cold environment.

  3. Decision strategies that maximize the area under the LROC curve

    Microsoft Academic Search

    Parmeshwar Khurd; Gene Gindi

    2005-01-01

    For the 2-class detection problem (signal absent/present), the likelihood ratio is an ideal observer in that it minimizes Bayes risk for arbitrary costs and it maximizes AUC, the area under the ROC curve. The AUC-optimizing property makes it a valuable tool in imaging system optimization. If one considered a different task, namely, joint detection and localization of the signal,

  4. Estimating the camera direction of a geotagged image using reference images

    E-print Network

    …by recent research results that the additional global positioning system (GPS) information helps visual … the fusion of user photos and satellite images obtained using the global positioning system (GPS) information …

  5. Image quality tests on the Canarias InfraRed Camera Experiment (CIRCE)

    NASA Astrophysics Data System (ADS)

    Lasso Cabrera, Nestor M.; Eikenberry, Stephen S.; Garner, Alan; Raines, S. Nicholas; Charcos-Llorens, Miguel V.; Edwards, Michelle L.; Marin-Franch, Antonio

    2012-09-01

    In this paper we present the results of image quality tests performed on the optical system of the Canarias InfraRed Camera Experiment (CIRCE), a visitor-class near-IR imager, spectrograph, and polarimeter for the 10.4 meter Gran Telescopio Canarias (GTC). The CIRCE optical system is comprised of eight gold-coated aluminum alloy 6061 mirrors. We present a surface roughness analysis of each individual component as well as the optical quality of the whole system. We found that the surface roughness of each individual mirror is within specification, except for fold mirrors 1 and 2; we plan to have these components re-cut and re-coated. We used a flat 0.2-arcsecond pinhole mask placed in the focal plane of the telescope to perform the optical quality tests of the system. The pinhole mask covers the entire field of view of the instrument. The resulting image quality allows seeing-limited performance down to seeing of 0.3 arcseconds FWHM. We also observed that our optical system produces a negative field curvature, which compensates for the field curvature of the Ritchey-Chretien GTC design once the instrument is on the telescope.

  6. HERSCHEL/SCORE, imaging the solar corona in visible and EUV light: CCD camera characterization.

    PubMed

    Pancrazzi, M; Focardi, M; Landini, F; Romoli, M; Fineschi, S; Gherardi, A; Pace, E; Massone, G; Antonucci, E; Moses, D; Newmark, J; Wang, D; Rossi, G

    2010-07-01

    The HERSCHEL (helium resonant scattering in the corona and heliosphere) experiment is a rocket mission that was successfully launched last September from White Sands Missile Range, New Mexico, USA. HERSCHEL was conceived to investigate the solar corona in the extreme UV (EUV) and in the visible broadband polarized brightness and provided, for the first time, a global map of helium in the solar environment. The HERSCHEL payload consisted of a telescope, HERSCHEL EUV Imaging Telescope (HEIT), and two coronagraphs, HECOR (helium coronagraph) and SCORE (sounding coronagraph experiment). The SCORE instrument was designed and developed mainly by Italian research institutes and it is an imaging coronagraph to observe the solar corona from 1.4 to 4 solar radii. SCORE has two detectors for the EUV lines at 121.6 nm (HI) and 30.4 nm (HeII) and the visible broadband polarized brightness. The SCORE UV detector is an intensified CCD with a microchannel plate coupled to a CCD through a fiber-optic bundle. The SCORE visible light detector is a frame-transfer CCD coupled to a polarimeter based on a liquid crystal variable retarder plate. The SCORE coronagraph is described together with the performance of the cameras for imaging the solar corona. PMID:20428852

  7. LRO Camera Imaging of the Moon: Apollo 17 and other Sites for Ground Truth

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.; Wiseman, S. M.; Robinson, M. S.; Lawrence, S.; Denevi, B. W.; Bell, J. F.

    2009-12-01

    One of the fundamental goals of the Lunar Reconnaissance Orbiter (LRO) is the determination of mineralogic and compositional distributions and their relation to geologic features on the Moon’s surface. Through a combination of imaging with the LRO narrow-angle cameras and wide-angle camera (NAC, WAC), very fine-scale geologic features are resolved with better than meter-per-pixel resolution (NAC) and correlated to spectral variations mapped with the lower resolution, 7-band WAC (400-m/pix, ultraviolet bands centered at 321 and 360 nm; 100-m/pix, visible bands centered at 415, 566, 604, 643, and 689 nm). Keys to understanding spectral variations in terms of composition, and relationships between compositional variations and surface geology, are ground-truth sites where surface compositions and mineralogy, as well as geology and geologic history, are well known. The Apollo 17 site is especially useful because the site geology includes a range of features from high-Ti mare basalts to Serenitatis-Basin-related massifs containing basin impact-melt breccia and feldspathic highlands materials, and a regional black and orange pyroclastic deposit. Moreover, relative and absolute ages of these features are known. In addition to rock samples, astronauts collected well-documented soil samples at 22 different sample locations across this diverse area. Many of these sample sites can be located in the multispectral data using the co-registered NAC images. Digital elevation data are used to normalize illumination geometry and thus fully exploit the multispectral data and compare derived compositional parameters for different geologic units. Regolith characteristics that are known in detail from the Apollo 17 samples, such as maturity and petrography of mineral, glass, and lithic components, contribute to spectral variations and are considered in the assessment of spectral variability at the landing site. 
In this work, we focus on variations associated with the ilmenite content (a Ti-rich mineral) of the soils and with known compositional and mineralogic characteristics of different geomorphic units. Results will be compared to those derived from analysis of data from the Clementine UV-VIS camera and from the Hubble Space Telescope.

  8. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, as well as to give some indication of present weather. Similarly, during springtime, the camera images show the changes in the ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.
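
    Fractional snow cover from such images is often estimated by simple brightness thresholding of the ground pixels. A hedged sketch (the handbook does not specify the algorithm; the threshold value and the tiny test image are arbitrary illustrations):

```python
import numpy as np

def snow_fraction(gray_image, threshold=0.75):
    """Fraction of pixels brighter than a reflectance threshold: a crude
    stand-in for the fractional-snow-cover estimate mentioned above.
    The 0.75 threshold is an assumption, not taken from the handbook."""
    return float(np.mean(gray_image >= threshold))

# Synthetic 4x4 'scene': the left half is snow-bright, the right half dark ground.
img = np.array([[0.9, 0.9, 0.2, 0.1],
                [0.8, 0.95, 0.3, 0.2],
                [0.9, 0.85, 0.1, 0.15],
                [0.8, 0.9, 0.25, 0.3]])
print(snow_fraction(img))  # 0.5
```

    In practice the threshold would be tuned per camera and lighting condition, or replaced by a comparison against the radiometer-derived albedo the handbook mentions.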

  9. Determining patient 6-degrees-of-freedom motion from stereo infrared cameras during supine medical imaging

    NASA Astrophysics Data System (ADS)

    Beach, Richard D.; Feng, Bing; Shazeeb, Mohammed S.; King, Michael A.

    2006-03-01

    Patient motion during SPECT acquisition causes inconsistent projection data and reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT. The tracking of motion by infrared monitoring spherical reflectors (markers) on the patient's surface can provide 6-Degrees-of-Freedom (6-DOF) motion information capable of providing clinically robust correction. Object rigid-body motion can be described by 3 translational DOF and 3 rotational DOF. Polaris marker position information obtained by stereo infrared cameras requires algorithmic processing to correctly record the tracked markers, and to calibrate and map Polaris co-ordinate data into the SPECT co-ordinate system. Marker data then requires processing to determine the rotational and translational 6-DOF motion to ultimately be used for SPECT image corrections. This processing utilizes an algorithm involving least-squares fitting, to each other, of two 3-D point sets using singular value decomposition (SVD) resulting in the rotation matrix and translation of the rigid body centroid. We have demonstrated the ability to monitor 12 clinical patients as well as 7 markers on 2 elastic belts worn by a volunteer while intentionally moving, and determined the 3 axis Euclidian rotation angles and centroid translations. An anthropomorphic phantom with Tc-99m added to the heart, liver, and body was simultaneously SPECT imaged and motion tracked using 4 rigidly mounted markers. The determined rotation matrix and translation information was used to correct the image resulting in virtually identical "no motion" and "corrected" images. We plan to initiate routine 6-DOF tracking of patient motion during SPECT imaging in the future.
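
    The least-squares fit of two 3-D point sets via singular value decomposition described above is the classical Kabsch/Arun procedure. A minimal sketch with synthetic marker positions (the marker coordinates and motion are invented for the demonstration):

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t,
    via SVD of the cross-covariance of the centred 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: 4 markers rotated 10 degrees about z and shifted.
rng = np.random.default_rng(0)
P = rng.random((3, 4))
th = np.radians(10)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[5.0], [-2.0], [1.0]])
R, t = rigid_fit(P, R_true @ P + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

    With noisy marker data the same fit returns the least-squares rotation and centroid translation, which is what the 6-DOF correction described above requires.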

  10. A triple energy window scatter subtraction approach for quantitative anger camera imaging of iodine-131

    SciTech Connect

    Grant, E.J.; Macey, D.J.; Bayouth, J.E. [Univ. of Texas, Houston, TX (United States)] [and others]

    1994-05-01

    Dose estimates for organs and tumor volumes in radioimmunotherapy with I-131 frequently depend on in-vivo quantitation methods using planar Anger camera images. Compton scatter and collimator septal penetration result in overestimation of activity and dose. The objective of this study was to assess the effectiveness of a triple energy window subtraction method for quantitative imaging of I-131. The energy spectrum of I-131 was modeled as a superposition of the spectra of Cr-51 (320 keV) and Cs-137 (662 keV). Images were acquired with three adjacent 15% energy windows, photopeak (PP), upper scatter (US), and lower scatter (LS), for small sources of these radionuclides. The PP window was centered at 364 keV for I-131 and Cs-137 and at 320 keV for Cr-51. Three scatter multipliers were derived from analysis of count profiles of the Cs-137 and Cr-51 images, and used to sequentially remove septal penetration and scatter events included in the 364 keV photopeak of I-131. This method was tested by acquiring images of an abdominal phantom containing a liver, spleen, and spherical "tumor" filled with different concentrations of I-131, both with and without background activity in the surrounding phantom. A body thickness attenuation compensation factor was applied to the geometric mean of the conjugate view counts using a narrow-beam linear attenuation coefficient of 0.11 cm⁻¹. With scatter subtraction, the accuracy and reproducibility of activity quantitation were improved because the background count density was more uniformly scored. Also, the influence of different activity concentrations in source organs relative to background on the accuracy of quantitation was removed, and the perimeters of organs were more clearly defined. This method has been used to provide improved dose estimates for I-131 labeled antibody therapy in breast cancer patients.
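
    The quantitation pipeline described above, scatter-window subtraction followed by attenuation-compensated conjugate-view imaging, can be sketched as follows. Only the narrow-beam coefficient of 0.11 cm⁻¹ is taken from the abstract; the window counts, multiplier values, and system sensitivity are hypothetical:

```python
import math

def corrected_counts(pp, us, ls, k_us, k_ls):
    """Remove septal penetration and scatter from the 364 keV photopeak
    window using the two adjacent windows. The multipliers k_us and k_ls
    stand in for the calibration-derived values described above."""
    return max(pp - k_us * us - k_ls * ls, 0.0)

def conjugate_view_activity(ant, post, mu_cm, thickness_cm, sens_cps_per_mbq):
    """Geometric mean of anterior/posterior counts with body-thickness
    attenuation compensation, converted to activity via a sensitivity factor."""
    gm = math.sqrt(ant * post)
    return gm * math.exp(mu_cm * thickness_cm / 2.0) / sens_cps_per_mbq

# Illustrative numbers only (k values, counts, and sensitivity are invented).
c = corrected_counts(pp=12000, us=3000, ls=4000, k_us=0.5, k_ls=0.4)
a = conjugate_view_activity(ant=c, post=c * 0.8, mu_cm=0.11,
                            thickness_cm=20.0, sens_cps_per_mbq=50.0)
print(f"scatter-corrected counts: {c:.0f}, estimated activity: {a:.1f} MBq")
```

    The geometric mean makes the estimate approximately independent of the source depth within the body, which is why the single body-thickness factor suffices.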

  11. Improved determination of volcanic SO2 emission rates from SO2 camera images

    NASA Astrophysics Data System (ADS)

    Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Platt, Ulrich

    2015-04-01

    SO2 cameras determine the SO2 emissions of volcanoes with high temporal and spatial resolution. They thus visualize the plume morphology and give information about turbulence and plume dispersion. Moreover, since emission rates can be determined from SO2 camera image series with high time resolution (as will be explained below), these data can help to improve our understanding of variations in the degassing regime of volcanoes. The first step to obtain emission rates is to integrate the column amount of SO2 along two different plume cross sections (ideally perpendicular to the direction of plume propagation); combined with wind speed information, this allows the determination of SO2 fluxes. A popular method to determine the mean wind speed relies on estimating the time lag of the SO2 signal derived for two cross sections of the plume at different distances downwind of the source. This can be done by searching for the maximum cross-correlation coefficient of the two signals. Another, more sophisticated method to obtain the wind speed is to use the optical flow technique to derive a more detailed wind field in the plume from a series of SO2 camera images. While the cross-correlation method only gives the mean wind speed between the two cross sections of the plume, the optical flow technique allows the wind speed and direction to be determined for each pixel individually (in other words, a two-dimensional projection of the entire wind field in the plume is obtained). While optical flow algorithms in general give more detailed information about the wind velocities in the volcanic plume, they may fail to determine wind speeds in homogeneous regions of the plume (i.e., regions with no spatial variation in SO2 column densities). Usually the wind speed is automatically set to zero in those regions, which leads to an underestimation of the total SO2 emission flux. This behavior was observed more than once in a data set of SO2 camera images taken at Etna, Italy, in July 2014.
    For those data the cross-correlation method leads to a more realistic result, which was close to simultaneously measured SO2 fluxes calculated from spectra taken by a zenith-looking differential optical absorption spectroscopy (DOAS) instrument traversing underneath the plume. In the analyzed data the flux determined with the cross-correlation method was twice the flux determined with the optical flow algorithm. We further investigated the potential error in the SO2 flux determination caused by a slant view on the plume, a situation commonly encountered when observing volcanic SO2 fluxes by remote sensing techniques. Frequently it is difficult to determine the precise angle between the wind direction (i.e., the plume propagation direction) and the observation direction. We find that a plume propagation direction differing by more than 20 degrees from the assumed wind direction can cause the determined SO2 flux to deviate from the true value by more than 10 percent.
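
    The cross-correlation method described above reduces to finding the lag that maximizes the correlation between the column-amount time series at the two cross sections. A sketch with a synthetic plume signal (the sampling interval, cross-section separation, and signal are invented):

```python
import numpy as np

def wind_speed_xcorr(upwind, downwind, dt_s, separation_m):
    """Mean plume speed from the sample lag that maximizes the
    cross-correlation between SO2 column time series integrated along two
    plume cross sections separated by separation_m along the plume axis."""
    a = upwind - upwind.mean()
    b = downwind - downwind.mean()
    corr = np.correlate(b, a, mode="full")
    lag = int(corr.argmax()) - (len(a) - 1)  # samples by which downwind trails upwind
    return separation_m / (lag * dt_s)

# Synthetic check: the downwind series is the upwind one delayed by 5 samples,
# so with 1 s sampling and 50 m separation the retrieved speed is 10 m/s.
rng = np.random.default_rng(1)
upwind = rng.random(200)
downwind = np.roll(upwind, 5)
v = wind_speed_xcorr(upwind, downwind, dt_s=1.0, separation_m=50.0)
print(f"retrieved wind speed: {v:.1f} m/s")
```

    Multiplying this mean speed by the integrated column amount along a cross section then yields the flux; the abstract's point is that this single mean speed can be more robust than a per-pixel optical-flow field when the plume lacks texture.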

  12. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  13. The social camera: a case-study in contextual image recommendation

    Microsoft Academic Search

    Steven Bourke; Kevin McCarthy; Barry Smyth

    2011-01-01

    The digital camera revolution has changed the world of photography and now most people have access to, and even regularly carry, a digital camera. Often these cameras have been designed with simplicity in mind: they harness a variety of sophisticated technologies in order to automatically take care of all manner of complex settings (aperture, shutter speed, flash etc.) for point-and-shoot

  14. Multi-camera conical imaging; calibration and robust 3-D motion estimation for ROV based mapping and positioning

    Microsoft Academic Search

    Pezhman Firoozfam; Shahriar Negahdaripour

    2002-01-01

    Over the last decade, there has been an increasing interest in developing vision systems and technologies that support the operation of unmanned submersible platforms. Selected examples include the detection of obstacles and tracking of moving targets, station keeping and positioning, pipeline following, navigation and mapping. Currently, these developments rely on images from standard CCD cameras with a single optical center

  15. Exploiting Mutual Camera Visibility in Multi-camera Motion Estimation

    Microsoft Academic Search

    Christian Kurz; Thorsten Thormählen; Bodo Rosenhahn; Hans-peter Seidel

    2009-01-01

    This paper addresses the estimation of camera motion and 3D reconstruction from image sequences for multiple independently moving cameras. If multiple moving cameras record the same scene, a camera is often visible in another camera’s field of view. This poses a constraint on the position of the observed camera, which can be included into the conjoined optimization process. The paper

  16. Statistical performance evaluation and comparison of a Compton medical imaging system and a collimated Anger camera for higher energy photon imaging

    NASA Astrophysics Data System (ADS)

    Han, Li; Rogers, W. Leslie; Huh, Sam S.; Clinthorne, Neal

    2008-12-01

    In radionuclide treatment, tumor cells are primarily destroyed by charged particles emitted by the compound, while associated higher energy photons are used to image the tumor in order to determine radiation dose and monitor shrinkage. However, the higher energy photons are difficult to image with conventional collimated Anger cameras, since a tradeoff exists between resolution and sensitivity, and collimator septal penetration and scattering are increased by the high energy photons. This research compares the imaging performance of the conventional Anger camera to that of a Compton imaging system, which can have improved spatial resolution and sensitivity for high energy photons because this tradeoff is decoupled and the effect of Doppler broadening at higher gamma energies is decreased. System performance is analyzed with the modified uniform Cramér-Rao bound (M-UCRB) algorithms based on the developed system modeling. The bound shows that the effect of Doppler broadening is the limiting factor for Compton camera performance when imaging the 364.4 keV photons emitted by 131I. According to the bound, the Compton camera outperforms the collimated system for an equal number of detected events when the desired spatial resolution for a 26 cm diameter uniform disk object is better than 12 mm FWHM. For a 3D cylindrical phantom, the lower bound on variance for the collimated camera is greater than that for the Compton imager over the resolution range from 0.5 to 2 cm FWHM. Furthermore, the detection sensitivity of the proposed Compton imaging system is about 15-20 times higher than that of the collimated Anger camera.

  17. Scatter correction in scintillation camera imaging of positron-emitting radionuclides

    SciTech Connect

    Ljungberg, M.; Danfelter, M.; Strand, S.E. [Lund Univ. (Sweden)] [and others]

    1996-12-31

    The use of Anger scintillation cameras for positron SPECT has become of interest recently due to their use in imaging 2-¹⁸F-deoxyglucose. Due to the special crystal design (thin and wide), a significant amount of primary events will also be recorded in the Compton region of the energy spectra. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which needs to be corrected. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter projection data with a constant multiplier can estimate the scatter in the CW and PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.

  18. Digital chemiluminescence imaging of DNA sequencing blots using a charge-coupled device camera.

    PubMed

    Karger, A E; Weiss, R; Gesteland, R F

    1992-12-25

    Digital chemiluminescence imaging with a cryogenically cooled charge-coupled device (CCD) camera is used to visualize DNA sequencing fragments covalently bound to a blotting membrane. The detection is based on DNA hybridization with an alkaline phosphatase (AP) labeled oligodeoxyribonucleotide probe and AP-triggered chemiluminescence of the substrate 3-(2'-spiro-adamantane)-4-methoxy-4-(3"-phosphoryloxy)phenyl-1,2-dioxetane (AMPPD). The detection using a direct AP-oligonucleotide conjugate is compared to the secondary detection of biotinylated oligonucleotides with respect to their sensitivity and nonspecific binding to the nylon membrane by quantitative imaging. Using the direct oligonucleotide-AP conjugate as a hybridization probe, sub-attomol (0.5 pg of 2.7 kb pUC plasmid DNA) quantities of membrane-bound DNA are detectable with 30 min CCD exposures. Detection using the biotinylated probe in combination with streptavidin-AP was found to be background-limited by nonspecific binding of streptavidin-AP and the oligo(biotin-11-dUTP) label in equal proportions. In contrast, the nonspecific background of the AP-labeled oligonucleotide is indistinguishable from that seen with a 5'-³²P label, in that respect making AP an ideal enzymatic label. The effects of hybridization time, probe concentration, and the presence of luminescence enhancers on the detection of plasmid DNA were investigated. PMID:1480487

  19. First results from the Faint Object Camera - Imaging the core of R Aquarii

    NASA Technical Reports Server (NTRS)

    Paresce, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.

    1991-01-01

    The Faint Object Camera on the HST was pointed toward the symbiotic long-period M7e Mira variable R Aquarii, and very high resolution images of the inner core, mainly in the ionized oxygen emission lines in the optical, are reported. Both images show bright arcs, knots, and filaments superposed on a fainter, diffuse nebulosity extending in a general SW-NE direction from the variable to the edge of the field at 10 arcsec distance. The core is resolved in forbidden O III 5007 Å and forbidden O II 3727 Å into at least two bright knots of emission whose positions and structures are aligned with PA = 50 deg. The central knots appear to be the source of a continuous, well-collimated, stream of material extending out to 3-4 arcsec in the northern sector corresponding to a linear distance of about 1000 AU. The northern stream seems to bend around an opaque obstacle and form a spiral before breaking up into wisps and knots. The southern stream is composed of smaller, discrete parcels of emitting gas curving to the SE.

  20. A novel Compton camera design featuring a rear-panel shield for substantial noise reduction in gamma-ray images

    NASA Astrophysics Data System (ADS)

    Nishiyama, T.; Kataoka, J.; Kishimoto, A.; Fujita, T.; Iwamoto, Y.; Taya, T.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Sakurai, N.; Adachi, S.; Uchiyama, T.

    2014-12-01

    After the Japanese nuclear disaster in 2011, large amounts of radioactive isotopes were released and still remain a serious problem in Japan. Consequently, various gamma cameras are being developed to help identify radiation hotspots and ensure effective decontamination operations. The Compton camera utilizes the kinematics of Compton scattering to construct images without using a mechanical collimator, and features a wide field of view. For instance, we have developed a novel Compton camera that features a small size (13 × 14 × 15 cm³) and light weight (1.9 kg), but which also achieves high sensitivity thanks to Ce:GAGG scintillators optically coupled with MPPC arrays. By definition, in such a Compton camera, gamma rays are expected to scatter in the "scatterer" and then be fully absorbed in the "absorber" (in what is called a forward-scattered event). However, high energy gamma rays often interact with the detector in the opposite order, initially scattered in the absorber and then absorbed in the scatterer, in what is called a "back-scattered" event. Any contamination by such back-scattered events is known to substantially degrade the quality of gamma-ray images, but determining the order of gamma-ray interaction based solely on energy deposits in the scatterer and absorber is quite difficult. For this reason, we propose a novel yet simple Compton camera design that includes a rear-panel shield (a few mm thick) consisting of W or Pb located just behind the scatterer. Since the energy of scattered gamma rays in back-scattered events is much lower than that in forward-scattered events, we can effectively discriminate and reduce back-scattered events to improve the signal-to-noise ratio in the images. This paper presents our detailed optimization of the rear-panel shield using Geant4 simulation, and describes a demonstration test using our Compton camera.
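
    The energy separation that the rear-panel shield exploits follows directly from the Compton formula: the larger the scattering angle, the lower the scattered photon energy, so photons from back-scattered events (angles near 180°) are far easier to stop in a thin high-Z panel than the small-angle scattered photons of forward-scattered events. A sketch, taking 662 keV (the Cs-137 line prominent in the contamination) as an example:

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def scattered_energy(e_kev, theta_deg):
    """Photon energy after Compton scattering through theta_deg,
    from the Compton formula E' = E / (1 + (E/m_e c^2)(1 - cos theta))."""
    cos_t = math.cos(math.radians(theta_deg))
    return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - cos_t))

e0 = 662.0  # keV, Cs-137
for theta in (30, 90, 180):
    print(f"{theta:3d} deg: scattered photon energy {scattered_energy(e0, theta):6.1f} keV")
```

    A fully back-scattered 662 keV photon emerges with only ~184 keV, an energy at which a few millimetres of W or Pb are highly attenuating, while small-angle forward-scattered photons retain most of their original energy and pass through.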

  1. Development of a pixelated GSO gamma camera system with tungsten parallel hole collimator for single photon imaging

    SciTech Connect

    Yamamoto, S.; Watabe, H.; Kanai, Y.; Shimosegawa, E.; Hatazawa, J. [Kobe City College of Technology, 8-3 Gakuen-Higashi-machi, Nishi-ku, Kobe 651-2194 (Japan); Department of Molecular Imaging in Medicine, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan); Department of Nuclear Medicine and Tracer Kinetics, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan); Department of Molecular Imaging in Medicine, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan) and Department of Nuclear Medicine and Tracer Kinetics, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan)

    2012-02-15

    Purpose: In small animal imaging using a single photon emitting radionuclide, a high resolution gamma camera is required. Recently, position sensitive photomultiplier tubes (PSPMTs) with high quantum efficiency have been developed. By combining these with nonhygroscopic scintillators with a relatively low light output, a high resolution gamma camera can become useful for low energy gamma photons. Therefore, the authors developed a gamma camera by combining a pixelated Ce-doped Gd₂SiO₅ (GSO) block with a high quantum efficiency PSPMT. Methods: GSO was selected for the scintillator because it is not hygroscopic and does not contain any natural radioactivity. An array of 1.9 mm × 1.9 mm × 7 mm individual GSO crystal elements was constructed. These GSOs were combined with a 0.1-mm thick reflector to form a 22 × 22 matrix and optically coupled to a high quantum efficiency PSPMT (H8500C-100 MOD8). The GSO gamma camera was encased in a tungsten gamma-ray shield with a tungsten pixelated parallel hole collimator, and the basic performance was measured for Co-57 gamma photons (122 keV). Results: In a two-dimensional position histogram, all pixels were clearly resolved. The energy resolution was ~15% FWHM. With the 20-mm thick tungsten pixelated collimator, the spatial resolution was 4.4-mm FWHM at 40 mm from the collimator surface, and the sensitivity was ~0.05%. Phantom and small animal images were successfully obtained with our developed gamma camera. Conclusions: These results confirmed that the developed pixelated GSO gamma camera has potential as an effective instrument for low energy gamma photon imaging.

  2. Factors affecting the repeatability of gamma camera calibration for quantitative imaging applications using a sealed source.

    PubMed

    Anizan, N; Wang, H; Zhou, X C; Wahl, R L; Frey, E C

    2015-02-01

    Several applications in nuclear medicine require absolute activity quantification of single photon emission computed tomography images. Obtaining a repeatable calibration factor that converts voxel values to activity units is essential for these applications. Because source preparation and measurement of the source activity using a radionuclide activity meter are potential sources of variability, this work investigated instrumentation and acquisition factors affecting repeatability using planar acquisition of sealed sources. The calibration factor was calculated for different acquisition and geometry conditions to evaluate the effect of the source size, lateral position of the source in the camera field-of-view (FOV), source-to-camera distance (SCD), and variability over time using sealed Ba-133 sources. A small region of interest (ROI) based on the source dimensions and collimator resolution was investigated to decrease the background effect. A statistical analysis with a mixed-effects model was used to evaluate quantitatively the effect of each variable on the global calibration factor variability. A variation of 1 cm in the measurement of the SCD from the assumed distance of 17 cm led to a variation of 1-2% in the calibration factor measurement using a small disc source (0.4 cm diameter) and less than 1% with a larger rod source (2.9 cm diameter). The lateral position of the source in the FOV and the variability over time had small impacts on calibration factor variability. The residual error component was well estimated by Poisson noise. Repeatability of better than 1% in a calibration factor measurement using a planar acquisition of a sealed source can be reasonably achieved. The best reproducibility was obtained with the largest source with a count rate much higher than the average background in the ROI, and when the SCD was positioned within 5 mm of the desired position. In this case, calibration source variability was limited by the quantum noise.
PMID:25592130
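At its core, the planar calibration-factor measurement described above reduces to dividing an ROI count rate by the decay-corrected source activity. A minimal sketch, assuming illustrative numbers and function names not taken from the paper (the default half-life approximates Ba-133, about 10.5 years):

```python
import math

def calibration_factor(counts, acq_time_s, activity_mbq_ref, elapsed_days,
                       half_life_days=3854.0):
    """Planar calibration factor in cps/MBq from a sealed-source acquisition.

    half_life_days defaults to roughly the Ba-133 half-life (~10.5 years).
    All names and numbers here are illustrative assumptions."""
    # Decay-correct the reference activity to the acquisition date.
    activity_now = activity_mbq_ref * math.exp(
        -math.log(2.0) * elapsed_days / half_life_days)
    # Count rate in the ROI divided by present activity.
    return (counts / acq_time_s) / activity_now

# 1.2e6 counts in 600 s from a 10 MBq source measured on its reference date:
cf = calibration_factor(1.2e6, 600.0, 10.0, elapsed_days=0.0)
```

The paper's point about ROI size and background then enters through which counts are summed; a decay error or an SCD error propagates directly into this single scale factor.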

  3. Evaluating intensified camera systems

    SciTech Connect

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components, including microchannel-plate or proximity-focused diode image intensifiers, electrostatic image tubes, and electron-bombarded CCDs, affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.

  4. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna.

    PubMed

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig

    2015-01-01

    Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km^2 in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures by 2013. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final 'consensus' dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
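The abstract does not specify the aggregation algorithm; a plurality vote with an agreement score is one minimal way to sketch such a consensus classification (the function name and labels below are illustrative assumptions, not the authors' method):

```python
from collections import Counter

def consensus(classifications):
    """Aggregate one image's volunteer species labels into a consensus answer.

    Returns (winning_label, agreement), where agreement is the fraction of
    volunteers who chose the winner -- a simple plurality rule."""
    tally = Counter(classifications)
    label, votes = tally.most_common(1)[0]
    return label, votes / len(classifications)

# Four volunteers classified the same image:
label, agreement = consensus(["zebra", "zebra", "wildebeest", "zebra"])
```

Low agreement then flags images worth expert review, which matches the paper's idea of publishing both the consensus label and the level of volunteer agreement.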

  6. On-Orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R.; Robinson, M. S.

    2013-12-01

    Lunar Reconnaissance Orbiter (LRO) is equipped with a single Wide Angle Camera (WAC) [1] designed to collect monochromatic and multispectral observations of the lunar surface. Cartographically accurate image mosaics and stereo-image-based terrain models require that the position of each pixel in a given image be tied to a corresponding point on the lunar surface with a high degree of accuracy and precision. The Lunar Reconnaissance Orbiter Camera (LROC) team initially characterized the WAC geometry prior to launch at the Malin Space Science Systems calibration facility. After lunar orbit insertion, the LROC team recognized spatially varying geometric offsets between color bands. These misregistrations made analysis of the color data problematic and showed that refinements to the pre-launch geometric analysis were necessary. The geometric parameters that define the WAC optical system were characterized from statistics gathered from co-registering over 84,000 image pairs. For each pair, we registered all five visible WAC bands to a precisely rectified Narrow Angle Camera (NAC) image (accuracy <15 m) [2] to compute key geometric parameters. In total, we registered 2,896 monochrome and 1,079 color WAC observations to nearly 34,000 NAC observations and collected over 13.7 million data points across the visible portion of the WAC CCD. Using the collected statistics, we refined the relative pointing (yaw, pitch and roll), effective focal length, principal point coordinates, and radial distortion coefficients. This large dataset also revealed spatial offsets between bands after orthorectification due to chromatic aberrations in the optical system. As white light enters the optical system, the light bends by different amounts as a function of wavelength, causing a single incident ray to disperse in a spectral spread of color [3,4].
This lateral chromatic aberration effect, also known as 'chromatic difference in magnification' [5], introduces variation into the effective focal length of each WAC band. Second, tangential distortions caused by minor decentering in the optical system altered the derived exterior orientation parameters for each 14-line WAC band. We computed the geometric parameter sets separately for each band to characterize the lateral chromatic aberrations and the decentering components in the WAC optical system. This approach obviated the need for additional tangential terms in the distortion model, reducing the number of computations during image orthorectification and thereby expediting the process. We undertook a similar process to refine the geometry of the UV bands (321 and 360 nm), except that we registered each UV band to orthorectified visible bands of the same WAC observation (the visible bands have resolutions 4 times greater than the UV). The resulting 7-band camera model with refined geometric parameters enables map projection with sub-pixel accuracy. References: [1] Robinson et al. (2010) Space Sci. Rev. 150, 81-124 [2] Wagner et al. (2013) Lunar Sci Forum [3] Mahajan, V.N. (1998) Optical Imaging and Aberrations [4] Fiete, R.D. (2013), Manual of Photogrammetry, pp. 359-450 [5] Brown, D.C. (1966) Photogrammetric Eng. 32, 444-462.

  7. Retrieval of sulfur dioxide from a ground-based thermal infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.; Bernardo, C.

    2014-09-01

    Recent advances in uncooled detector technology now offer the possibility of using relatively inexpensive thermal (7 to 14 μm) imaging devices as tools for studying and quantifying the behaviour of hazardous gases and particulates in atmospheric plumes. An experimental fast-sampling (60 Hz) ground-based uncooled thermal imager (Cyclops), operating with four spectral channels at central wavelengths of 8.6, 10, 11 and 12 μm and one broadband channel (7-14 μm), has been tested at several volcanoes and at an industrial site, where SO2 was a major constituent of the plumes. This paper presents new algorithms, which include atmospheric corrections to the data and better calibrations, to show that SO2 slant column density can be reliably detected and quantified. Our results indicate that it is relatively easy to identify and discriminate SO2 in plumes, but more challenging to quantify the column densities. A full description of the retrieval algorithms, illustrative results and a detailed error analysis are provided. The noise-equivalent temperature difference (NEΔT) of the spectral channels, a fundamental measure of the quality of the measurements, lies between 0.4 and 0.8 K, resulting in slant column density errors of 20%. Frame averaging and improved NEΔT's can reduce this error to less than 10%, making stand-off, day or night operation of an instrument of this type very practical both for monitoring industrial SO2 emissions and for SO2 column density and emission measurements at active volcanoes. The imaging camera system may also be used to study thermal radiation from meteorological clouds and the atmosphere.

  8. Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera.

    PubMed

    Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico

    2014-06-16

    We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s. PMID:24977536

  9. Simultaneously Capturing Real-time Images in Two Emission Channels Using a Dual Camera Emission Splitting System: Applications to Cell Adhesion

    PubMed Central

    Carlson, Grady E.; Martin, Eric W.; Burdick, Monica M.

    2015-01-01

    Multi-color immunofluorescence microscopy to detect specific molecules in the cell membrane can be coupled with parallel plate flow chamber assays to investigate mechanisms governing cell adhesion under dynamic flow conditions. For instance, cancer cells labeled with multiple fluorophores can be perfused over a potentially reactive substrate to model mechanisms of cancer metastasis. However, multi-channel single camera systems and color cameras exhibit shortcomings in image acquisition for real-time live cell analysis. To overcome these limitations, we used a dual camera emission splitting system to simultaneously capture real-time image sequences of fluorescently labeled cells in the flow chamber. Dual camera emission splitting systems filter defined wavelength ranges into two monochrome CCD cameras, thereby simultaneously capturing two spatially identical but fluorophore-specific images. Subsequently, the pseudocolored single-channel images are combined into a single real-time merged sequence that can reveal multiple target molecules on cells moving rapidly across a region of interest. PMID:24056855

  10. Test of Compton camera components for prompt gamma imaging at the ELBE bremsstrahlung beam

    NASA Astrophysics Data System (ADS)

    Hueso-González, F.; Golnik, C.; Berthel, M.; Dreyer, A.; Enghardt, W.; Fiedler, F.; Heidel, K.; Kormoll, T.; Rohling, H.; Schöne, S.; Schwengner, R.; Wagner, A.; Pausch, G.

    2014-05-01

    In the context of ion beam therapy, particle range verification is a major challenge for the quality assurance of the treatment. One approach is the measurement of the prompt gamma rays resulting from the tissue irradiation. A Compton camera based on several position sensitive gamma ray detectors, together with an imaging algorithm, is expected to reconstruct the prompt gamma ray emission density map, which is correlated with the dose distribution. At OncoRay and Helmholtz-Zentrum Dresden-Rossendorf (HZDR), a Compton camera setup is being developed consisting of two scatter planes: two CdZnTe (CZT) cross strip detectors, and an absorber consisting of one Lu2SiO5 (LSO) block detector. The data acquisition is based on VME electronics and handled by software developed on the ROOT framework. The setup has been tested at the linear electron accelerator ELBE at HZDR, which is used in this experiment to produce bunched bremsstrahlung photons with up to 12.5 MeV energy and a repetition rate of 13 MHz. Their spectrum has similarities with the shape expected from prompt gamma rays in the clinical environment, and the flux is also bunched with the accelerator frequency. The charge sharing effect of the CZT detector is studied qualitatively for different energy ranges. The LSO detector pixel discrimination resolution is analyzed and it shows a trend to improve for high energy depositions. The time correlation between the pulsed prompt photons and the measured detector signals, to be used for background suppression, exhibits a time resolution of 3 ns FWHM for the CZT detector and of 2 ns for the LSO detector. A time walk correction and pixel-wise calibration is applied for the LSO detector, whose resolution improves up to 630 ps. In conclusion, the detector setup is suitable for time-resolved background suppression in pulsed clinical particle accelerators. Ongoing tasks are the quantitative comparison with simulations and the test of imaging algorithms. 
Experiments at proton accelerators have also been performed and are currently under analysis.
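For context, a Compton camera constrains each incident gamma to a cone whose opening angle follows from the Compton kinematics of the two energy deposits. A short sketch of that standard relation, not code from the described setup:

```python
import math

MEC2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the Compton cone from two energy deposits.

    e_scatter_kev: energy left in the scatter plane (e.g. the CZT detectors),
    e_absorb_kev:  scattered-photon energy absorbed in the absorber (e.g. LSO).
    Uses cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), with E0 = E_scatter + E'."""
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))
```

Each reconstructed cone intersects the patient volume; accumulating many cones is what lets the imaging algorithm recover the prompt-gamma emission density map.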

  11. Classification of volcanic ash particles from Sakurajima volcano using CCD camera image and cluster analysis

    NASA Astrophysics Data System (ADS)

    Miwa, T.; Shimano, T.; Nishimura, T.

    2012-12-01

    Quantitative and speedy characterization of volcanic ash particles is needed to conduct petrologic monitoring of an ongoing eruption. We develop a new simple system using CCD camera images for quantitatively characterizing ash properties, and apply it to volcanic ash collected at Sakurajima. Our method characterizes volcanic ash particles by 1) apparent luminance through RGB filters and 2) a quasi-fractal dimension of the shape of particles. Using a monochromatic CCD camera (Starshoot by Orion Co. LTD.) attached to a stereoscopic microscope, we capture digital images of ash particles that are set on a glass plate under which white colored paper or a polarizing plate is set. The images of 1390 × 1080 pixels are taken through three color filters (Red, Green and Blue) under incident light and under light transmitted through the polarizing plate. Brightness of the light sources is held constant, and luminance is calibrated with white and black colored papers. About fifteen ash particles are set on the plate at a time, and their images are saved in bitmap format. We first extract the outlines of particles from the image taken under transmitted light through the polarizing plate. Then, luminances for each color are represented by 256 tones at each pixel within the particles, and the average and its standard deviation are calculated for each ash particle. We also measure the quasi-fractal dimension (qfd) of ash particles. We perform box counting, counting the number of boxes of 1×1 pixels and of 128×128 pixels that overlap the area of the ash particle; the qfd is estimated from the ratio of the former number to the latter. These parameters are calculated using the software R. We characterize volcanic ash from the Showa crater of Sakurajima collected on two days (Feb 09, 2009, and Jan 13, 2010), and apply cluster analyses. Dendrograms are formed from the qfd and the following four parameters calculated from the luminance: Rf=R/(R+G+B), G=G/(R+G+B), B=B/(R+G+B), and total luminance=(R+G+B)/665. We classify the volcanic ash particles from the dendrograms into three groups based on the Euclidean distance. The groups are named Group A, B and C in order of increasing average total luminance. The classification shows that the numbers of particles belonging to Groups A, B and C are 77, 25 and 6 in the Feb 09, 2009 sample, and 102, 19 and 6 in the Jan 13, 2010 sample, respectively. Examination under the stereoscopic microscope suggests that Groups A, B and C mainly correspond to juvenile, altered and free-crystal particles, respectively. Thus the classification by the present method demonstrates a difference in the contribution of juvenile material between the two days. To evaluate the reliability of our classification, we classify pseudo-samples in which errors of 10% are added to the measured parameters. We apply our method to one thousand pseudo-samples, and the result shows that the numbers of particles classified into the three groups vary by less than 20% of the total number of 235 particles. Our system can classify 120 particles within 6 minutes, so we can easily increase the number of ash particles, enabling us to improve the reliability and resolution of the classification and to rapidly capture temporal changes in the properties of ash particles from active volcanoes.
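The two-scale box-counting measure described above can be sketched as follows (a simplified illustration in Python rather than the authors' R implementation; array sizes and helper names are assumptions):

```python
import numpy as np

def box_count(mask, box):
    """Number of box-by-box cells that contain at least one particle pixel."""
    h, w = mask.shape
    count = 0
    for y in range(0, h, box):
        for x in range(0, w, box):
            if mask[y:y + box, x:x + box].any():
                count += 1
    return count

def quasi_fractal_dimension(mask, small=1, large=128):
    """Quasi-fractal dimension from two box sizes:
    D = log(N_small / N_large) / log(large / small)."""
    n_small = box_count(mask, small)
    n_large = box_count(mask, large)
    return np.log(n_small / n_large) / np.log(large / small)

# A filled square: occupied boxes scale with area, so D should come out near 2.
square = np.zeros((256, 256), dtype=bool)
square[:128, :128] = True
```

A compact convex particle thus scores near 2, while a ragged outline with holes scores lower, which is what makes the qfd useful alongside luminance in the cluster analysis.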

  12. Removing cosmic-ray hits from multiorbit HST Wide Field Camera images

    NASA Technical Reports Server (NTRS)

    Windhorst, Rogier A.; Franklin, Barbara E.; Neuschaefer, Lyman W.

    1994-01-01

    We present an optimized algorithm that removes cosmic rays ('CRs') from multiorbit Hubble Space Telescope (HST) Wide Field/Planetary Camera ('WF/PC') images. It computes the image noise in every iteration from the WF/PC CCD equation. This includes all known sources of random and systematic calibration errors. We test this algorithm on WF/PC stacks of 2-12 orbits as a function of the number of available orbits and the formal Poissonian sigma-clipping level. We find that the algorithm needs ≥4 WF/PC exposures to locate the minimal sky signal (which is noticeably affected by CRs), with an optimal clipping level at 2-2.5 × σ_Poisson. We analyze the CR flux detected on multiorbit 'CR stacks', which are constructed by subtracting the best CR-filtered images from the unfiltered 8-12 orbit average. We use an automated object finder to determine the surface density of CRs as a function of the apparent magnitude (or ADU flux) they would have generated in the images had they not been removed. The power-law slope of the CR 'counts' (γ ≈ 0.6 for N(m) ∝ m^γ) is steeper than that of the faint galaxy counts down to V ≈ 28 mag. The CR counts show a drop-off between 28 ≲ V ≲ 30 mag (the latter is our formal 2σ point-source sensitivity without spherical aberration). This prevents the CR sky integral from diverging, and is likely due to a real cutoff in the CR energy distribution below ≈11 ADU per orbit. The integral CR surface density is ≲10^8 per sq. deg, and their sky signal is V ≈ 25.5-27.0 mag/sq. arcsec, or 3%-13% of our NEP sky background (V = 23.3 mag/sq. arcsec), and well above the EBL integral of the deepest galaxy counts (B_J ≈ 28.0 mag/sq. arcsec). We conclude that faint CRs will always contribute to the sky signal in the deepest WF/PC images. Since WFPC2 has ≈2.7× lower read noise and a thicker CCD, this will result in more CR detections than in WF/PC, potentially affecting ≈10%-20% of the pixels in multiorbit WFPC2 data cubes.
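A generic iterative sigma-clipping combine, of the kind the algorithm above refines with a full CCD noise model, might look like this (a simplified sketch using a per-pixel empirical sigma rather than the WF/PC CCD equation; names and defaults are assumptions):

```python
import numpy as np

def cr_clean_stack(stack, kappa=2.25, iterations=3):
    """Combine registered exposures of shape (n_frames, h, w), rejecting
    pixels more than kappa*sigma above the per-pixel median as cosmic rays."""
    stack = np.asarray(stack, dtype=float)
    good = np.ones(stack.shape, dtype=bool)
    for _ in range(iterations):
        masked = np.where(good, stack, np.nan)
        center = np.nanmedian(masked, axis=0)
        sigma = np.nanstd(masked, axis=0)
        # Reject only positive outliers: CRs deposit charge, never remove it.
        good &= stack <= center + kappa * np.maximum(sigma, 1e-12)
    return np.nanmean(np.where(good, stack, np.nan), axis=0)
```

The paper's finding that ≥4 exposures are needed shows up here too: with only 2-3 frames the per-pixel median and sigma are too poorly determined to separate CR hits from noise.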

  13. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera s image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target s fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
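Calibration data of the kind ACAL generates pairs known 3D fiducial coordinates with measured 2D image locations. For an ideal distortion-free pinhole model the forward 3D-to-2D mapping can be sketched as follows (an illustrative model, not ACAL's actual camera model; all names and numbers are assumptions):

```python
import numpy as np

def project_points(points_3d, f, cx, cy):
    """Ideal pinhole projection: map camera-frame 3D points to 2D pixels.

    f: focal length in pixels; (cx, cy): principal point in pixels."""
    pts = np.asarray(points_3d, dtype=float)
    u = f * pts[:, 0] / pts[:, 2] + cx
    v = f * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

# Known target fiducials (meters, camera frame) and their predicted pixels:
fiducials = np.array([[0.0, 0.0, 1.0],
                      [0.1, 0.0, 1.0],
                      [0.0, 0.2, 2.0]])
pixels = project_points(fiducials, f=1000.0, cx=320.0, cy=240.0)
```

Calibration runs this mapping in reverse: given many (3D, 2D) correspondences, it solves for f, (cx, cy), distortion terms, and pose by minimizing the reprojection error.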

  14. UVUDF: Ultraviolet Imaging of the Hubble Ultradeep Field with Wide-field Camera 3

    E-print Network

    Teplitz, Harry I; Kurczynski, Peter; Bond, Nicholas A; Grogin, Norman; Koekemoer, Anton M; Atek, Hakim; Brown, Thomas M; Coe, Dan; Colbert, James W; Ferguson, Henry C; Finkelstein, Steven L; Gardner, Jonathan P; Gawiser, Eric; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel J; Lee, Kyoung-Soo; de Mello, Duilia F; Ravindranath, Swara; Ryan, Russell E; Siana, Brian D; Scarlata, Claudia; Soto, Emmaris; Voyer, Elysse N; Wolfe, Arthur M

    2013-01-01

    We present an overview of a 90-orbit Hubble Space Telescope treasury program to obtain near ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (i) Investigate the episode of peak star formation activity in galaxies at 1

  15. A double photomultiplier Compton camera and its readout system for mice imaging

    SciTech Connect

    Fontana, Cristiano Lino [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Padova, Via Marzolo 8, Padova 35131 (Italy); Atroshchenko, Kostiantyn [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Legnaro, Viale dell'Universita 2, Legnaro PD 35020 (Italy); Baldazzi, Giuseppe [Physics Department, University of Bologna, Viale Berti Pichat 6/2, Bologna 40127, Italy and INFN Bologna, Viale Berti Pichat 6/2, Bologna 40127 (Italy); Bello, Michele [INFN Legnaro, Viale dell'Universita 2, Legnaro PD 35020 (Italy); Uzunov, Nikolay [Department of Natural Sciences, Shumen University, 115 Universitetska str., Shumen 9712, Bulgaria and INFN Legnaro, Viale dell'Universita 2, Legnaro PD 35020 (Italy); Di Domenico, Giovanni [Physics Department, University of Ferrara, Via Saragat 1, Ferrara 44122 (Italy) and INFN Ferrara, Via Saragat 1, Ferrara 44122 (Italy)

    2013-04-19

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs 'electronic collimation', i.e. a technique that traces the gamma rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second (the calorimeter), in which the scattered gamma is completely absorbed. From these measurements the polar angle, and hence a 'cone' of possible incident directions, is obtained (an event with 'incomplete geometry'). Different solutions for the two detectors are proposed in the literature: our design foresees two similar position-sensitive photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge multiplexing net of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Assets are the low cost and the simplicity of design and operation, since just one type of device is used; among the drawbacks is a lower resolution with respect to more sophisticated trackers and a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that the spatial and energy accuracy is suitable for the inspection of radioactivity in mice.

  16. SPEMT imaging with a dedicated VAoR dual-head camera: preliminary results

    NASA Astrophysics Data System (ADS)

    Camarda, M.; Belcari, N.; Del Guerra, A.; Vecchio, S.; Bennati, P.; Cinti, M. N.; Pani, R.; Campanini, R.; Iampieri, E.; Lanconelli, N.

    2009-10-01

    We have developed a SPEMT (Single Photon Emission MammoTomography) scanner that is made up of two cameras rotating around the pendulous breast of the prone patient, in Vertical Axis of Rotation (VAoR) geometry. Monte Carlo simulations indicate that the device should be able to detect tumours of 8 mm diameter with a tumour/background uptake ratio of 5:1. The scanner field of view is 41.6 mm in height and 147 mm in diameter. Each head is composed of one pixelated NaI(Tl) crystal matrix coupled to three Hamamatsu H8500 64-anode PMTs read out via resistive networks. Dedicated software has been developed to combine data from the different PMTs, thus recovering the dead areas between adjacent tubes. A single head has been fully characterized in a stationary configuration, in both active and dead areas, using a point-like source in order to verify the effectiveness of the readout method in recovering the dead regions. The scanner has been installed at the Nuclear Medicine Division of the University of Pisa for validation using breast phantoms. The first tomographic images of a breast phantom show good agreement with Monte Carlo simulation results.

  17. [Evaluation of the efficiency of a multi-crystal scintillation camera, the Digirad 2020tc Imager, using solid-state detectors].

    PubMed

    Narita, H; Kawaida, Y; Ooshita, T; Itoh, T; Tsuchida, D; Fukumitsu, N; Mori, Y; Makino, M

    2001-07-01

    The Digirad 2020tc Imager is a mobile scintillation camera combining multi-crystal CsI(Tl) scintillators with photodiodes. There are 4096 elements in total, divided into 16 x 16 modules; each module contains 4 x 4 elements. We have examined the Digirad 2020tc according to NEMA (National Electrical Manufacturers Association) standards and obtained the following results: maximum count rate, 221 kcps; total system uniformity, 1.3% (integral) and 0.9% (differential); system spatial resolution, 6.97 +/- 0.72 mm (LEHR collimator, 99mTc source at 10 cm); intrinsic energy resolution, 12.8%; total system sensitivity, 3270.8 cpm/MBq (LEHR collimator, 99mTc source at 10 cm). Furthermore, we determined the contrast of an image of a pinhole (100 micron diameter) 99mTc source in order to determine the signal-to-noise (S/N) ratio among the pixels (S/N: 93.4 +/- 46.2, first pixels). Although the camera has a smaller field of view than a standard camera, the Digirad 2020tc has equivalent characteristics, and its field of view is sufficient to measure adult lung perfusion using a diverging collimator. We will further examine the Digirad 2020tc, given its portability, and expect applications in nuclear medicine. PMID:11530383

  18. Development of a large-angle pinhole gamma camera with depth-of-interaction capability for small animal imaging

    NASA Astrophysics Data System (ADS)

    Baek, C.-H.; An, S. J.; Kim, H.-I.; Choi, Y.; Chung, Y. H.

    2012-01-01

    A large-angle gamma camera was developed for imaging small animal models used in medical and biological research. A simulation study shows that a large field of view (FOV) system provides higher sensitivity than typical pinhole gamma cameras by reducing the distance between the pinhole and the object. However, such a gamma camera suffers from degradation of the spatial resolution in the peripheral region due to parallax error from obliquely incident photons. We propose a new method to measure the depth of interaction (DOI) using three layers of monolithic scintillators to reduce the parallax error. The detector module consists of three layers of monolithic CsI(Tl) crystals with dimensions of 50.0 × 50.0 × 2.0 mm^3, a Hamamatsu H8500 PSPMT and a large-angle pinhole collimator with an acceptance angle of 120°. The 3-dimensional event positions were determined by the maximum-likelihood position-estimation (MLPE) algorithm and a pre-generated look-up table (LUT). The spatial resolution (FWHM) for a Co-57 point-like source was measured at different source positions with the conventional method (Anger logic) and with DOI information. We proved that high sensitivity can be achieved without degradation of spatial resolution using a large-angle pinhole gamma camera: this system can be used as a small animal imaging tool.
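The MLPE step can be sketched generically: a pre-generated LUT stores the expected detector response for each candidate (x, y, depth) position, and an event is assigned to the candidate that maximizes the Poisson log-likelihood of the measured signals. A minimal illustration (the LUT values and shapes here are made up, not the authors'):

```python
import numpy as np

def mlpe(event, lut):
    """Maximum-likelihood position estimation against a pre-generated LUT.

    event: measured channel signals, shape (n_ch,), treated as Poisson counts.
    lut:   expected mean signals per candidate position, shape (n_pos, n_ch).
    Returns the index of the candidate position maximizing the log-likelihood."""
    lut = np.clip(np.asarray(lut, dtype=float), 1e-12, None)
    # Poisson log-likelihood up to an event-constant term: sum(k*log(m) - m).
    loglike = event @ np.log(lut).T - lut.sum(axis=1)
    return int(np.argmax(loglike))
```

With a three-layer LUT the candidate grid spans depth as well, which is how the DOI estimate falls out of the same maximization.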

  19. A compact, discrete CsI(Tl) scintillator/Si photodiode gamma camera for breast cancer imaging

    SciTech Connect

    Gruber, Gregory J.

    2000-12-01

    Recent clinical evaluations of scintimammography (radionuclide breast imaging) are promising and suggest that this modality may prove a valuable complement to X-ray mammography and traditional breast cancer detection and diagnosis techniques. Scintimammography, however, typically has difficulty revealing tumors that are less than 1 cm in diameter, are located in the medial part of the breast, or are located in the axillary nodes. These shortcomings may in part be due to the use of large, conventional Anger cameras not optimized for breast imaging. In this thesis I present compact single photon camera technology designed specifically for scintimammography which strives to alleviate some of these limitations by allowing better and closer access to sites of possible breast tumors. Specific applications are outlined. The design is modular, thus a camera of the desired size and geometry can be constructed from an array (or arrays) of individual modules and a parallel hole lead collimator for directional information. Each module consists of: (1) an array of 64 discrete, optically-isolated CsI(Tl) scintillator crystals 3 × 3 × 5 mm^3 in size, (2) an array of 64 low-noise Si PIN photodiodes matched 1-to-1 to the scintillator crystals, (3) an application-specific integrated circuit (ASIC) that amplifies the 64 photodiode signals and selects the signal with the largest amplitude, and (4) connectors and hardware for interfacing the module with a motherboard, thereby allowing straightforward computer control of all individual modules within a camera.

  20. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E. M.; Gaddis, L. R.; Johnson, J. R.; Soderblom, L. A.; Ward, A. W.; Smith, P. H.; Britt, D. T.

    1999-04-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10³ features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3×10⁵ closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used.
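Step 2 above (computing 3-D coordinates of matched points from the step-1 camera solution) amounts to ray triangulation. Below is a minimal midpoint-triangulation sketch with hypothetical camera geometry; the actual pipeline uses a rigorous photogrammetric adjustment.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation of two camera rays (sketch).

    c1, c2 : camera centers; d1, d2 : unit ray directions through the
    matched image points (known once camera position and pointing are
    solved).  Returns the 3-D point midway between the rays' points of
    closest approach.
    """
    # Solve for ray parameters t1, t2 minimizing |c1 + t1*d1 - (c2 + t2*d2)|
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two cameras 1 m apart, both looking at a point at (0.5, 0, 2)
d1 = np.array([0.5, 0.0, 2.0]); d1 /= np.linalg.norm(d1)
d2 = np.array([-0.5, 0.0, 2.0]); d2 /= np.linalg.norm(d2)
p = triangulate(np.array([0.0, 0.0, 0.0]), d1,
                np.array([1.0, 0.0, 0.0]), d2)
print(np.round(p, 6))  # recovers approximately (0.5, 0, 2)
```

Repeating this over ~3×10⁵ matched points (step 2) is what yields the DTM; bundle-style adjustment (step 1) is what makes the rays consistent in the first place.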

  1. UVUDF: Ultraviolet imaging of the Hubble ultra deep field with wide-field camera 3

    SciTech Connect

    Teplitz, Harry I.; Rafelski, Marc; Colbert, James W.; Hanish, Daniel J. [Infrared Processing and Analysis Center, MS 100-22, Caltech, Pasadena, CA 91125 (United States); Kurczynski, Peter; Gawiser, Eric [Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 (United States); Bond, Nicholas A.; Gardner, Jonathan P.; De Mello, Duilia F. [Laboratory for Observational Cosmology, Astrophysics Science Division, Code 665, Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Grogin, Norman; Koekemoer, Anton M.; Brown, Thomas M.; Coe, Dan; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Atek, Hakim [Laboratoire d'Astrophysique, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire, CH-1290 Sauverny (Switzerland); Finkelstein, Steven L. [Department of Astronomy, The University of Texas at Austin, Austin, TX 78712 (United States); Giavalisco, Mauro [Astronomy Department, University of Massachusetts, Amherst, MA 01003 (United States); Gronwall, Caryl [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Lee, Kyoung-Soo [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907 (United States); Ravindranath, Swara, E-mail: hit@ipac.caltech.edu [Inter-University Centre for Astronomy and Astrophysics, Pune (India); and others

    2013-12-01

    We present an overview of a 90 orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5; (2) probe the evolution of massive galaxies by resolving sub-galactic units (clumps); (3) examine the escape fraction of ionizing radiation from galaxies at z ~ 2-3; (4) greatly improve the reliability of photometric redshift estimates; and (5) measure the star formation rate efficiency of neutral atomic-dominated hydrogen gas at z ~ 1-3. In this overview paper, we describe the survey details and data reduction challenges, including both the necessity of specialized calibrations and the effects of charge transfer inefficiency. We provide a stark demonstration of the effects of charge transfer inefficiency on resultant data products, which when uncorrected, result in uncertain photometry, elongation of morphology in the readout direction, and loss of faint sources far from the readout. We agree with the STScI recommendation that future UVIS observations that require very sensitive measurements use the instrument's capability to add background light through a 'post-flash'. Preliminary results on number counts of UV-selected galaxies and morphology of galaxies at z ~ 1 are presented. We find that the number density of UV dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.''2 radius aperture depending on filter and observing epoch.

  2. Design and development of a position-sensitive γ-camera for SPECT imaging based on PCI electronics

    Microsoft Academic Search

    V. Spanoudaki; N. D. Giokaris; A. Karabarbounis; G. K. Loudos; D. Maintas; C. N. Papanicolas; P. Paschalis; E. Stiliaris

    2004-01-01

    A position-sensitive γ-camera is currently being designed at IASA. This camera will be used experimentally (in development mode) to obtain integrated knowledge of its operation and to improve its performance, in parallel with an existing camera that has shown very good performance in phantom, small-animal, and SPECT studies and is currently being tested for clinical applications.

  3. Distributed Image-Based 3-D Localization of Camera Sensor Networks Roberto Tron and Rene Vidal

    E-print Network

    …equipped with a camera, and assume that the nodes are able to communicate through a wireless interface… are applicable to other localization problems in a more general setting. We also provide synthetic simulations… are synchronized) and that the communications between cameras are lossless. Under these assumptions, we consider…

  4. Effects of extended camera baseline and image magnification on target detection time and target recognition with a stereoscopic TV system

    NASA Astrophysics Data System (ADS)

    Spain, E. H.

    1986-02-01

    Accomplishments of the first year of a two-year investigation of remote presence with a stereoscopic TV display are summarized. A dual-channel video recording and playback system was constructed, consisting of a synchronized pair of optical video disk recorders under computer control, used to record stereoscopic still video frame-pairs in the field and play them back in a controlled laboratory environment for visual performance data collection. Three experiments were conducted to assess the independent effects of camera interaxial separation, image magnification, and their simultaneous interaction on target detection and recognition. The results of these experiments suggested that both image magnification and increases in camera interaxial separation are useful strategies for enhancing visual performance. The interaction of these two factors did not disrupt performance. Recommendations are made for the conduct of subsequent studies and for the design of stereo TV displays for terrestrial reconnaissance applications.

  5. High resolution Ly-alpha images obtained with the transition region camera (TRC): A comparison with H-alpha observations

    NASA Astrophysics Data System (ADS)

    Wiik, J. E.; Foing, B. H.; Martens, P.; Fleck, B.; Schmieder, B.

    Comparing high spatial resolution (approximately 1 arcsec) images observed in Ly-alpha with the Transition Region Camera (TRC) and in H-alpha at Sacramento Peak and Meudon Observatories, we notice that some structures are well correlated in the two lines (plages), while others are less correlated (chromospheric network, filaments). This is an indication of the inhomogeneous distribution of physical parameters in these structures.

  6. Weissenberg camera for macromolecules with imaging plate data collection system at the Photon Factory: Present status and future plan (invited)

    Microsoft Academic Search

    N. Sakabe; S. Ikemizu; K. Sakabe; T. Higashi; A. Nakagawa; N. Watanabe; S. Adachi; K. Sasaki

    1995-01-01

    A Weissenberg camera for macromolecules with an imaging plate data collection system at the BL6A and BL18B stations in the Photon Factory is introduced and evaluated. The special feature of these systems is that they are well matched both to SR X-rays and to protein crystallography. This system is user-friendly and can collect a large amount of data to higher resolution from the crystal with large

  7. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images

    Microsoft Academic Search

    Randolph L. Kirk; Elpitha Howington-Kraus; Bonnie Redding; Donna Galuszka; Trent M. Hare; Brent A. Archinal; Laurence A. Soderblom; Janet M. Barrett

    2003-01-01

    We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and

  8. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, 4DMI alone cannot track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement through combined use of the real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  9. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D sensors and digital single-lens reflex (DSLR) cameras. A systematic data processing procedure for integrating these two kinds of devices to generate three-dimensional point clouds of indoor environments is also developed and described. In the developed system, DSLR cameras are used to bridge the Kinects and provide a more accurate ray intersection condition, which takes advantage of the higher resolution and image quality of the DSLR cameras. Structure from Motion (SFM) reconstruction is used to link and merge multiple Kinect point clouds and dense point clouds (from DSLR color images) to generate initial integrated point clouds. Then, bundle adjustment is used to resolve the exterior orientation (EO) of all images. Those exterior orientations are used as the initial values to combine these point clouds at each frame into the same coordinate system using the Helmert (seven-parameter) transformation. Experimental results demonstrate that the design of the data acquisition system and the data processing procedure can generate dense and fully colored point clouds of indoor environments successfully, even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects as well as coordinates of pre-set independent check points against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.
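The Helmert (seven-parameter) transformation used to merge the per-frame point clouds can be sketched as below. The function name and the use of full (rather than linearized) rotation matrices are illustrative assumptions; the paper estimates the parameters from the bundle-adjusted exterior orientations.

```python
import numpy as np

def helmert(points, tx, ty, tz, rx, ry, rz, scale):
    """Apply a seven-parameter (Helmert) transformation (sketch).

    points  : (N, 3) array of 3-D points.
    tx..tz  : translation; rx..rz : rotation angles (radians); scale : scale.
    Maps one frame's point cloud into the common coordinate system.
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    return scale * points @ R.T + np.array([tx, ty, tz])

pts = np.array([[1.0, 0.0, 0.0]])
# 90-degree rotation about z, unit scale, translation by (1, 2, 3)
out = helmert(pts, 1, 2, 3, 0, 0, np.pi / 2, 1.0)
print(np.round(out, 6))  # approximately [[1, 3, 3]]
```

With seven parameters per frame, a handful of well-distributed tie points (each contributing three equations) suffices to estimate the transformation by least squares.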

  10. First demonstration of neutron resonance absorption imaging using a high-speed video camera in J-PARC

    NASA Astrophysics Data System (ADS)

    Kai, T.; Segawa, M.; Ooi, M.; Hashimoto, E.; Shinohara, T.; Harada, M.; Maekawa, F.; Oikawa, K.; Sakai, T.; Matsubayashi, M.; Kureta, M.

    2011-09-01

    The neutron resonance absorption imaging technique with a high-speed video camera was successfully demonstrated at the beam line NOBORU, J-PARC. Pulsed neutrons were observed through several kinds of metal foils as a function of neutron time-of-flight by utilizing a high-speed neutron radiography system. A set of time-dependent images was obtained for each neutron pulse, and more than a thousand sets of images were recorded in total. The images with the same time frame were summed after the measurement. Then the authors obtained a set of images having enhanced contrast of sample foils around the resonance absorption energies of cobalt (132 eV), cadmium (28 eV), tantalum (4.3 and 10 eV), gold (4.9 eV) and indium (1.5 eV).
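The post-measurement summation of images sharing the same time frame reduces to a single sum over the pulse axis of the recorded stacks. The array names and shapes below are illustrative:

```python
import numpy as np

def sum_time_frames(pulse_stacks):
    """Sum same-time-of-flight frames across neutron pulses (sketch).

    pulse_stacks : (n_pulses, n_frames, H, W) array of per-pulse image
    stacks.  Frames with the same index share the same time-of-flight bin
    (hence the same neutron energy), so summing them across pulses boosts
    the statistics of each energy slice and enhances resonance contrast.
    """
    return pulse_stacks.sum(axis=0)

# Toy data: 1000 pulses, 4 TOF frames, 2x2 detector
stacks = np.ones((1000, 4, 2, 2))
summed = sum_time_frames(stacks)
print(summed.shape, summed[0, 0, 0])  # (4, 2, 2) 1000.0
```

Each slice of `summed` then corresponds to one neutron energy, e.g. the frame whose TOF matches the 4.9 eV gold resonance.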

  11. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    PubMed

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor, based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605
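A minimal sketch of the first idea above: estimate a non-stationary depth-noise level from luminance. The MAD estimate on Haar-like detail coefficients and the 1/sqrt(luminance) scaling are simplifying assumptions; the paper's actual estimator and its fuzzy membership functions are more elaborate.

```python
import numpy as np

def depth_noise_map(depth, luminance, eps=1e-6):
    """Luminance-guided noise-level map for a depth image (sketch).

    A global noise level is estimated from finest-scale detail
    coefficients (a Haar-like horizontal difference) via the robust MAD
    estimator; the per-pixel level is then scaled by 1/sqrt(luminance),
    since luminance tracks the reflected optical power and hence the
    per-pixel SNR of a time-of-flight sensor.
    """
    detail = (depth[:, 1::2] - depth[:, ::2]) / np.sqrt(2)  # Haar details
    sigma_global = np.median(np.abs(detail)) / 0.6745       # MAD -> sigma
    rel = np.sqrt(np.mean(luminance) / (luminance + eps))
    return sigma_global * rel

rng = np.random.default_rng(0)
lum = np.full((64, 64), 4.0)
lum[:, 32:] = 1.0                           # right half: 4x less light
depth = rng.normal(0.0, 1.0, (64, 64))      # toy noisy depth frame
noise = depth_noise_map(depth, lum)
# The dimmer half of the image is assigned a higher noise level
print(bool(noise[0, 0] < noise[0, 40]))
```

A wavelet shrinkage step would then threshold each depth coefficient against the local noise level from this map.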

  12. Low cost referenced luminescent imaging of oxygen and pH with a 2-CCD colour near infrared camera.

    PubMed

    Ehgartner, Josef; Wiltsche, Helmar; Borisov, Sergey M; Mayr, Torsten

    2014-10-01

    A low cost imaging set-up for optical chemical sensors based on NIR-emitting dyes is presented. It is based on a commercially available 2-CCD colour near infrared camera, LEDs and tailor-made optical sensing materials for oxygen and pH. The set-up extends common ratiometric RGB imaging based on the red, green and blue channels of colour cameras by an additional NIR channel. The hardware and software of the camera were adapted to perform ratiometric imaging. A series of new planar sensing foils were introduced to image oxygen, pH and both parameters simultaneously. The NIR-emitting indicators used are based on benzoporphyrins and aza-BODIPYs for oxygen and pH, respectively. Moreover, a wide dynamic range oxygen sensor is presented. It allows accurate imaging of oxygen from trace levels up to ambient air concentrations. The imaging set-up in combination with the normal range ratiometric oxygen sensor showed a resolution of 4-5 hPa at low oxygen concentrations (<50 hPa) and 10-15 hPa at ambient air oxygen concentrations; the trace range oxygen sensor (<20 hPa) revealed a resolution of about 0.5-1.8 hPa. The working range of the pH-sensor was in the physiological region from pH 6.0 up to pH 8.0 and showed an apparent pKa-value of 7.3 with a resolution of about 0.1 pH units. The performance of the dual parameter oxygen/pH sensor was comparable to the single analyte pH and normal range oxygen sensors. PMID:25096329
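The ratiometric readout underlying such a set-up reduces to a pixelwise channel ratio. This sketch uses made-up channel values and shows the principle, not the authors' calibration:

```python
import numpy as np

def ratiometric_image(indicator, reference, eps=1e-9):
    """Ratiometric readout for a colour + NIR camera (sketch).

    indicator : channel carrying the analyte-sensitive NIR emission;
    reference : an analyte-insensitive channel (e.g. one of the RGB
    channels of the 2-CCD camera).  The pixelwise ratio cancels
    inhomogeneous illumination and variations in dye loading.
    """
    return indicator / (reference + eps)

ind = np.array([[10.0, 20.0], [30.0, 40.0]])   # toy NIR channel
ref = np.array([[5.0, 5.0], [10.0, 10.0]])     # toy reference channel
print(ratiometric_image(ind, ref))  # approximately [[2. 4.], [3. 4.]]
```

A calibration curve (e.g. Stern-Volmer for oxygen, sigmoidal for pH) then maps each ratio to an analyte concentration.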

  13. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is provided. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature, this is not a problem; for room temperature cameras, however, the technique needs adjustment. This article describes the adjustment made to the equation, and a test of this method.
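The adjusted photon-transfer idea can be sketched as below, assuming the classic two-flat/two-dark form of Janesick's method with the dark terms subtracted; the simulation values and function name are illustrative, not the article's exact equation.

```python
import numpy as np

def camera_gain(f1, f2, d1, d2):
    """Photon-transfer gain estimate with dark-frame correction (sketch).

    f1, f2 : two flat-field frames at identical illumination;
    d1, d2 : two dark frames at the same exposure time.
    Differencing frame pairs removes fixed-pattern noise; subtracting the
    dark means and the dark-difference variance removes the dark-current
    contribution that the uncorrected equation ignores.
    Returns gain in electrons per digital number (e-/DN).
    """
    signal = f1.mean() + f2.mean() - d1.mean() - d2.mean()
    var = np.var(f1 - f2) - np.var(d1 - d2)
    return signal / var

rng = np.random.default_rng(1)
true_gain = 2.0                                     # e-/DN (simulated)
electrons = rng.poisson(20000.0, size=(2, 512, 512))
dark_e = rng.poisson(400.0, size=(2, 512, 512))     # dark-current electrons
f1, f2 = (electrons + dark_e) / true_gain           # light frames incl. dark
d1, d2 = dark_e / true_gain
print(round(camera_gain(f1, f2, d1, d2), 1))  # recovers ~2.0
```

The estimate relies on photon shot noise being Poisson, so variance in electrons equals mean signal in electrons; dividing out the dark terms keeps that relation valid at room temperature.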

  14. LaBr3:Ce small FOV gamma camera with excellent energy resolution for multi-isotope imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Fabbri, A.; Cinti, M. N.; Orlandi, C.; Pellegrini, R.; Scafè, R.; Artibani, M.

    2015-06-01

    The simultaneous administration of radiopharmaceuticals labeled with more than one radioisotope is becoming of increasing interest in clinical practice. Because the photon energies of the utilized radioisotopes can be very close (less than 15% difference), a gamma camera with adequate energy resolution is required. The availability of scintillation crystals with high light yield, such as lanthanum tri-bromide (LaBr3:Ce), is particularly appealing for these applications. In this work, a new small field of view gamma camera prototype is presented, based on a planar LaBr3:Ce scintillation crystal with the surface treatment typical of spectrometric devices, in order to enhance energy resolution. The crystal has a round shape and has been optically coupled with a position sensitive photomultiplier tube with high quantum efficiency. The presented gamma camera shows outstanding energy resolution in the investigated energy range (32–662 keV). This performance has been obtained through the application of uniformity correction on the raw data, necessary due to the presence of a position sensitive phototube, characterized by a spread of anodic gain values. In spite of position linearity degradation at crystal edges, due to the reflective treatment of the surfaces, intrinsic spatial resolution values are satisfactory over the useful field of view. The characterization of the presented gamma camera, based on a continuous LaBr3:Ce scintillation crystal with reflective surfaces, indicates good performance in multi-isotope imaging due to the excellent energy resolution, also in comparison with similar detectors.

  15. An Interactive Camera Placement and Visibility Simulator for Image-Based VR Applications

    E-print Network

    Welch, Greg

    Keywords: camera placement, computer vision, coverage, geometry, interactive simulation, occlusion, reconstruction. …in situations such as trauma surgery, where a non-specialist on-site trauma helper might receive valuable real…

  16. Camera Projector

    NSDL National Science Digital Library

    Oakland Discovery Center

    2011-01-01

    In this activity (posted on March 14, 2011), learners follow the steps to construct a camera projector to explore lenses and refraction. First, learners use relatively simple materials to construct the projector. Then, learners discover that lenses project images upside down and backwards. They explore this phenomenon by creating their own slides (must be drawn upside down and backwards to appear normally). Use this activity to also introduce learners to spherical aberration and chromatic aberration.

  17. 3D displacement measurement with a single camera based on digital image correlation technique

    Microsoft Academic Search

    Wei Sun; E.-liang Dong; Xiaoyuan He

    2007-01-01

    In this paper, a simple speckle technique utilizing a charge-coupled device (CCD) camera and frequency and spatial domain combined correlation, is developed to measure 3D rigid-body displacement of an object. Rigid-body displacement in an arbitrary direction in space consists of both in- and out-of-plane components. On the basis of the pinhole perspective camera model, the presence of in-plane component of

  18. AN OPERATIONAL SOLUTION TO ACQUIRE MULTISPECTRAL IMAGES WITH STANDARD LIGHT CAMERAS: CHARACTERIZATION AND ACQUISITION GUIDELINES

    Microsoft Academic Search

    S. Labbé; B. Roux; A. Bégué; V. Lebourgeois; B. Mallavan

    In order to develop a low-cost and easy to implement technical solution to map inside-field spatial variability, and to explore its relationship with crop conditions, several experiments were conducted using ultra-light aircraft and Unmanned Aerial Vehicles (UAV) equipped with visible and infrared cameras. The sensors consisted of a ramp of 3 small-format digital cameras (EOS 350D, Canon®):

  19. On-Line Detecting Size and Color of Fruit by Fusing Information from Images of Three Color Camera Systems

    NASA Astrophysics Data System (ADS)

    Zou, Xiaobo; Zhao, Jiewen

    In common systems, fruits placed on rollers rotate while moving and are observed from above by one camera. In this case, the parts of the fruit near the points where the rotation axis crosses its surface (defined as rotational poles) are not observed. Most researchers did not consider how to manage several images representing the whole surface of the fruit; each image was treated separately, and the fruit was classified according to the worst result of the set of representative images. A machine vision system based on 3 color cameras is presented in this article for the online detection of the size and color of fruits. Nine images covering the whole surface of an apple are acquired at three successive positions by the system. Solutions for processing the sequential images' results continuously and saving them into a database promptly are provided. To fuse the information of the nine images, size determination was solved by a multi-linear regression method based on the nine apple images' longitudinal and lateral radii; the correlation coefficients between the sorting machine and manual measurement are 0.919 and 0.896 for the training and test sets. HSI (hue-saturation-intensity) values of the nine images were used for apple color discrimination, and the hue range 0°-80° was divided into 8 equal intervals. After counting the pixels in each interval, each total divided by 100 was treated as an apple color feature, giving 8 color features. PCA and an ANN were used to analyze the 8 color features. There is a little overlap in the three-dimensional PCA results. An ANN was used to build the relationship between the 8 color features and 4 apple classes, with classification accuracy of 88%/85.6% for the training/test set.
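The 8-bin hue-feature extraction described above can be sketched directly; the function name `hue_features` and the sample hue values are illustrative:

```python
import numpy as np

def hue_features(hue_deg):
    """Hue-histogram colour features for one apple (sketch).

    hue_deg : 1-D array of hue values (degrees) pooled from the nine
    images of one apple.  The 0-80 degree hue range is divided into 8
    equal 10-degree intervals; per the abstract, each bin's pixel count
    divided by 100 is used as a feature, giving 8 features per fruit.
    """
    counts, _ = np.histogram(hue_deg, bins=8, range=(0.0, 80.0))
    return counts / 100.0

hues = np.array([5.0, 15.0, 15.5, 25.0, 75.0])   # toy pooled hue values
print(hue_features(hues))  # [0.01 0.02 0.01 0.   0.   0.   0.   0.01]
```

These 8 features per apple are what the abstract's PCA projection and ANN classifier consume.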

  20. UVUDF: Ultraviolet Imaging of the Hubble Ultra Deep Field with Wide-Field Camera 3

    NASA Astrophysics Data System (ADS)

    Teplitz, Harry I.; Rafelski, Marc; Kurczynski, Peter; Bond, Nicholas A.; Grogin, Norman; Koekemoer, Anton M.; Atek, Hakim; Brown, Thomas M.; Coe, Dan; Colbert, James W.; Ferguson, Henry C.; Finkelstein, Steven L.; Gardner, Jonathan P.; Gawiser, Eric; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel J.; Lee, Kyoung-Soo; de Mello, Duilia F.; Ravindranath, Swara; Ryan, Russell E.; Siana, Brian D.; Scarlata, Claudia; Soto, Emmaris; Voyer, Elysse N.; Wolfe, Arthur M.

    2013-12-01

    We present an overview of a 90 orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5; (2) probe the evolution of massive galaxies by resolving sub-galactic units (clumps); (3) examine the escape fraction of ionizing radiation from galaxies at z ~ 2-3; (4) greatly improve the reliability of photometric redshift estimates; and (5) measure the star formation rate efficiency of neutral atomic-dominated hydrogen gas at z ~ 1-3. In this overview paper, we describe the survey details and data reduction challenges, including both the necessity of specialized calibrations and the effects of charge transfer inefficiency. We provide a stark demonstration of the effects of charge transfer inefficiency on resultant data products, which when uncorrected, result in uncertain photometry, elongation of morphology in the readout direction, and loss of faint sources far from the readout. We agree with the STScI recommendation that future UVIS observations that require very sensitive measurements use the instrument's capability to add background light through a "post-flash." Preliminary results on number counts of UV-selected galaxies and morphology of galaxies at z ~ 1 are presented. We find that the number density of UV dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.''2 radius aperture depending on filter and observing epoch.
Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #12534.

  1. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades.

    PubMed

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed. PMID:25554314

  2. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades

    NASA Astrophysics Data System (ADS)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  3. Autoguiding on the 20-inch Telescope The direct imaging camera on the telescope has a second, smaller, CCD that can be used to

    E-print Network

    Glashausser, Charles

    Autoguiding on the 20-inch Telescope: The direct imaging camera on the telescope has a second, smaller, CCD that can be used to autoguide the telescope while exposing an image on the main CCD. Use the imager CCD and the guider CCD, so you can move the telescope to bring a good (i.e. as bright as possible) …

  4. A sniffer-camera for imaging of ethanol vaporization from wine: the effect of wine glass shape.

    PubMed

    Arakawa, Takahiro; Iitani, Kenta; Wang, Xin; Kajiro, Takumi; Toma, Koji; Yano, Kazuyoshi; Mitsubayashi, Kohji

    2015-04-21

    A two-dimensional imaging system (Sniffer-camera) for visualizing the concentration distribution of ethanol vapor emitting from wine in a wine glass has been developed. This system provides image information of ethanol vapor concentration using chemiluminescence (CL) from an enzyme-immobilized mesh. This system measures ethanol vapor concentration as CL intensities from luminol reactions induced by alcohol oxidase and a horseradish peroxidase (HRP)-luminol-hydrogen peroxide system. Conversion of ethanol distribution and concentration to two-dimensional CL was conducted using an enzyme-immobilized mesh containing an alcohol oxidase, horseradish peroxidase, and luminol solution. The temporal changes in CL were detected using an electron multiplier (EM)-CCD camera and analyzed. We selected three types of glasses (a wine glass, a cocktail glass, and a straight glass) to determine the differences in ethanol emission caused by the shape effects of the glass. The emission measurements of ethanol vapor from wine in each glass were successfully visualized, with pixel intensity reflecting ethanol concentration. Of note, a characteristic ring shape attributed to high alcohol concentration appeared near the rim of the wine glass containing 13 °C wine. Thus, the alcohol concentration in the center of the wine glass was comparatively lower. The Sniffer-camera was demonstrated to be sufficiently useful for non-destructive ethanol measurement for the assessment of food characteristics. PMID:25756409

  5. On-Line Detecting Size and Color of Fruit by Fusing Information from Images of Three Color Camera Systems

    NASA Astrophysics Data System (ADS)

    Zou, Xiaobo; Zhao, Jiewen

    On common systems, fruits placed on rollers rotate while moving and are observed from above by a single camera. In this case, the parts of the fruit near the points where the rotation axis crosses its surface (defined as rotational poles) are not observed. Most researchers have not considered how to manage several images representing the whole surface of the fruit; each image was treated separately, and the fruit was classified according to the worst result of the set of representative images. A machine vision system based on three color cameras is presented in this article for online detection of the size and color of fruit. Nine images covering the whole surface of an apple are acquired at three successive positions by the system. Solutions for processing the sequential images' results continuously and saving them promptly into a database are provided. To fuse the information of the nine images, size determination was solved by a multi-linear regression on the longitudinal and lateral radii of the nine apple images; the correlation coefficients between the sorting machine and manual measurement were 0.919 and 0.896 for the training and test sets, respectively. HSI (hue-saturation-intensity) values of the nine images were used for apple color discrimination: the hue range 0°-80° was divided into 8 equal intervals, the pixels in each interval were counted, and each total divided by 100 was treated as an apple color feature, yielding 8 color features. PCA and an ANN were used to analyze the 8 color features; the classes overlap slightly in the three-dimensional PCA space. An ANN was used to relate the 8 color features to 4 apple classes, with classification accuracies of 88% and 85.6% for the training and test sets.
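
    The 8-interval hue-histogram feature described above can be sketched in a few lines. The following is a hedged illustration, assuming RGB values normalized to [0, 1] and the paper's 0°-80° hue range divided into eight 10° bins with counts scaled by 1/100; function and variable names are ours, not the authors'.

```python
import colorsys

def hue_features(pixels, n_bins=8, hue_max_deg=80.0):
    """Count pixels whose hue falls in each of n_bins equal intervals
    over [0, hue_max_deg) degrees, then scale each count by 1/100 as
    in the paper's feature definition (names here are illustrative)."""
    counts = [0] * n_bins
    width = hue_max_deg / n_bins          # 10 degrees per bin
    for r, g, b in pixels:                # r, g, b in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        deg = h * 360.0
        if 0.0 <= deg < hue_max_deg:
            counts[int(deg // width)] += 1
    return [c / 100.0 for c in counts]

# a red pixel (hue 0°) and an orange-red pixel (hue 15°)
features = hue_features([(1.0, 0.0, 0.0), (1.0, 0.25, 0.0)])
```

    A green pixel (hue 120°) would be ignored entirely, mirroring the paper's restriction of apple hues to the 0°-80° range.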

  6. Mars Exploration Rover Engineering Cameras

    Microsoft Academic Search

    J. N. Maki; J. F. Bell; K. E. Herkenhoff; S. W. Squyres; A. Kiely; M. Klimesh; M. Schwochert; T. Litwin; R. Willson; A. Johnson; M. Maimone; E. Baumgartner; A. Collins; M. Wadsworth; S. T. Elliot; A. Dingizian; D. Brown; E. C. Hagerott; L. Scherr; R. Deen; D. Alexander; J. Lorre

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the

  7. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (µAVS2) for real-time image processing. Truly standalone, µAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on µAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. µAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, µAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, µAVS2 can easily be reconfigured for other prosthetic systems. Testing of µAVS2 with actual retinal implant carriers is envisioned in the near future.
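
    The user-defined linear sequential-loop filter design can be sketched minimally: each filter consumes the previous filter's output, so only one working frame is held at a time. This is an illustrative sketch, not the actual µAVS2 code; the two filters shown are hypothetical examples operating on a 2D list of gray levels.

```python
def run_pipeline(frame, filters):
    """Apply user-selected filters in a linear sequential loop: each
    filter consumes the previous filter's output, so only one working
    frame is held in memory at a time (a sketch of the sequential-loop
    design described in the abstract, not the actual µAVS2 code)."""
    for f in filters:
        frame = f(frame)
    return frame

# hypothetical example filters on a 2D list of 8-bit gray levels
invert = lambda img: [[255 - p for p in row] for row in img]
threshold = lambda img: [[255 if p > 127 else 0 for p in row] for row in img]

result = run_pipeline([[10, 200]], [invert, threshold])
```

    Reordering the filter list changes the output, which is exactly the user-defined-order behavior the abstract describes.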

  8. High-contrast imaging in wide spectral band with a self-coherent camera and achromatic coronagraphs

    NASA Astrophysics Data System (ADS)

    Delorme, J. R.; Galicher, R.; Baudoz, P.; Rousset, G.; Mazoyer, J.; N'Diaye, M.; Dohlen, K.; Caillat, A.

    2014-07-01

    Direct imaging of exoplanets is very attractive but challenging. It requires high angular resolution and very high-contrast imaging. One solution is the use of coronagraphs behind the adaptive optics of large telescopes. Unfortunately, the optics of space telescopes and ground telescopes introduce quasi-static aberrations which strongly limit the quality of the final image, and a dedicated stage of adaptive optics is required. We proposed a self-coherent camera (SCC) in 2006 and we obtained contrast levels of about 2×10-8 at a few λ0/D at 638 nm in the laboratory. In this paper, we explain how to achromatize the SCC. We present laboratory performance in a wide spectral band (about 5-10% bandpass).

  9. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high frame rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compressions as low as 10% was also demonstrated.
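
    The measurement-and-recovery scheme (Hadamard patterns applied in scrambled order, recovered with a fast Hadamard transform) can be illustrated on a toy 1-D signal. This is a sketch under our own naming, not the authors' implementation; it exploits the fact that the unnormalized Walsh-Hadamard transform applied twice returns the signal scaled by its length.

```python
import random

def fwht(a):
    """Fast Walsh-Hadamard transform (unnormalized, length a power of 2)."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

n = 8
x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # toy "pressure" samples

# Scrambled-Hadamard measurement: permute sample order, then each
# single-pixel measurement is one Hadamard-transform coefficient.
random.seed(0)
perm = list(range(n)); random.shuffle(perm)
measurements = fwht([x[p] for p in perm])

# Recovery: apply the same transform again, divide by n, unscramble.
scrambled = [v / n for v in fwht(measurements)]
recovered = [0.0] * n
for k, p in enumerate(perm):
    recovered[p] = scrambled[k]
```

    A compressed-sensing recovery would instead keep only a subset of `measurements` and solve a sparsity-regularized inverse problem, which is the 10%-compression case the abstract reports.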

  10. Infrared imaging spectrometry by the use of bundled chalcogenide glass fibers and a PtSi CCD camera

    NASA Astrophysics Data System (ADS)

    Saito, Mitsunori; Kikuchi, Katsuhiro; Tanaka, Chinari; Sone, Hiroshi; Morimoto, Shozo; Yamashita, Toshiharu T.; Nishii, Junji

    1999-10-01

    A coherent fiber bundle for infrared image transmission was prepared by arranging 8400 chalcogenide (As-S) glass fibers. The fiber bundle, 1 m in length, is transmissive in the infrared spectral region of 1-6 µm. A remote spectroscopic imaging system was constructed with the fiber bundle and an infrared PtSi CCD camera. The system was used for real-time observation (frame time: 1/60 s) of gas distribution. Infrared light from a SiC heater was delivered to a gas cell through a chalcogenide fiber, and the transmitted light was observed through the fiber bundle. A band-pass filter was used for the selection of gas species. A He-Ne laser of 3.4 µm wavelength was also used for the observation of hydrocarbon gases. Gases bursting from a nozzle were observed successfully with the remote imaging system.

  11. Experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype for partial breast irradiation

    SciTech Connect

    Ravi, Ananth; Caldwell, Curtis B.; Pignol, Jean-Philippe [Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M4N 3M5 (Canada); Department of Medical Biophysics and Department of Medical Imaging, University of Toronto, Toronto, Ontario, M4N 3M5 (Canada) and Department of Medical Physics, Sunnybrook Health Sciences Centre, Toronto, Ontario, M4N 3M5 (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M4N 3M5 (Canada) and Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario, M4N 3M5 (Canada)

    2008-06-15

    Previously, our team used Monte Carlo simulation to demonstrate that a gamma camera could potentially be used as an online image guidance device to visualize seeds during permanent breast seed implant procedures. This could allow for intraoperative correction if seeds have been misplaced. The objective of this study is to describe an experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype. The OGIPSI device is intended to be able to detect a seed misplacement of 5 mm or more within an imaging time of 2 min or less. The device was constructed by fitting a custom built brass collimator (16 mm height, 0.65 mm hole pitch, 0.15 mm septal thickness) on a 64 pixel linear array CZT detector (eValuator-2000, eV Products, Saxonburg, PA). Two-dimensional projection images of seed distributions were acquired by the use of a digitally controlled translation stage. Spatial resolution and noise characteristics of the detector were measured. The ability and time needed for the OGIPSI device to image the seeds and to detect cold spots was tested using an anthropomorphic breast phantom. Mimicking a real treatment plan, a total of 52 103Pd seeds of 65.8 MBq each were placed on three different layers at appropriate depths within the phantom. The seeds were reliably detected within 30 s with a median error in localization of 1 mm. In conclusion, an OGIPSI device can potentially be used for image guidance of permanent brachytherapy applications in the breast and, possibly, other sites.

  12. Features of application of image converter cameras for research on lightning and discharges in long air gaps

    NASA Astrophysics Data System (ADS)

    Lebedev, Vitaly B.; Feldman, Grigory G.; Gorin, Boris N.; Shcherbacov, Yuri V.; Syssoev, Vladimir S.; Rakov, Vladimir A.

    2005-03-01

    The present report generalizes the materials of publications /1-3/; /1/ and /3/ were presented at the corresponding symposiums only as poster reports and were not widely discussed. Creation of reliable physical and engineering models of the lightning leader-return stroke sequence (L-RS) and of the attachment process is hampered by a lack of actual information on the optical picture of the low-luminosity streamer structures of lightning. Cameras based on image converter tubes (ICT) /4/ serve as an alternative to traditional optical-mechanical means of recording a lightning image. Such cameras made it possible to obtain new results when investigating the streamer processes of a long spark, which allowed a set of hypotheses relating to the lightning leader process to be formulated /5-7/. Here we give the characteristics of the image converter instrumentation complex adapted to work with lightning and long sparks, and present the results of its tests at the All-Russian Electrotechnical Institute (VEI) named after V.I. Lenin, recording long sparks at the open high-voltage stand in Istra (near Moscow).

  13. Camera based texture mapping: 3D applications for 2D images

    E-print Network

    Bowden, Nathan Charles

    2005-08-29

    The purpose of this artist's research is the development of an original method of parenting perspective projections to three-dimensional (3D) cameras, specifically tailored to result in 3D matte paintings. Research includes the demonstration of techniques...

  14. Low-cost camera modifications and methodologies for very-high-resolution digital images

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...

  15. A prototype small CdTe gamma camera for radioguided surgery and other imaging applications

    Microsoft Academic Search

    Makoto Tsuchimochi; Harumi Sakahara; Kazuhide Hayama; Minoru Funaki; Ryoichi Ohno; Takashi Shirahata; Terje Orskaug; Gunnar Maehlum; Koki Yoshioka; Einar Nygard

    2003-01-01

    Gamma probes have been used for sentinel lymph node biopsy in melanoma and breast cancer. However, these probes can provide only radioactivity counts and variable pitch audio output based on the intensity of the detected radioactivity. We have developed a small semiconductor gamma camera (SSGC) that allows visualisation of the size, shape and location of the target tissues. This study

  16. Direct image alignment of projector-camera systems with planar surfaces

    Microsoft Academic Search

    Samuel Audet; Masatoshi Okutomi; Masayuki Tanaka

    2010-01-01

    Projector-camera systems use computer vision to analyze their surroundings and display feedback directly onto real world objects, as embodied by spatial augmented reality. To be effective, the display must remain aligned even when the target object moves, but the added illumination causes problems for traditional algorithms. Current solutions consider the displayed content as interference and largely depend on channels orthogonal

  17. Robotic Arm Camera Image of the South Side of the Thermal and Evolved-Gas Analyzer (Door TA4

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Thermal and Evolved-Gas Analyzer (TEGA) instrument aboard NASA's Phoenix Mars Lander is shown with one set of oven doors open and dirt from a sample delivery. After the 'seventh shake' of TEGA, a portion of the dirt sample entered the oven via a screen for analysis. This image was taken by the Robotic Arm Camera on Sol 18 (June 13, 2008), or 18th Martian day of the mission.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  18. Camera Obscura

    NSDL National Science Digital Library

    Mr. Engelman

    2008-10-28

    Before photography was invented there was the camera obscura, useful for studying the sun, as an aid to artists, and for general entertainment. What is a camera obscura and how does it work? Camera = Latin for room; Obscura = Latin for dark. But what is a Camera Obscura? The Magic Mirror of Life: What is a camera obscura? A French drawing camera with supplies. Drawing camera obscuras with the lens at the top. Read the first three paragraphs of this article. Under the portion Early Observations and Use in Astronomy you will find the answers to the ...

  19. Spartan Infrared Camera, a High-Resolution Imager for the SOAR Telescope: Design, Tests, and On-Telescope Performance

    NASA Astrophysics Data System (ADS)

    Loh, Edwin D.; Biel, Jason D.; Davis, Michael W.; Laporte, Renée; Loh, Owen Y.; Verhanovitz, Nathan J.

    2012-04-01

    The Spartan Infrared Camera provides tip-tilt corrected imaging for the SOAR Telescope in the 900-2500 nm spectral range with four 2048 × 2048 HAWAII-2 detectors. The camera has two plate scales: high-resolution (40 mas pixel-1) for future diffraction-limited sampling, and wide-field (66 mas pixel-1) to cover a 5 × 5 field, over which tip-tilt correction is substantial. The design is described in detail. Except for the CaF2 field-flattening lenses, the optics are aluminum mirrors, chosen to thermally match the aluminum cryogenic-optical box in which the optics mount. The design minimizes the tilt of the optics as the instrument rotates on the Nasmyth port of the telescope. Two components of the gravitational torque on an optic are eliminated by symmetry, and the third component is minimized by balancing the optic. The optics (including the off-axis aspherical mirrors) were aligned with precise metrology. For the detector assembly, Henein pivots are used to provide frictionless, thermally compliant, lubricant-free, and thermally conducting rotation of the detectors. The heat load is 14 W for an ambient temperature of 10°C. Cooling down takes 40 hr. An activated-charcoal getter controls permeation through the large Viton O-ring for at least nine months. We present maps of the image distortion, which amount to tens of pixels at the greatest. The wavelengths of the narrowband filters shift with position on the sky. The measured Strehl ratio of the camera itself is 0.8-0.84 at 1650 nm. The width of the best image was 260 mas in unexceptional seeing, measured after tuning the telescope and before moving the telescope. Since images are normally taken after pointing the telescope to a different field, this supports the idea that image quality could be improved by better control of the focus and the shape of the primary mirror.
    The instrument has proved capable of producing images that can be stitched together to measure faint, extended features and to produce photometry that agrees internally to better than 0.01 mag and is well calibrated to 2MASS stars in the range 12 < K < 16.

  20. [Performance of gamma camera collimators used for single photon emission computed tomography imaging with 123I-isopropyl iodoamphetamine].

    PubMed

    Saegusa, K; Uno, K; Arimizu, N; Iba, S; Uematsu, S

    1986-05-01

    123I produced by the 124Te(p,2n)123I reaction is contaminated with 124I (less than 5%) and 126I (less than 0.3%). High-energy photons from these radioiodine contaminants seriously compromise image quality through scattered photons and septal penetration in the collimator. Four collimators, LEAP (low energy all purpose), LEHR (low energy high resolution), MESI (medium energy, made by Siemens) and MENU (medium energy, made by Nuclear Technology), mounted on a rotating gamma camera (Siemens ZLC-7500), were examined in order to select a suitable collimator for 123I SPECT (single photon emission computed tomography) imaging. Sensitivities were measured with a plane source (5 × 5 × 0.5 cm) at the collimator face and at distances from 2 to 30 cm in air. Spatial resolutions in FWHM (full width at half maximum) and FWTM (full width at tenth maximum) were determined from line spread functions in planar and SPECT imaging. From the comparison of collimator performance with 99mTc and 123I, neither low-energy collimator was useful for 123I imaging. In the two medium-energy collimators the effect of septal penetration by the higher-energy photons was also apparent, but MENU, with its high geometric resolution, was more suitable for 123I SPECT imaging than MESI. It is also important to perform SPECT imaging with as short a radius as possible. PMID:3489250

  1. In vivo imaging of scattering and absorption properties of exposed brain using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2014-03-01

    We investigate a method to estimate spectral images of the reduced scattering coefficients and absorption coefficients of in vivo exposed brain tissue in the visible to near-infrared range (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression µs' = a·λ^(-b) are the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters, and finally the spectral images of the absorption and reduced scattering coefficients are estimated. The estimated images of absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of reduced scattering coefficients showed a broad scattering spectrum, exhibiting larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. In vivo experiments with the exposed brain of rats during cortical spreading depression (CSD) confirmed the potential of the method to evaluate both hemodynamics and changes in tissue morphology due to electrical depolarization.
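
    The scattering power law µs' = a·λ^(-b) has a closed form through any two wavelength samples, which makes the roles of a and b concrete. A hedged sketch (the paper itself estimates the parameters by Monte Carlo-based multiple regression, not this two-point fit; the sample values below are made up):

```python
import math

def fit_power_law(lam1, mu1, lam2, mu2):
    """Solve mus' = a * lam**(-b) through two (wavelength, mus') points.
    Taking logs gives a linear system: b is the negative log-log slope,
    and a follows by back-substitution."""
    b = math.log(mu1 / mu2) / math.log(lam2 / lam1)
    a = mu1 * lam1 ** b
    return a, b

# hypothetical measurements at the ends of the 500-760 nm range
a, b = fit_power_law(500.0, 2.0, 760.0, 1.2)
mus_650 = a * 650.0 ** (-b)   # interpolated value at 650 nm
```

    A positive b reproduces the reported behavior of larger scattering magnitude at shorter wavelengths.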

  2. Gamma ray camera

    SciTech Connect

    Robbins, C.D.; Wang, S.

    1980-09-09

    An Anger gamma-ray camera is improved by substituting a gamma-ray-sensitive, proximity-type image intensifier tube for the scintillator screen of the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded flat output phosphor display screen, all of the same dimension (unity image magnification), all within a grounded metallic tube envelope, with a metallic, inwardly concave input window between the scintillator screen and the collimator.

  3. Imaging of lung cancer with fluorine-18 fluorodeoxyglucose: comparison of a dual-head gamma camera in coincidence mode with a full-ring positron emission tomography system

    Microsoft Academic Search

    W. A. Weber; J. Neverve; J. Sklarek; S. I. Ziegler; P. Bartenstein; B. King; T. Treumann; A. Enterrottacher; M. Krapf; K. E. Häußinger; H. Lichte; H. W. Präuer; O. Thetter; M. Schwaiger

    1999-01-01

    Dual-head gamma cameras operated in coincidence mode are a new approach for tumour imaging using fluorine-18 fluorodeoxyglucose (FDG). The aim of this study was to assess the diagnostic accuracy of such a camera system in comparison with a full-ring positron emission tomography (PET) system in patients with lung cancer. Twenty-seven patients (1 female, 26 males, age 62±9 years) with

  4. Measurement system based on a high-speed camera and image processing techniques for testing of spacecraft subsystems

    NASA Astrophysics Data System (ADS)

    Casarosa, Gianluca; Sarti, Bruno

    2005-03-01

    In the frame of the development of new Electrical Ground Support Equipment (EGSE) for the testing phase of a spacecraft and its subsystems, the Engineering Services Section (Testing Division, Mechanical Systems Department) at the European Space Research and Technology Centre (ESTEC) has started an investigation aiming to verify the performance of a contact-less measurement system based on a high-speed camera and image processing techniques. It is to be used as an additional tool during future test campaigns at ESTEC whenever a non-intrusive GSE is required. The system is based on a Photron high-speed camera connected to its frame grabber via a Panel Link bus, with a software interface for camera control. Derivative filters and edge-detection techniques, such as the Sobel, Prewitt and Laplace operators, have been used for image enhancement and processing during the several test campaigns held to evaluate the measurement system. Detection of the movement of the specimen was improved by sticking, where possible, one or more optical targets on the surface of the test article; the targets are of two types, for ambient use and vacuum-qualified. The performance of the measuring system has been evaluated and is summarized in this paper. The limitations of the proposed tool are assessed, together with identification of the scenarios where this system would be useful and could be applied to increase the effectiveness of the verification phase of a spacecraft subsystem.
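
    As an illustration of the edge-detection step, the Sobel operator mentioned above combines two 3×3 derivative kernels into a gradient magnitude. A minimal pure-Python sketch (the actual EGSE software is not public; names and the |Gx|+|Gy| magnitude approximation are our choices):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with the 3x3 Sobel
    kernels, on a 2D list of gray levels (border pixels left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

# a vertical black-to-white edge produces a strong response along it
edges = sobel_magnitude([[0, 0, 255, 255],
                         [0, 0, 255, 255],
                         [0, 0, 255, 255]])
```

    A bright optical target stuck on the test article yields exactly this kind of high-contrast edge, which is why targets improve the motion detection described above.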

  5. Multi-frame x-ray imaging with a large area 40ps camera

    SciTech Connect

    Bell, P.M.; Kilkenny, J.D.; Landen, O.L.; Hanks, R.L.; Wiedwald, J.D. [Lawrence Livermore National Lab., CA (US); Bradley, D.K. [Univ. of Rochester, NY (US). Lab. for Laser Energetics

    1992-11-09

    The authors have developed a large-area short-pulse framing camera capable of sixteen frames and shutter times of 40 ps per frame. This is accomplished with a high-fidelity electrical circuit and an L/D = 20 microchannel plate driven by a short-pulse (80 ps) high-amplitude electrical driver. They show results of the work done to support this type of shutter time and discuss the difficulties associated with large-area high-speed shuttering.

  6. Gamma-ray imaging with a large micro-TPC and a scintillation camera

    NASA Astrophysics Data System (ADS)

    Hattori, K.; Kabuki, S.; Kubo, H.; Kurosawa, S.; Miuchi, K.; Nagayoshi, T.; Nishimura, H.; Okada, Y.; Orito, R.; Sekiya, H.; Takada, A.; Takeda, A.; Tanimori, T.; Ueno, K.

    2007-10-01

    We report on the development of a large Compton camera with full reconstruction of the Compton process, based on a prototype. This camera consists of two kinds of detectors. One is a gaseous time projection chamber (micro-TPC) for measuring the energy and track of the Compton recoil electron. The micro-TPC is based on a µ-PIC and a GEM, which are micro-pattern gas detectors (MPGDs). The size of the micro-TPC was 10 cm × 10 cm × 8 cm in the prototype, and we have enlarged it to 23 cm × 28 cm × 15 cm. The other detector is a NaI(Tl) Anger camera for measuring the scattered gamma ray. With this information, we can completely reconstruct a Compton event and determine the direction of the incident gamma ray, event by event. We succeeded in reconstructing events from incident 662 keV gamma rays. The measured angular resolutions of the "angular resolution measure" (ARM) and the "scatter plane deviation" (SPD) were 9.3° and 158° (FWHM), respectively.
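
    Given the recoil-electron energy from the micro-TPC and the scattered gamma-ray energy from the Anger camera, the Compton scattering angle follows from standard Compton kinematics, which is the core of the event-by-event reconstruction described above. A sketch assuming energies in keV (the example event is invented, not from the paper):

```python
import math

def compton_angle_deg(e_electron_keV, e_scattered_keV):
    """Scattering angle from Compton kinematics:
    cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_incident),
    where E_incident = E_electron + E_scattered when the full
    event energy is reconstructed."""
    ME = 511.0  # electron rest energy, keV
    e_in = e_electron_keV + e_scattered_keV
    cos_t = 1.0 - ME * (1.0 / e_scattered_keV - 1.0 / e_in)
    return math.degrees(math.acos(cos_t))

# hypothetical 662 keV (137Cs) event: 200 keV recoil, 462 keV scattered
angle = compton_angle_deg(200.0, 462.0)
```

    The electron track measured by the micro-TPC then fixes the scatter plane, removing the ring ambiguity that a two-energy-only Compton camera would have.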

  7. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically up to about 1 million frames s-1 at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices such as occur, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the best operating conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to characterize the camera behaviour and use this information to improve actual measurements.

  8. Cameras Would Withstand High Accelerations

    NASA Technical Reports Server (NTRS)

    Meinel, Aden B.; Meinel, Marjorie P.; Macenka, Steven A.; Puerta, Antonio M.

    1992-01-01

    Very rugged cameras with all-reflective optics proposed for use in the presence of high accelerations. Optics consist of four coaxial focusing mirrors in Cassegrain configuration. Mirrors are conics or aspherics. Optics achromatic, and imaging system overall passes light from extreme ultraviolet to far infrared. Charge-coupled-device video camera, film camera, or array of photodetectors placed at focal plane. Useful as portable imagers subject to rough handling, or instrumentation cameras mounted on severely vibrating or accelerating vehicles.

  9. A posteriori correction of camera characteristics from large image data sets.

    PubMed

    Afanasyev, Pavel; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; van Duinen, Gijs; Alewijnse, Bart; Peters, Peter J; Abrahams, Jan-Pieter; Portugal, Rodrigo V; Schatz, Michael; van Heel, Marin

    2015-01-01

    Large datasets are emerging in many fields of image processing including: electron microscopy, light microscopy, medical X-ray imaging, astronomy, etc. Novel computer-controlled instrumentation facilitates the collection of very large datasets containing thousands of individual digital images. In single-particle cryogenic electron microscopy ("cryo-EM"), for example, large datasets are required for achieving quasi-atomic resolution structures of biological complexes. Based on the collected data alone, large datasets allow us to precisely determine the statistical properties of the imaging sensor on a pixel-by-pixel basis, independent of any "a priori" normalization routinely applied to the raw image data during collection ("flat field correction"). Our straightforward "a posteriori" correction yields clean linear images as can be verified by Fourier Ring Correlation (FRC), illustrating the statistical independence of the corrected images over all spatial frequencies. The image sensor characteristics can also be measured continuously and used for correcting upcoming images. PMID:26068909

  10. A posteriori correction of camera characteristics from large image data sets

    PubMed Central

    Afanasyev, Pavel; Ravelli, Raimond B. G.; Matadeen, Rishi; De Carlo, Sacha; van Duinen, Gijs; Alewijnse, Bart; Peters, Peter J.; Abrahams, Jan-Pieter; Portugal, Rodrigo V.; Schatz, Michael; van Heel, Marin

    2015-01-01

    Large datasets are emerging in many fields of image processing including: electron microscopy, light microscopy, medical X-ray imaging, astronomy, etc. Novel computer-controlled instrumentation facilitates the collection of very large datasets containing thousands of individual digital images. In single-particle cryogenic electron microscopy (“cryo-EM”), for example, large datasets are required for achieving quasi-atomic resolution structures of biological complexes. Based on the collected data alone, large datasets allow us to precisely determine the statistical properties of the imaging sensor on a pixel-by-pixel basis, independent of any “a priori” normalization routinely applied to the raw image data during collection (“flat field correction”). Our straightforward “a posteriori” correction yields clean linear images as can be verified by Fourier Ring Correlation (FRC), illustrating the statistical independence of the corrected images over all spatial frequencies. The image sensor characteristics can also be measured continuously and used for correcting upcoming images. PMID:26068909

  11. Spatial resolution limit study of a CCD camera and scintillator based neutron imaging system according to MTF determination and analysis.

    PubMed

    Kharfi, F; Denden, O; Bourenane, A; Bitam, T; Ali, A

    2012-01-01

    Spatial resolution limit is a very important parameter of an imaging system that should be taken into consideration before examination of any object. The objective of this work is the determination of a neutron imaging system's response in terms of spatial resolution. The proposed procedure is based on establishment of the Modulation Transfer Function (MTF). The imaging system under study is based on a high-sensitivity CCD neutron camera (2×10-5 lx at f/1.4). The neutron beam used is from the horizontal beam port (H.6) of the Algerian Es-Salam research reactor. Our contribution to the MTF determination is an accurate edge-identification method and a procedure that resolves the undersampling problem of the line spread function. These methods and procedures are integrated into a MatLab code, are applicable to any other neutron imaging system, and allow one to judge the ability of a neutron imaging system to reproduce the spatial (internal detail) properties of any object under examination. PMID:22014891
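
    The MTF itself is the normalized magnitude of the Fourier transform of the line spread function, which the paper derives from an edge image. A minimal sketch of that final step only (the paper's edge-identification and undersampling-correction procedures are not reproduced here; the direct DFT below is for clarity, not speed):

```python
import cmath

def mtf_from_lsf(lsf):
    """MTF as the magnitude of the discrete Fourier transform of the
    line spread function, normalized to 1 at zero frequency; returns
    the first half of the spectrum (up to Nyquist)."""
    n = len(lsf)
    dft = [sum(lsf[k] * cmath.exp(-2j * cmath.pi * f * k / n)
               for k in range(n)) for f in range(n)]
    dc = abs(dft[0])
    return [abs(v) / dc for v in dft[:n // 2]]

# an ideal (delta-like) LSF gives a flat MTF; a broad LSF rolls off
mtf_delta = mtf_from_lsf([0.0, 0.0, 1.0, 0.0])
mtf_broad = mtf_from_lsf([1.0, 1.0, 1.0, 1.0])
```

    In practice the LSF is obtained by differentiating the oversampled edge spread function before this transform is applied.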

  12. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ with Visual Studio 2010. Experimental results show that the system realizes acquisition and display from both cameras.
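    The record does not give the retina-like pixel layout. Assuming the common log-polar model of retina-like sampling, the coordinate transformation with bilinear sub-pixel interpolation might look like the sketch below; ring count, sector count, and the minimum radius are illustrative parameters.

```python
import numpy as np

def logpolar_sample(img, n_rings=32, n_sectors=64, r_min=2.0):
    """Resample a rectangular image on a retina-like (log-polar) grid.

    Ring radii grow exponentially from r_min to the largest inscribed
    radius; samples fall between pixel centers, so bilinear
    (sub-pixel) interpolation is required.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = 2 * np.pi * np.arange(n_sectors) / n_sectors
    out = np.zeros((n_rings, n_sectors))
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            xf = cx + r * np.cos(t)
            yf = cy + r * np.sin(t)
            x0 = min(max(int(np.floor(xf)), 0), w - 2)
            y0 = min(max(int(np.floor(yf)), 0), h - 2)
            fx, fy = xf - x0, yf - y0
            out[i, j] = ((1 - fx) * (1 - fy) * img[y0, x0]
                         + fx * (1 - fy) * img[y0, x0 + 1]
                         + (1 - fx) * fy * img[y0 + 1, x0]
                         + fx * fy * img[y0 + 1, x0 + 1])
    return out
```

    The inverse mapping (log-polar back to rectangular) follows the same pattern with the roles of the grids exchanged.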

  13. A Practical Enhanced-Resolution Integrated Optical-Digital Imaging Camera (PERIODIC)

    E-print Network

    Plemmons, Robert J.

    ... An integrated array computational imaging system, dubbed PERIODIC ... sub-pixel displacements, phase, polarization, intensity, and wavelength ... polarization information for applications including dehazing ... can be integrated into a thin form-factor imaging system. Several applications ...

  14. Multi-modality imaging using a handheld gamma camera and MRI for tumor localization

    NASA Astrophysics Data System (ADS)

    Dika, Cheryl; Georgian-Smith, Dianne

    2015-03-01

    While the methods for diagnostic and screening imaging for breast cancer are numerous, each method has its limitations. Multimodality imaging has increasingly been shown to improve the effectiveness of these imaging methods. Imaging of dense breast tissue has its own set of challenges. Combining MR and gamma imaging of breast lesions may, in theory, increase sensitivity and specificity, especially with dense breasts. This experiment was designed as a proof of concept for combining MR and gamma images in a pre-clinical setting using an ex vivo bovine tissue model. Keeping the tissue in the same orientation for both imaging modalities was deemed important to increase accuracy. Using the information from the combined images could assist with localization for biopsy.

  15. Color Binarization for Complex Camera-based Images Celine Thillou and Bernard Gosselin

    E-print Network

    Dupont, Stéphane

    ... automatic color thresholding based on wavelet denoising and color clustering with K-means ... thresholding is one of several image-processing tasks and can affect the overall performance quite critically ... Usually, for thresholding color images, most papers convert the RGB image into a gray-level one and apply

  16. Noise estimation from a single image taken by specific digital camera using a priori information

    Microsoft Academic Search

    Hitomi Ito; Kenji Kamimura; Norimichi Tsumura; Toshiya Nakaguchi; Hideto Motomura; Yoichi Miyake

    2008-01-01

    It is important to estimate the noise of a digital image quantitatively and efficiently for many applications such as noise removal, compression, feature extraction, pattern recognition, and image quality assessment. For these applications, it is necessary to estimate the noise accurately from a single image. Ce et al. proposed a method using a Bayesian MAP framework for the estimation of
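    The record above is about single-image noise estimation. As a generic baseline, distinct from the Bayesian MAP method the abstract references, Immerkaer's fast estimator infers the Gaussian noise sigma by convolving with a mask that annihilates locally planar image structure:

```python
import numpy as np

def estimate_noise_sigma(img):
    """Single-image Gaussian noise estimate (Immerkaer's fast method).

    The 3x3 mask [[1,-2,1],[-2,4,-2],[1,-2,1]] cancels constant and
    linear image structure; sigma follows from the mean absolute
    response of the mask to iid Gaussian noise.
    """
    img = np.asarray(img, dtype=float)
    r = (4 * img[1:-1, 1:-1]
         - 2 * (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:])
         + img[:-2, :-2] + img[:-2, 2:] + img[2:, :-2] + img[2:, 2:])
    return np.sqrt(np.pi / 2) / 6.0 * np.mean(np.abs(r))
```

    The 1/6 factor comes from the mask's noise gain (the root of the sum of squared coefficients is 6), and sqrt(pi/2) converts a mean absolute deviation into a Gaussian standard deviation.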

  17. Validation of the GATE Monte Carlo simulation platform for modelling a CsI(Tl) scintillation camera dedicated to small animal imaging

    E-print Network

    Lazaro, D; Loudos, G; Strul, D; Santin, G; Giokaris, N; Donnarieix, D; Maigne, L; Spanoudaki, V; Styliaris, S; Staelens, S; Breton, V

    2004-01-01

    Monte Carlo simulations are increasingly used in scintigraphic imaging to model imaging systems and to develop and assess tomographic reconstruction algorithms and correction methods for improved image quantitation. GATE (GEANT4 Application for Tomographic Emission) is a new Monte Carlo simulation platform based on GEANT4 and dedicated to nuclear imaging applications. This paper describes the GATE simulation of a prototype of a scintillation camera dedicated to small animal imaging, consisting of a CsI(Tl) crystal array coupled to a position-sensitive photomultiplier tube. The relevance of GATE for modelling the camera prototype was assessed by comparing simulated 99mTc point spread functions, energy spectra, sensitivities, scatter fractions and an image of a capillary phantom with the corresponding experimental measurements. Results showed an excellent agreement between simulated and experimental data: experimental spatial resolutions were predicted with an error of less than 100 μm. The difference between experimental...

  18. Imaging performance comparison between a LaBr3:Ce scintillator-based and a CdTe semiconductor-based photon counting compact gamma camera.

    PubMed

    Russo, P; Mettivier, G; Pani, R; Pellegrini, R; Cinti, M N; Bennati, P

    2009-04-01

    The authors report on the performance of two small field of view, compact gamma cameras working in single photon counting in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr3:Ce continuous scintillator crystal (49 x 49 x 5 mm3) assembled with a flat-panel multianode photomultiplier tube with parallel readout. The second belongs to the class of semiconductor hybrid pixel detectors: specifically, a CdTe pixel detector (14 x 14 x 1 mm3) with 256 x 256 square pixels and a pitch of 55 μm, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with a selectable energy window, while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 μm, and total background noise of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported. PMID:19472638

  19. Multi-Focus Raw Bayer Pattern Image Fusion for Single-Chip Camera

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Chen, Jibin

    2015-12-01

    In this paper, an efficient patch-based image fusion approach for raw images of single-chip imaging devices incorporating the Bayer CFA pattern is presented. The multi-source raw Bayer-pattern images are first partitioned into half-overlapping patches. Then, the patches with the maximum clarity measure, defined directly on the raw Bayer pattern, are selected as the fused patches. Next, all the fused local patches are merged with a weighted-average method to reduce the blocking effect in the fused raw Bayer-pattern image. Finally, the real-color fused image is obtained by gradient-based demosaicing. Because the multi-source raw Bayer data are fused before demosaicing, the multi-sensor system is more efficient and the artifacts introduced during demosaicing do not accumulate in the fusion step. For comparison, the raw images are also interpolated first, and then various image fusion methods are used to obtain fused color images. Experimental results show that the proposed algorithm is effective.
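    A minimal sketch of the select-and-blend idea follows. It operates on grayscale stand-ins rather than true Bayer mosaics, and the clarity measure (variance of a simple gradient magnitude) is an assumption, not necessarily the paper's definition:

```python
import numpy as np

def fuse_multifocus(imgs, patch=16):
    """Fuse same-size source images patch by patch.

    For each half-overlapping patch location, pick the source with the
    highest clarity, then average the selected patches uniformly in
    the overlap regions to suppress blocking artifacts.
    """
    h, w = imgs[0].shape
    acc = np.zeros((h, w))
    wgt = np.zeros((h, w))
    step = patch // 2  # half-overlapping patches
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            best, best_c = None, -1.0
            for im in imgs:
                p = im[y:y + patch, x:x + patch]
                gy, gx = np.gradient(p)
                c = np.var(np.hypot(gx, gy))  # clarity measure
                if c > best_c:
                    best_c, best = c, p
            acc[y:y + patch, x:x + patch] += best
            wgt[y:y + patch, x:x + patch] += 1.0
    return acc / np.maximum(wgt, 1.0)
```

    On real Bayer data, the clarity measure would be computed per CFA channel so that color-plane aliasing is not mistaken for sharpness.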

  20. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.
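    The counting step can be illustrated with a simple local-maximum detector. This is an assumption-laden stand-in for the paper's semiautomated algorithm: a fixed neighborhood radius and a global threshold, with no montaging and no irregular-packing refinements.

```python
import numpy as np

def count_local_maxima(img, radius=2, threshold=0.0):
    """Count cone-like bright spots as unique local maxima.

    A pixel counts if it exceeds `threshold` and is the single largest
    value in its (2*radius+1)^2 neighborhood. Cone density is then
    count / imaged area.
    """
    h, w = img.shape
    centers = []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            v = img[y, x]
            if v <= threshold:
                continue
            nb = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            if v >= nb.max() and (nb == v).sum() == 1:
                centers.append((y, x))
    return len(centers), centers
```

    Real AO retinal images would first be background-flattened and lightly smoothed so that photoreceptor peaks dominate the local maxima.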

  1. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C. (Albuquerque, NM)

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  2. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory system comprises a 2D array of 33 x 33 solid-state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or a time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller, and all sub-masters route to a master controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface, and the image is generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target with respect to permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it soon became clear during the course of the work that the magnetic field imaging system had many potential applications, for example in medicine, security screening, quality assurance (such as in the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  3. A Nonparametric Approach to 3D Shape Analysis from Digital Camera Images - I. in Memory of W.P. Dayawansa

    E-print Network

    Patrangenaru, V; Sugathadasa, S

    2008-01-01

    In this article, for the first time, one develops a nonparametric methodology for an analysis of shapes of configurations of landmarks on real 3D objects from regular camera photographs, thus making 3D shape analysis very accessible. A fundamental result in computer vision by Faugeras (1992), Hartley, Gupta and Chang (1992) is that generically, a finite 3D configuration of points can be retrieved up to a projective transformation, from corresponding configurations in a pair of camera images. Consequently, the projective shape of a 3D configuration can be retrieved from two of its planar views. Given the inherent registration errors, the 3D projective shape can be estimated from a sample of photos of the scene containing that configuration. Projective shapes are here regarded as points on projective shape manifolds. Using large sample and nonparametric bootstrap methodology for extrinsic means on manifolds, one gives confidence regions and tests for the mean projective shape of a 3D configuration from its 2D c...

  4. Time-resolved pinhole camera imaging and extreme ultraviolet spectrometry on a hollow cathode discharge in xenon.

    PubMed

    Kieft, E R; van der Mullen, J J A M; Kroesen, G M W; Banine, V

    2003-11-01

    A pinhole camera, an extreme ultraviolet (EUV) spectrometer, a fast gatable multichannel plate EUV detector, and a digital camera have been installed on the ASML EUV laboratory setup to perform time-resolved pinhole imaging and EUV spectroscopy on a copy of the Philips EUV hollow cathode discharge plasma source. The main properties of the setup have been characterized. Time-resolved measurements within the plasma pulse in the EUV have been performed on this source. Specific features of the plasma, such as a ring shape in the initiation phase and a propagating sphere during the pinch phase, have either been discovered or confirmed experimentally. Relative populations of various ionization stages in the pinch plasma have been estimated on the basis of line intensities and calculated transition probabilities. The changes in relative line intensities of a single ionization stage can be explained by a combination of temperature and excitation/deexcitation balance effects. Experiments with argon dilution on a newer version of the source show considerable effect on the shape of the xenon EUV spectrum. PMID:14682890

  5. Theoretical studies of image artifacts and counting losses for different photon fluence rates and pulse-height distributions in single-crystal NaI(Tl) scintillation cameras

    SciTech Connect

    Strand, S.E.; Lamm, I.L.

    1980-03-01

    Using computer simulations, we have developed a theoretical model to explain the correlation between counting losses and image artifacts in single-crystal NaI(Tl) scintillation cameras. The theory, valid for scintillation cameras of the Anger type, is based on the physical properties of the NaI(Tl) crystal. Based on a statistical model using random numbers, pulse trains of the light pulses from scintillations were simulated. Pulse-height distributions for different event rates were calculated, with various Compton distributions. Images of point sources and line sources were generated. Counting losses and image artifacts were dependent on the shape of the pulse-height distribution. The calculated counting losses decreased with larger Compton distributions, due to increasing numbers of pileup events in the energy window; this also caused severe image distortion. The improvement of the spatial resolution with pileup rejection was demonstrated. The theoretical results are in good agreement with experimental results obtained previously. It is concluded that, in modern cameras, the decay time of the scintillation determines the amount of pileup, and the resolving time of the electronics governs the count rates. The results indicate that in some modern cameras the limits of the count-rate capacity in Anger cameras may be reached.
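    The pileup and counting-loss relationship discussed above can be illustrated with a toy Monte Carlo of a paralyzable detector, which is a simplification of the authors' full light-pulse model: an event is recorded only if it arrives at least a resolving time tau after the previous event, and every event (recorded or not) restarts the dead period.

```python
import numpy as np

def simulate_paralyzable(rate, tau, t_total, rng):
    """Measured count rate of a paralyzable detector.

    True events arrive as a Poisson process with the given rate; an
    event is recorded only if the gap from the previous event is at
    least tau (pileup otherwise extends the dead period).
    """
    n_events = rng.poisson(rate * t_total)
    times = np.sort(rng.uniform(0, t_total, n_events))
    recorded = 0
    last = -np.inf
    for t in times:
        if t - last >= tau:
            recorded += 1
        last = t  # paralyzable: every event resets the dead time
    return recorded / t_total
```

    For this model the measured rate follows the classic m = n * exp(-n * tau), so the simulation can be checked against the analytic loss curve.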

  6. Geminids 2002, 2009, and 2010: a brief report on an experiment with visual and photographic observations and images of all-sky cameras

    NASA Astrophysics Data System (ADS)

    Bryukhanov, I. S.; Korotkiy, S. A.; Lapitsky, Z.; Molchanov, L.; Ushakov, K.; Gain, A.; Grabovsky, R.; Starovoytov, D.; Chernyavsky, M.; Chernik, A.; Nazaruk, M.; Nazaruk, I.; Poluyanova, S.; Tumash, L.; Semenkov-von Zdorrfe, A.; Prokopovich, A.; Akulich, D.; Zaritskaya, E.

    2012-01-01

    In 2009 and 2010, an experiment to search for radiants of meteor showers on images of online all-sky cameras was attempted. In 2009, only two cameras of the Tzec Maun project were used: one near Pingelly in Australia and one in New Mexico in the United States. In 2010, two more cameras, one all-sky camera at the SAO in Nizhny Arkhyz, Russia, and the one in Kiruna, Sweden, brought the total to four. For comparison purposes, photographic meteor images made by Stanislav Korotkiy at the SAO during the Geminids' maximum in 2010, as well as visual observations carried out by a group of observers from the town of Maryina Horka in Belarus in 2002, were used. The goal of this attempt was to find out whether meteor images of all-sky cameras are suitable in practice for the determination of the radiants of meteor showers. This was a new astronomical project called All-Sky Beobachter, "Beobachter" being the German word for "observer".

  7. Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences

    Microsoft Academic Search

    Harpreet S. Sawhney; Yanlin Guo; Keith J. Hanna; Rakesh Kumar; Sean Adkins; Samuel Zhou

    2001-01-01

    This paper introduces a novel application of IBR technology for efficient rendering of high quality CG and live action stereoscopic sequences. Traditionally, IBR has been applied to render novel views using image and depth based representations of the plenoptic functions. In this work, we present a restricted form of IBR in which lower resolution images for the views to be

  8. Imaging performance of a-PET: a small animal PET camera

    Microsoft Academic Search

    Suleman Surti; Joel S. Karp; Amy E. Perkins; Chris A. Cardi; Margaret E. Daube-Witherspoon; Austin Kuhn; Gerd Muehllehner

    2005-01-01

    The evolution of positron emission tomography (PET) imaging for small animals has led to the development of dedicated PET scanner designs with high resolution and sensitivity. The animal PET scanner achieves these goals for imaging small animals such as mice and rats. The scanner uses a pixelated Anger-logic detector for discriminating 2 × 2 × 10 mm3 crystals with 19-mm-diameter

  9. Image/Video Deblurring using a Hybrid Camera Yu-Wing Tai

    E-print Network

    Kim, Dae-Shik

    ... Deconvolution of motion-blurred, high-resolution images yields high-frequency details, but with ringing ... low-resolution images recover artifact-free low-frequency results that lack high-frequency detail. These works clearly demonstrated that a (presumably cheap) auxiliary low-resolution device with fast temporal

  10. Improving super-resolution image reconstruction by in-plane camera rotation

    Microsoft Academic Search

    Stefan Bonchev; Kiril Alexiev

    2010-01-01

    In a digital optical imaging system, image resolution is constrained by several factors, including focus plane array pitch and optics. Super-resolution approaches aim at overcoming some of these limits by incorporating additional information of the object and\\/or combining several pictures of the same object, taken with some displacements between each other. This paper considers the second class of methods. The
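    The "combine several displaced pictures" idea behind this class of super-resolution methods can be illustrated with naive shift-and-add onto a finer grid. This sketch assumes the sub-pixel shifts are known exactly and fall on the fine grid; real methods must estimate shifts (or rotations, as in the paper above) and deconvolve the optical blur.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive super-resolution by shift-and-add.

    Each low-resolution frame is placed onto a grid `factor` times
    finer at its (known) sub-pixel offset, and overlapping samples
    are averaged. Shifts are in low-resolution pixel units.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    wgt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        wgt[oy::factor, ox::factor] += 1.0
    return np.where(wgt > 0, acc / np.maximum(wgt, 1.0), 0.0)
```

    With four frames shifted by half a pixel in each direction, every position of the 2x finer grid receives exactly one sample, so a point-sampled scene is recovered exactly.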

  11. Timothy York, timothy.york@gmail.com (a paper written under the guidance of Prof. Raj Jain): Image sensors are everywhere. They are present in single shot digital cameras, digital video cameras, embedded in cellular phones, and many more places.

    E-print Network

    Jain, Raj

    ... of Silicon Image Sensors; 1.2 Measuring Light Intensity; 1.3 CCD Image Sensors; 1.4 CMOS Image Sensors ... Abstract: Image sensors are everywhere. They are present in single shot digital cameras, digital video cameras, and many more places ... the fundamentals of how a digital image sensor works, focusing on how photons are converted into electrical signals

  12. Comparing Cosmic Cameras

    NSDL National Science Digital Library

    Learners will take and then compare images taken by a camera to learn about focal length (and its effects on field of view), resolution, and ultimately how cameras take close-up pictures of faraway objects. Finally, they will apply this knowledge to the images of comet Tempel 1 taken by two different spacecraft with three different cameras, in this case Deep Impact and those expected/obtained from Stardust-NExT. This lesson could easily be adapted for use with other NASA missions.

  13. The Mars observer camera

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Soulanille, T.; Ravine, M.

    1987-01-01

    A camera designed to operate under the extreme constraints of the Mars Observer Mission was selected by NASA in April, 1986. Contingent upon final confirmation in mid-November, the Mars Observer Camera (MOC) will begin acquiring images of the surface and atmosphere of Mars in September-October 1991. The MOC incorporates both a wide angle system for low resolution global monitoring and intermediate resolution regional targeting, and a narrow angle system for high resolution selective surveys. Camera electronics provide control of image clocking and on-board, internal editing and buffering to match whatever spacecraft data system capabilities are allocated to the experiment. The objectives of the MOC experiment follow.

  14. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    PubMed

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

    Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplying CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 Gb min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate the identification and analysis of local Ca(2+) events. It features an intuitive graphical user interface and runs under MATLAB and open-source Python. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time sequences of fluorescence ratio signals for identified event sites, along with Excel-compatible tables listing the amplitudes and kinetics of events. PMID:25047761
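    A bare-bones version of such a detection pipeline (robust per-pixel baseline, thresholding, sub-pixel centroid) might look like the sketch below. Parameter choices and the simple blob handling are illustrative, not those of the published routine.

```python
import numpy as np

def detect_events(stack, k=3.5):
    """Detect local transients in a (time, y, x) image stack.

    Baseline and noise are estimated per pixel from the full trace
    (median and a robust MAD-based sigma); pixels exceeding
    baseline + k*sigma are flagged, and each flagged frame's blob is
    localized with a sub-pixel intensity-weighted centroid.
    """
    base = np.median(stack, axis=0)
    mad = np.median(np.abs(stack - base), axis=0)
    sigma = 1.4826 * mad + 1e-12
    events = []
    for t, frame in enumerate(stack):
        mask = frame > base + k * sigma
        if mask.sum() < 3:  # ignore isolated noise pixels
            continue
        w = (frame - base) * mask
        yy, xx = np.indices(frame.shape)
        cy = (w * yy).sum() / w.sum()  # sub-pixel centroid
        cx = (w * xx).sum() / w.sum()
        events.append((t, cy, cx))
    return events
```

    A production pipeline would additionally smooth in space and time before thresholding and would separate multiple simultaneous blobs; the centroid step is what delivers the sub-pixel localization mentioned in the abstract.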

  15. Interferometer-based structured-illumination microscopy utilizing complementary phase relationship through constructive and destructive image detection by two cameras.

    PubMed

    Shao, L; Winoto, L; Agard, D A; Gustafsson, M G L; Sedat, J W

    2012-06-01

    In an interferometer-based fluorescence microscope, a beam splitter is often used to combine two emission wavefronts interferometrically. There are two perpendicular paths along which the interference fringes can propagate and normally only one is used for imaging. However, the other path also contains useful information. Here we introduced a second camera to our interferometer-based three-dimensional structured-illumination microscope (I(5)S) to capture the fringes along the normally unused path, which are out of phase by π relative to the fringes along the other path. Based on this complementary phase relationship and the well-defined phase interrelationships among the I(5)S data components, we can deduce and then computationally eliminate the path length errors within the interferometer loop using the simultaneously recorded fringes along the two imaging paths. This self-correction capability can greatly relax the requirement for eliminating the path length differences before and maintaining that status during each imaging session, which are practically challenging tasks. Experimental data is shown to support the theory. PMID:22472010
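    The complementary phase relationship can be verified numerically for the idealized lossless case; this is a textbook beam-splitter identity, not the paper's full I(5)S model:

```python
import numpy as np

# Two output ports of a lossless beam splitter carry complementary
# fringes: I1 = I0*(1+cos(phi))/2 and I2 = I0*(1-cos(phi))/2.
# The second camera therefore records fringes shifted by pi, and the
# per-pixel sum recovers the total intensity regardless of phi.
phi = np.linspace(0, 4 * np.pi, 256)  # path-length phase across the field
i0 = 1.0
i1 = 0.5 * i0 * (1 + np.cos(phi))     # camera on the "constructive" port
i2 = 0.5 * i0 * (1 - np.cos(phi))     # camera on the "destructive" port
assert np.allclose(i1 + i2, i0)                                   # energy conservation
assert np.allclose(i2, 0.5 * i0 * (1 + np.cos(phi + np.pi)))      # pi phase offset
```

    It is this fixed pi offset between the two simultaneously recorded fringe patterns that makes the path-length errors deducible from the data themselves.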

  16. Interferometer-based structured-illumination microscopy utilizing complementary phase relationship through constructive and destructive image detection by two cameras

    PubMed Central

    Shao, L; Winoto, L; Agard, DA; Gustafsson, MGL; Sedat, JW

    2012-01-01

    In an interferometer-based fluorescence microscope, a beam splitter is often used to combine two emission wavefronts interferometrically. There are two perpendicular paths along which the interference fringes can propagate and normally only one is used for imaging. However, the other path also contains useful information. Here we introduced a second camera to our interferometer-based three-dimensional structured-illumination microscope (I5S) to capture the fringes along the normally unused path, which are out of phase by π relative to the fringes along the other path. Based on this complementary phase relationship and the well-defined phase interrelationships among the I5S data components, we can deduce and then computationally eliminate the path length errors within the interferometer loop using the simultaneously recorded fringes along the two imaging paths. This self-correction capability can greatly relax the requirement for eliminating the path length differences before and maintaining that status during each imaging session, which are practically challenging tasks. Experimental data is shown to support the theory. PMID:22472010

  17. 2000 FPS digital airborne camera

    NASA Astrophysics Data System (ADS)

    Balch, Kris S.

    1998-11-01

    For many years 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, airborne weapon testing, range tracking, and other hazardous environments. The companies and government agencies using these cameras are in need of replacing them with a more cost-effective solution. Film-based cameras still produce the best resolving capability. However, film development time, chemical disposal, non-optimal lighting conditions, recurring media cost, and faster digital analysis are factors influencing the desire for a 16 mm film camera replacement. This paper will describe a new imager from Kodak that has been designed to replace 16 mm high-speed film cameras. Also included is a detailed configuration, operational scenario, and cost analysis of Kodak's imager for airborne applications. The KODAK EKTAPRO HG Imager, Model 2000 is a high-resolution color or monochrome CCD camera especially designed for replacement of rugged high-speed film cameras. The HG Imager is a self-contained camera. It features a high-resolution [512x384], light-sensitive CCD sensor with an electronic shutter. This shutter provides blooming protection that prevents "smearing" of bright light sources, e.g., the camera looking into a bright sun reflection. The HG Imager is a very rugged camera packaged in a highly integrated housing. This imager operates from +22 to 42 VDC. The HG Imager has a similar interface and form factor as that of high-speed film cameras, e.g., the Photosonics 1B. However, the HG also has digital interfaces such as 100BaseT Ethernet and RS-485 that enable control and image transfer. The HG Imager is designed to replace 16 mm film cameras that support rugged testing applications.

  18. QUOTA: the prototype camera for the WIYN one degree imager (ODI)

    NASA Astrophysics Data System (ADS)

    Jacoby, George H.; Howell, Steve B.; Harbeck, Daniel R.; Sawyer, David G.

    2010-07-01

    QUOTA is an 8Kx8K (16'x16') optical imager using four 4Kx4K orthogonal transfer CCD arrays (OTAs). Each OTA has 64 nearly independent CCDs having 480x494 12 μm pixels. By reading out several of the CCDs rapidly (20 Hz), the centroids of the stars in those CCDs can be used to measure image motion due to atmospheric effects, telescope shake, and guide errors. Motions are fed back to the remaining 250 CCDs that continue to integrate normally, allowing a shift of the collecting charge packets so that they always fall under the moving star images, thereby effecting low-order adaptive optics tip/tilt correction in the silicon to improve image quality. As a bonus, the stars that are read rapidly can be studied for high speed photometric variability. QUOTA was conceived to be a prototype for WIYN's 32Kx32K One Degree Imager (ODI), providing a means to test and advance the technical developments for the larger imager (e.g., detectors, controllers, optics, coatings, cooling, and software). QUOTA has been to the WIYN 3.5-m telescope only twice in its current configuration, but it has provided a wealth of information that has been useful to the engineering of ODI. We focus on the areas in which ODI has benefited from QUOTA in this report.

  19. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
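    A toy list-mode MLEM update, the building block of LM-OSEM with a single subset, can be sketched as below. The system rows and sensitivity here are purely illustrative, not the VIP system model; in a real implementation the sensitivity image is computed over all possible events, not just the detected ones.

```python
import numpy as np

def lm_mlem(event_rows, n_vox, n_iter=20, sens=None):
    """List-mode MLEM on a tiny voxelized problem.

    Each detected event i contributes a system-matrix row a_i (the
    probability of that event originating in each voxel). The update
    is lam <- (lam / sens) * sum_i a_i / <a_i, lam>.
    """
    a = np.asarray(event_rows, dtype=float)   # shape (n_events, n_vox)
    if sens is None:
        sens = a.sum(axis=0)                  # crude stand-in sensitivity
    lam = np.ones(n_vox)
    for _ in range(n_iter):
        fwd = a @ lam                         # expected rate for each event
        lam = lam / np.maximum(sens, 1e-12) * (a.T @ (1.0 / np.maximum(fwd, 1e-12)))
    return lam
```

    LM-OSEM accelerates this by cycling the update over subsets of the event list instead of the full list at every iteration.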

  20. Pros and cons of using GenICam based standard interfaces (GEV, U3V, CXP and CLHS) in a camera or image processing design

    NASA Astrophysics Data System (ADS)

    Feith, Werner

    2015-02-01

    When designing image-processing applications in hardware and software today, such as cameras, FPGA embedded processing or PC applications, it is a big task to select the right interfaces, in both function and bandwidth, for the complex system. This paper presents existing specifications that are well established in the market and can help in building such a system.

  1. One- and two-dimensional fast x-ray imaging of laser-driven implosion dynamics with x-ray streak cameras

    SciTech Connect

    Shiraga, H.; Heya, M.; Nakasuji, M.; Miyanaga, N.; Azechi, H.; Takabe, H.; Yamanaka, T.; Mima, K. [Institute of Laser Engineering, Osaka University, 2-6 Yamada-Oka, Suita, Osaka 565 (Japan)] [Institute of Laser Engineering, Osaka University, 2-6 Yamada-Oka, Suita, Osaka 565 (Japan)

    1997-01-01

    One-dimensional (1D) and two-dimensional (2D) x-ray imaging techniques with x-ray streak cameras have been developed and utilized for investigating the implosion dynamics of laser fusion targets. Conventional streaked 1D images of the shell motion of the imploding target were recorded together with time-resolved 2D multi-imaging x-ray streak images of the core shapes on the same x-ray streak camera. Precise comparison of the core dynamics between the experimental and simulation results was performed with an accuracy of 30 ps by fitting the trajectories of the x-ray emission from the imploding shell. © 1997 American Institute of Physics.

  2. Probing of two-dimensional grid patterns by means of camera-based image processing

    NASA Astrophysics Data System (ADS)

    Schroeck, Martin; Doiron, Theodore D.

    2000-03-01

    Camera-based probes and machine vision have found increased use in coordinate measuring machines over the past years, and the calibration of artifacts for these probes has become an important task for NIST. Until recently these artifacts have been calibrated using one- or two-dimensional measuring machines with electro-optic microscopes or scanning devices as probes. These sensors evaluate only a small section of the edge of a grid mark, and irregularities in this particular spot from local deformations or contamination influence the measurement result. Since these measurements result in a single number based on the entire field of view, the influence of small irregularities is not easily detected. Since different probes scan different parts of the grid mark edge, they may give systematically different positions of the mark. The conversion to video-based sensors has allowed more flexibility in edge detection, although most instruments still use least squares fits as the substitute geometry of straight edges. This method is very susceptible to noise and edge irregularities. We present some experiments for finding the sub-pixel edge point locations and fitting the set of edge points to a line using a fairly simple least sum of absolute deviations fit. Data from a high accuracy 2D measuring machine is used to show the strengths of the algorithms.
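    The least-sum-of-absolute-deviations line fit mentioned in this abstract can be sketched via iteratively reweighted least squares (IRLS). This is an illustrative implementation of the generic technique, not NIST's code:

    ```python
    import numpy as np

    def fit_line_l1(x, y, n_iters=50, eps=1e-8):
        """Fit y = a*x + b minimizing the sum of absolute deviations.

        Uses IRLS: weighting each squared residual by 1/|r| turns the
        least-squares objective into sum(|r|). eps guards against
        division by zero for points lying exactly on the line.
        """
        A = np.column_stack([x, np.ones_like(x)])
        w = np.ones_like(y, dtype=float)
        for _ in range(n_iters):
            Aw = A * w[:, None]                       # row-scaled design matrix
            coef, *_ = np.linalg.lstsq(Aw, y * w, rcond=None)
            r = y - A @ coef
            w = 1.0 / np.sqrt(np.abs(r) + eps)        # sqrt-weights for L1
        return coef                                   # (slope, intercept)
    ```

    Unlike an ordinary least-squares fit, this fit is largely unaffected by a single contaminated edge point, which is exactly the robustness to "noise and edge irregularities" the abstract is after.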

  3. Turbulence studies by fast camera imaging experiments in the TJII stellarator

    NASA Astrophysics Data System (ADS)

    Carralero, D.; de la Cal, E.; de Pablos, J. L.; de Coninck, A.; Alonso, J. A.; Hidalgo, C.; van Milligen, B. Ph.; Pedrosa, M. A.

    2009-06-01

    Experimental studies of turbulence and plasma-wall interaction by fast visible imaging in the TJII stellarator are presented. Visualization of low intensity phenomena was made possible by the installation of image intensifiers, allowing the direct observation of turbulent transport events in the scrape-off layer (SOL). The observation of turbulent structures propagating poloidally at the plasma edge has shown enough periodicity to allow correlation between them, with repetition rates around 10 kHz, as a quasicoherent mode. The analysis of plasma-wall interaction by filtered imaging has shown that there is a change in the 'preferred' plasma-wall interaction area, going from a hard-core toroidally limited plasma to a poloidally locally limited one in the transition from ECRH to NBI heated plasmas.

  4. The SO2 camera: A simple, fast and cheap method for ground-based imaging of SO2 in volcanic plumes

    Microsoft Academic Search

    Toshiya Mori; Mike Burton

    2006-01-01

    SO2 flux is widely monitored on active volcanoes as it gives a window into the hidden, subsurface magma dynamics. We present here a new approach to SO2 flux monitoring using ultraviolet imaging of the volcanic plume through carefully chosen filters to produce images of SO2 column amount. The SO2 camera heralds a breakthrough in both our ability to measure SO2
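    The two-filter retrieval behind an SO2 camera can be sketched as follows. This is a schematic of the standard approach (an on-band UV filter where SO2 absorbs and an off-band filter where it does not, ratioed against clear-sky images, then calibrated against cells of known column amount); the specific wavelengths and the linear calibration are general practice, not details given in this abstract:

    ```python
    import numpy as np

    def apparent_absorbance(plume_on, plume_off, sky_on, sky_off):
        """Per-pixel apparent SO2 absorbance from on-band (absorbed) and
        off-band (unabsorbed) UV images. Ratioing against clear-sky images
        removes instrument response and illumination gradients."""
        ratio = (plume_on / sky_on) / (plume_off / sky_off)
        return -np.log10(ratio)

    def column_amount(absorbance, slope, intercept=0.0):
        """Convert absorbance to SO2 column amount via a linear calibration,
        with slope/intercept derived from imaging calibration cells."""
        return slope * absorbance + intercept
    ```

    Integrating the resulting column-amount image across the plume and multiplying by the plume speed then yields the SO2 flux that such cameras monitor.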

  5. A 2 million pixel FIT-CCD image sensor for HDTV camera system

    Microsoft Academic Search

    K. Yonemoto; T. Iizuku; S. Nakamura; K. Harada; K. Wada; M. Negishi; H. Yamada; T. Tsunakawa; K. Shinohara; T. Ishimaru; Y. Kamide; T. Yamasaki; M. Yamagishi

    1990-01-01

    The image area of the frame FIT (frame-interline-transfer) CCD (charge-coupled-device) image sensor is 14.0 mm (H) × 7.9 mm (V), the effective number of pixels is 1920 (H) × 1036 (V), and the unit cell size of a pixel is 7.3 μm (H) × 7.6 μm (V). These specifications are for the high-definition-television (HDTV) format. The horizontal shift register consists of dual-channel, two-phase CCDs driven

  6. Observation of Geocorona using Lyman Alpha Imaging CAmera (LAICA) onboard the very small deep space explorer PROCYON

    NASA Astrophysics Data System (ADS)

    Kameda, Shingo; Yoshikawa, Ichiro; Taguchi, Makoto; Sato, Masaki; Kuwabara, Masaki

    Exospheric hydrogen atoms resonantly scatter solar ultraviolet radiation at the wavelength of 121.567 nm, causing an ultraviolet glow, the so-called "geocorona". Past observational results suggest that the geocorona extends to an altitude of about 20 R_E, where the intensity of geocoronal emission is comparable with that of interplanetary hydrogen emission. Recently, Bailey and Gruntman (2013) reported abrupt temporary increases (from 6% to 17%) in the total number of hydrogen atoms in the spherical shell from a geocentric distance of 3 to 8 R_E during geomagnetic storms. However, the relation between the high-altitude hydrogen exosphere and geomagnetic activity is still unclear. Past observation of the geocorona has mainly been performed using Earth orbiters, whose altitudes (e.g., 8 R_E) are not adequately high for observing the geocorona at high altitude. Observation of the geocorona from deep space has been conducted in the Mariner 5, Apollo 16, and Nozomi missions. Among them, only Apollo 16 carried a 2-D imager; its FOV was about 10 R_E and was not wide enough for imaging the whole geocorona extending to 20 R_E. In June 2013, we proposed the LAICA (Lyman Alpha Imaging CAmera) instrument onboard the very small deep space explorer PROCYON, which is planned to be launched in Dec 2014. Its FOV (25 R_E) is wide enough for imaging the whole geocoronal distribution. We are planning to observe the geocorona for more than one week with a temporal resolution of 2 h. LAICA was approved in Oct 2013 and its development is now ongoing. In this presentation, we will introduce the scientific objectives of LAICA and report the test results of the flight model.

  7. Assessment of a Monte-Carlo simulation of SPECT recordings from a new-generation heart-centric semiconductor camera: from point sources to human images.

    PubMed

    Imbert, Laetitia; Galbrun, Ernest; Odille, Freddy; Poussier, Sylvain; Noel, Alain; Wolf, Didier; Karcher, Gilles; Marie, Pierre-Yves

    2015-02-01

    Geant4 application for tomographic emission (GATE), a Monte-Carlo simulation platform, has previously been used for optimizing tomoscintigraphic images recorded with scintillation Anger cameras but not with the new-generation heart-centric cadmium-zinc-telluride (CZT) cameras. Using the GATE platform, this study aimed at simulating the SPECT recordings from one of these new CZT cameras and at assessing this simulation by direct comparison between simulated and actual recorded data, ranging from point sources to human images. Geometry and movement of detectors, as well as their respective energy responses, were modeled for the CZT 'D.SPECT' camera in the GATE platform. Both simulated and actual recorded data were obtained from: (1) point and linear sources of (99m)Tc for compared assessments of detection sensitivity and spatial resolution, (2) a cardiac insert filled with a (99m)Tc solution for compared assessments of contrast-to-noise ratio and sharpness of myocardial borders and (3) in a patient with myocardial infarction using segmented cardiac magnetic resonance images. Most of the data from the simulated images exhibited high concordance with the results of actual images with relative differences of only: (1) 0.5% for detection sensitivity, (2) 6.7% for spatial resolution, (3) 2.6% for contrast-to-noise ratio and 5.0% for sharpness index on the cardiac insert placed in a diffusing environment. There was also good concordance between actual and simulated gated-SPECT patient images for the delineation of the myocardial infarction area, although the quality of the simulated images was clearly superior with increases around 50% for both contrast-to-noise ratio and sharpness index. SPECT recordings from a new heart-centric CZT camera can be simulated with the GATE software with high concordance relative to the actual physical properties of this camera. These simulations may be conducted up to the stage of human SPECT images even if further refinement is needed in this setting. PMID:25574814
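    The simulated-versus-actual comparison in this abstract rests on image quality metrics such as the contrast-to-noise ratio. A minimal sketch of one common CNR definition and the percent relative difference is given below; the paper's exact ROI choices and metric definitions may differ:

    ```python
    import numpy as np

    def contrast_to_noise(image, signal_mask, background_mask):
        """Contrast-to-noise ratio between a signal ROI (e.g. myocardial
        wall) and a background ROI: (mean_signal - mean_bg) / std_bg.
        This is one common definition among several in the literature."""
        s = image[signal_mask].mean()
        b = image[background_mask].mean()
        return (s - b) / image[background_mask].std()

    def relative_difference(simulated, actual):
        """Percent relative difference used to compare a simulated metric
        against its value on the actually recorded image."""
        return 100.0 * abs(simulated - actual) / actual
    ```

    Reporting, say, a 2.6% relative difference in CNR between simulated and recorded cardiac-insert images is the kind of figure these two helpers would produce.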

  8. Re-Examination of Lunokhod Sites: Panoramas and Aims for LROC Investigations

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.

    2010-03-01

    Lunokhod 1 and 2 panoramas were digitized. Analyses of the panoramas distinguish three lunar landscape types: mare plain, highland, and joint-fissure area. High-resolution LRO Camera images will help locate the Lunokhod 1 site and refine both rover traverses.

  9. UVUDF: Ultraviolet Imaging of the Hubble Ultra Deep Field with Wide-Field Camera 3

    NASA Astrophysics Data System (ADS)

    Teplitz, Harry I.; Rafelski, Marc; Kurczynski, Peter; Bond, Nicholas A.; Soto, Emmaris; Grogin, Norman A.; Koekemoer, Anton M.; Atek, Hakim; Brown, Thomas M.; Coe, Dan A.; Colbert, James W.; Dai, Yu Sophia; Ferguson, Henry Closson; Finkelstein, Steven L.; Gardner, Jonathan P.; Gawiser, Eric J.; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel; Lee, Kyoung-Soo; Levay, Zoltan G.; De Mello, Duilia F.; Ravindranath, Swara; Ryan, Russell E.; Siana, Brian D.; Scarlata, Claudia; Voyer, Elysse; Windhorst, Rogier A.

    2014-06-01

    We present recent science results from the Ultraviolet Coverage of the Hubble Ultradeep Field (UVUDF) project, in which we obtained images of the HUDF with WFC3/UVIS in the F225W, F275W, and F336W filters (30 orbits per filter; half with UVIS post-flash). The UVUDF completes the Hubble Space Telescope wavelength coverage of the most studied field in the extragalactic sky in the major imaging bands (FUV through NIR). As illustrated by a new 13-band image of the HUDF, these data give us the best view yet obtained of the last 9-10 billion years of cosmic star formation. The UVUDF data enable the analysis of star-forming galaxies at z ~ 2 (Lyman break galaxies at the peak era of star formation) that would be inaccessible from the ground, including measurement of the slope of their UV continuum. The high spatial resolution of the UVUDF images is an unprecedented resource for studying the UV structure of galaxies at z ~ 1 and understanding how galaxies are formed from clumps of hot stars.

  10. A low-cost single-camera imaging system for aerial applicators

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Agricultural aircraft provide a readily available and versatile platform for airborne remote sensing. Although various airborne imaging systems are available, most of these systems are either too expensive or too complex to be of practical use for aerial applicators. The objective of this study was ...

  11. DYNAMIC ILM AN APPROACH TO INFRARED-CAMERA BASED DYNAMICAL LIFETIME IMAGING

    E-print Network

    (Extracted fragments.) The authors present a calibration-free dynamic carrier lifetime imaging approach based on infrared lifetime mapping (ILM); related work was performed in [7]. In the dynamic ILM approach, the time dependence of the carrier density for a non-injection-dependent carrier lifetime τ_eff is given…

  12. Airborne Digital Camera Image Semivariance for Evaluation of Forest Structural Damage at an Acid Mine Site

    Microsoft Academic Search

    Josée Lévesque; Douglas J King

    1999-01-01

    A forest downstream of a heavy metal acid tailings area at the KamKotia mine site near Timmins, Ontario shows visible signs of damage which include varied leaf size, leaf discoloration, dead branches, and increased individual tree crown and forest canopy openness. High resolution remote sensing has potential for providing means to spatially and temporally evaluate such damage. In particular, image

  13. Status of an Atmospheric Cherenkov Imaging Camera for the CANGAROO-III

    E-print Network

    Enomoto, Ryoji

    (Presentation slides; extracted fragments reference Cherenkov-light shower imaging with photomultiplier tubes (PMTs) for CANGAROO-II/CANGAROO-III, gamma-ray/proton shower discrimination, and a dark matter search toward the Galactic Center.)

  14. Skin hydration imaging using a long-wavelength near-infrared digital camera

    Microsoft Academic Search

    Michael Attas; Trevor Posthumus; Bernhard J. Schattka; Michael G. Sowa; Henry H. Mantsch; Shuliang L. Zhang

    2001-01-01

    Skin hydration is a key factor in skin health. Hydration measurements can provide diagnostic information on the condition of skin and can indicate the integrity of the skin barrier function. Near-infrared spectroscopy measures the water content of living tissue by its effect on tissue reflectance at a particular wavelength. Imaging has the important advantage of showing the degree of hydration

  15. High-accuracy spectropolarimetric imaging using photoelastic modulator-based cameras with low-polarization coatings

    Microsoft Academic Search

    Anna-Britt Mahler; Paula Smith; Russell Chipman; Greg Smith; Nasrat Raouf; Ab Davis; Bruce Hancock; Gary Gutt; David J. Diner

    Under NASA's Instrument Incubator Program (IIP), we are developing an electro-optical imaging approach to enable multiangle, multispectral, and polarimetric measurements of tropospheric aerosol column abundances and microphysical properties. From low Earth orbit, the measurements would be acquired from the ultraviolet to shortwave infrared at ~1 km spatial resolution over a broad swath. To achieve a degree of linear polarization (DoLP)

  16. Compact, rugged, and intuitive thermal imaging cameras for homeland security and law enforcement applications

    Microsoft Academic Search

    Charles M. Hanson

    2005-01-01

    Low cost, small size, low power uncooled thermal imaging sensors have completely changed the way the world views commercial law enforcement and military applications. Key applications include security, medical, automotive, power generation monitoring, manufacturing and process control, aerospace application, defense, environmental and resource monitoring, maintenance monitoring and night vision. Commercial applications also include law enforcement and military special operations. Each

  17. Implementing PET-guided biopsy: integrating functional imaging data with digital x-ray mammography cameras

    Microsoft Academic Search

    Irving N. Weinberg; Valera Zawarzin; Roberto Pani; Rodney C. Williams; Rita L. Freimanis; Nadia M. Lesko; E. A. Levine; N. Perrier; Wendie A. Berg; Lee P. Adler

    2001-01-01

    Purpose: Phantom trials using the PET data for localization of hot spots have demonstrated positional accuracies in the millimeter range. We wanted to perform biopsy based on information from both anatomic and functional imaging modalities, however we had a communication challenge. Despite the digital nature of DSM stereotactic X-ray mammography devices, and the large number of such devices in Radiology

  18. Performance of the low light level CCD camera for speckle imaging

    E-print Network

    S. K. Saha; V. Chinnappan

    2002-09-20

    A new generation CCD detector called low light level CCD (L3CCD) that performs like an intensified CCD without incorporating a micro channel plate (MCP) for light amplification was procured and tested. A series of short exposure images with millisecond integration time has been obtained. The L3CCD is cooled to about $-80^\\circ$ C by Peltier cooling.

  19. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.

    2003-01-01

    We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ±0.22 pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ~3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.
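    The slope statistics used in such landing-site hazard assessments come from slope maps derived from the DEMs. A minimal sketch of deriving a slope map from a gridded DEM with central-difference gradients is shown below; this illustrates the generic computation, not the USGS processing pipeline:

    ```python
    import numpy as np

    def slope_map(dem, pixel_size):
        """Slope in degrees at each DEM cell.

        dem is a 2-D array of elevations (same units as pixel_size);
        np.gradient returns d(elev)/d(row) and d(elev)/d(col) via
        central differences, from which the steepest-descent slope
        magnitude follows as arctan(|grad z|).
        """
        dzdy, dzdx = np.gradient(dem, pixel_size)
        return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    ```

    Summary statistics over such a map (for instance, the fraction of cells steeper than a lander's tolerance) are the kind of per-unit figures that landing simulations weight by the probability of landing on each hazard unit.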

  20. Digital camera focus assessment using a camera flange-mounted fiber optic probe

    Microsoft Academic Search

    Michael A. Marcus; Jiann-Rong Lee; Stanley Gross; T. Trembley

    1999-01-01

    During the assembly of high-end digital cameras, it is necessary to determine the location and orientation of the imager plane in order to assess the camera's focusing capability. An apparatus based on non-coherent light interferometry has been developed, which performs these tests immediately after the digital imager is installed into the camera body. The instrument includes a camera lens flange-