Sample records for camera LROC images

  1. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and sunlit polar regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  2. Lunar Reconnaissance Orbiter Camera (LROC) Instrument Overview

    Microsoft Academic Search

    M. S. Robinson; S. M. Brylow; M. Tschimmel; D. Humm; S. J. Lawrence; P. C. Thomas; B. W. Denevi; E. Bowman-Cisneros; J. Zerr; M. A. Ravine; M. A. Caplinger; F. T. Ghaemi; J. A. Schaffner; M. C. Malin; P. Mahanti; A. Bartels; J. Anderson; T. N. Tran; E. M. Eliason; A. S. McEwen; E. Turtle; B. L. Jolliff; H. Hiesinger

    2010-01-01

The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that

  3. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide Angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  4. Preliminary Mapping of Permanently Shadowed and Sunlit Regions Using the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Koeber, S.; Robinson, M. S.

    2010-12-01

The spin axis of the Moon is tilted by only 1.5° (compared with the Earth's 23.5°), leaving some areas near the poles in permanent shadow while other nearby regions remain sunlit for a majority of the year. Theory, radar data, neutron measurements, and Lunar CRater Observation and Sensing Satellite (LCROSS) observations suggest that volatiles may be present in the cold traps created inside these permanently shadowed regions, while areas of near-permanent illumination are prime locations for future lunar outposts due to benign thermal conditions and near-constant solar power. The Lunar Reconnaissance Orbiter (LRO) has two imaging systems that provide medium- and high-resolution views of the poles. During almost every orbit the LROC Wide Angle Camera (WAC) acquires images at 100 m/pixel of the polar region (80° to 90° north and south latitude). In addition, the LROC Narrow Angle Camera (NAC) targets selected regions of interest at 0.7 to 1.5 m/pixel [Robinson et al., 2010]. During the first 11 months of the nominal mission, LROC acquired almost 6,000 WAC images and over 7,300 NAC images of the polar regions (i.e., within 2° of the poles). By analyzing this time series of WAC and NAC images, regions of permanent shadow and permanent or near-permanent illumination can be quantified. The LROC Team is producing several reduced data products that graphically illustrate the illumination conditions of the polar regions. Illumination movie sequences are being produced that show how the lighting conditions change over a calendar year. Each frame of the movie sequence is a polar stereographic projected WAC image showing the lighting conditions at that moment. With the WAC's wide field of view (~100 km at an altitude of 50 km), each frame has repeat coverage between 88° and 90° at each pole. The same WAC images are also being used to develop multi-temporal illumination maps that show the percentage of time each 100 m × 100 m area is illuminated over a given period.
These maps are derived by stacking all the WAC frames, selecting a threshold to determine if the surface is illuminated, and summing the resulting binary images. In addition, mosaics of NAC images are also being produced for regions of interest at a scale of 0.7 to 1.5 m/pixel. The mosaics produced so far have revealed small illuminated surfaces on the scale of tens of meters that were previously thought to be shadowed during that time. The LROC dataset of the polar regions complements previous illumination analyses of Clementine images [Bussey et al., 1999], Kaguya topography [Bussey et al., 2010], and the current efforts underway by the Lunar Orbiter Laser Altimeter (LOLA) Team [Mazarico et al., 2010], and provides an important new dataset for science and exploration. References: Bussey et al. (1999), Illumination conditions at the lunar south pole, Geophysical Research Letters, 26(9), 1187-1190. Bussey et al. (2010), Illumination conditions of the south pole of the Moon derived from Kaguya topography, Icarus, 208, 558-564. Mazarico et al. (2010), Illumination of the lunar poles from the Lunar Orbiter Laser Altimeter (LOLA) topography data, paper presented at 41st LPSC, Houston, TX. Robinson et al. (2010), Lunar Reconnaissance Orbiter Camera (LROC) Instrument Overview, Space Sci Rev, 150, 81-124.
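The stack-threshold-sum procedure described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustration only: the threshold value, the array layout, and the assumption that the frames are already co-registered on a common polar stereographic grid are all assumptions not detailed in the abstract.

```python
import numpy as np

def illumination_map(frames, threshold):
    """Fraction of time each map cell is illuminated.

    frames: list of co-registered 2-D image arrays (same grid);
    threshold: DN value above which a cell counts as sunlit.
    Both the threshold and the grid are illustrative assumptions.
    """
    stack = np.stack(frames)        # shape (n_frames, rows, cols)
    lit = stack > threshold         # binary "illuminated" images
    return lit.mean(axis=0)         # summed and normalized per cell

# toy example: three frames covering a 2x2 area
frames = [np.array([[10, 300], [5, 250]]),
          np.array([[12, 280], [200, 240]]),
          np.array([[8, 310], [6, 260]])]
pct = illumination_map(frames, threshold=100)
# pct[0, 1] is 1.0 (always lit); pct[0, 0] is 0.0 (permanently shadowed)
```

A real pipeline would also mask off-limb or missing pixels before thresholding, but the core reduction is just this binary stack and mean.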

  5. LROC NAC Stereo Anaglyphs

    NASA Astrophysics Data System (ADS)

    Mattson, S.; McEwen, A. S.; Speyerer, E.; Robinson, M. S.

    2012-12-01

    The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) acquires high resolution (50 to 200 cm pixel scale) images of the Moon. In operation since June 2009, LROC NAC acquires geometric stereo pairs by rolling off-nadir on subsequent orbits. A new automated processing system currently in development will produce anaglyphs from most of the NAC geometric stereo pairs. An anaglyph is an image formed by placing one image from the stereo pair in the red channel, and the other image from the stereo pair in the green and blue channels, so that together with red-blue or red-cyan glasses, the 3D information in the pair can be readily viewed. These new image products will make qualitative interpretation of the lunar surface in 3D more accessible, without the need for intensive computational resources or special equipment. The LROC NAC is composed of two separate pushbroom CCD cameras (NAC L and R) aligned to increase the full swath width to 5 km from an altitude of 50 km. Development of the anaglyph processing system incorporates stereo viewing geometry, proper alignment of the NAC L and R frames, and optimal contrast normalization of the stereo pair to minimize extreme brightness differences, which can make stereo viewing difficult in an anaglyph. The LROC NAC anaglyph pipeline is based on a similar automated system developed for the HiRISE camera, on the Mars Reconnaissance Orbiter. Improved knowledge of camera pointing and spacecraft position allows for the automatic registration of the L and R frames by map projecting them to a polar stereographic projection. One half of the stereo pair must then be registered to the other so there is no offset in the vertical (y) direction. Stereo viewing depends on parallax only in the horizontal (x) direction. High resolution LROC NAC anaglyphs will be made available to the lunar science community and to the public on the LROC web site (http://lroc.sese.asu.edu).
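The channel assignment the abstract describes (one image of the pair in the red channel, the other in green and blue) reduces to an array stack. The sketch below omits the map projection, y-registration, and contrast normalization steps the abstract mentions, and the choice of which NAC frame feeds which channel is illustrative, not LROC's documented convention.

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan anaglyph from an already-registered stereo pair.

    left, right: 2-D grayscale arrays of identical shape.
    One image drives the red channel; the other drives green
    and blue (the cyan half), per the scheme in the abstract.
    """
    return np.stack([left, right, right], axis=-1)

# toy 2x2 pair: the result is a (2, 2, 3) RGB array
a = make_anaglyph(np.full((2, 2), 10), np.full((2, 2), 200))
```

Viewed through red-cyan glasses, each eye sees only its own half of the pair, so horizontal parallax between the two frames is perceived as depth.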

  6. Characterization of Previously Unidentified Lunar Pyroclastic Deposits using Lunar Reconnaissance Orbiter Camera (LROC) Data

    NASA Astrophysics Data System (ADS)

    Gustafson, O.; Bell, J. F.; Gaddis, L. R.; Hawke, B. R.; Giguere, T.

    2011-12-01

We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide Angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible, or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 13 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. A significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E & F on the lunar farside, isolated from other known similar deposits. These appear to be Alphonsus-type deposits erupted from a series of small vents aligned with the fracture system, suggesting their origin in volatile-rich vulcanian-style eruptions containing relatively small amounts of juvenile material. The presence of such volcanic features on the lunar farside outside of the major basins such as Moscoviense, Orientale, and South Pole - Aitken indicates that magma ascent and eruption have occurred even in the central farside highlands, despite the thicker farside crust. However, this is the only such occurrence that we have located, and it appears to represent an endpoint in the continuum of eruption styles where the eruption was just barely able to reach the surface but could not transport enough magma to the surface to form an effusive deposit. Many of the 47 potential locations screened were eliminated from consideration based on inconclusive evidence regarding their mode of emplacement.
Additional optical imaging, or analyses of other data sets such as radar, imaging spectroscopy, or thermal inertia, could result in identification of additional pyroclastic deposits, especially lighter-toned deposits. However, our search also confirms that most major regional and localized pyroclastic deposits have likely been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified pyroclastic deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits. Based on the locations where we identified previously unidentified pyroclastic deposits, the greatest potential for identification of additional pyroclastic deposits is likely to be in regions with other volcanic constructs associated with mare deposits, highland locations along the margins of maria, and smaller floor-fractured craters that have not yet been thoroughly imaged at higher resolution, particularly on the farside (such as Anderson E & F).

  7. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three-month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features in order to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquisition of multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near-permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high-resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100 m/pixel basemap with incidence angles (60°-80°) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document
the number of small impacts since 1971-1972. LROC allows us to determine the recent impact rate of bolides in the size range of 0.5 to 10 meters, which is currently not well known. Determining the impact rate at these sizes enables engineering remediation measures for future surface operations and interplanetary travel. The WAC has imaged nearly the entire Moon in seven wavelengths. A preliminary global WAC stereo-based topographic model is in preparation [1] and global color processing is underway [2]. As the mission progresses, repeat global coverage will be obtained as lighting conditions change, providing a robust photometric dataset. The NACs are revealing a wealth of morphologic features at the meter scale, providing the engineering and science constraints needed to support future lunar exploration. All of the Apollo landing sites have been imaged, as well as the majority of robotic landing and impact sites. Through the use of off-nadir slews a collection of stereo pairs is being acquired that enables 5-m scale topographic mapping [3-7]. Impact morphologies (terraces, impact melt, rays, etc.) are preserved in exquisite detail at all Copernican craters and are enabling new studies of impact mechanics and crater size-frequency distribution measurements [8-12]. Other topical studies including, for example, lunar pyroclastics, domes, and tectonics are underway [e.g., 10-17]. The first PDS data release of LROC data will be in March 2010, and will include all images from the commissioning phase and the first 3 months of the mapping phase. [1] Scholten et al. (2010) 41st LPSC, #2111; [2] Denevi et al. (2010a) 41st LPSC, #2263; [3] Beyer et al. (2010) 41st LPSC, #2678; [4] Archinal et al. (2010) 41st LPSC, #2609; [5] Mattson et al. (2010) 41st LPSC, #1871; [6] Tran et al. (2010) 41st LPSC, #2515; [7] Oberst et al. (2010) 41st LPSC, #2051; [8] Bray et al. (2010) 41st LPSC, #2371; [9] Denevi et al. (2010b) 41st LPSC, #2582; [10] Hiesinger et al.
(2010a) 41st LPSC, #2278; [11] Hiesinger et al. (2010b) 41st LPSC, #2304; [12] van der Bogert et al. (2010) 41st LPSC, #2165;

  8. Secondary Craters and the Size-Velocity Distribution of Ejected Fragments around Lunar Craters Measured Using LROC Images

    NASA Astrophysics Data System (ADS)

    Singer, K. N.; Jolliff, B. L.; McKinnon, W. B.

    2013-12-01

Affiliation: Earth and Planetary Sciences, Washington University in St. Louis, St. Louis, MO, United States. We report results from analyzing the size-velocity distribution (SVD) of secondary-crater-forming fragments from the 93 km diameter Copernicus impact. We measured the diameters of secondary craters and their distances from Copernicus using LROC Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) image data. We then estimated the velocity and size of the ejecta fragment that formed each secondary crater from the range equation for a ballistic trajectory on a sphere and Schmidt-Holsapple scaling relations. Size scaling was carried out in the gravity regime for both non-porous and porous target material properties. We focus on the largest ejecta fragments (d_fmax) at a given ejection velocity (v_ej) and fit the upper envelope of the SVD using quantile regression to an equation of the form d_fmax = A * v_ej^(-β). The velocity exponent, β, describes how quickly fragment sizes fall off with increasing ejection velocity during crater excavation. For Copernicus, we measured 5800 secondary craters at distances of up to 700 km (15 crater radii), corresponding to an ejecta fragment velocity of approximately 950 m/s. This mapping only includes secondary craters that are part of a radial chain or cluster. The two largest craters in chains near Copernicus that are likely to be secondaries are 6.4 and 5.2 km in diameter. We obtained a velocity exponent, β, of 2.2 ± 0.1 for a non-porous surface. This result is similar to Vickery's [1987, GRL 14] determination of β = 1.9 ± 0.2 for Copernicus using Lunar Orbiter IV data.
The availability of WAC 100 m/pix global mosaics with illumination geometry optimized for morphology allows us to update and extend the work of Vickery [1986, Icarus 67, and 1987], who compared secondary crater SVDs for craters on the Moon, Mercury, and Mars. Additionally, meter-scale NAC images enable characterization of secondary crater morphologies and fields around much smaller primary craters than were previously investigated. Combined results from all previous studies of ejecta fragment SVDs from secondary crater fields show that β ranges between approximately 1 and 3. First-order spallation theory predicts a β of 1 [Melosh 1989, Impact Cratering, Oxford Univ. Press]. Results in Vickery [1987] for the Moon exhibit a generally decreasing β with increasing primary crater size (5 secondary fields mapped). In the same paper, however, this trend is flat for Mercury (3 fields mapped) and opposite for Mars (4 fields mapped). SVDs for craters on large icy satellites (Ganymede and Europa), with gravities not too dissimilar to lunar gravity, show generally low velocity exponents (β between 1 and 1.5), except for the very largest impactor measured: the 585-km-diameter Gilgamesh basin on Ganymede (β = 2.6 ± 0.4) [Singer et al., 2013, Icarus 226]. The present work, focusing initially on lunar craters using LROC data, will attempt to confirm or clarify these trends, and expand the number of examples under a variety of impact conditions and surface materials to evaluate possible causes of variations.
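As a rough check of the method the abstract describes, the ballistic range equation for a spherical, airless body can be inverted to recover ejection velocity from secondary-crater distance. The sketch below assumes a conventional 45° ejection angle (the abstract does not state the angle used); with that assumption, the 700 km maximum range quoted for Copernicus secondaries yields a velocity in the same neighborhood as the abstract's ~950 m/s.

```python
import math

G_MOON = 1.62       # lunar surface gravity, m/s^2
R_MOON = 1.7374e6   # lunar radius, m

def ejection_velocity(range_m, theta_deg=45.0):
    """Ejection velocity from the spherical-body ballistic range equation.

    Solves tan(phi/2) = v^2 sin(t)cos(t) / (g R - v^2 cos^2(t)) for v,
    where phi = range / R is the range angle and t the ejection angle.
    The 45-degree default is an illustrative assumption.
    """
    phi = range_m / R_MOON
    tan_half = math.tan(phi / 2.0)
    t = math.radians(theta_deg)
    v2 = (tan_half * G_MOON * R_MOON
          / (math.sin(t) * math.cos(t) + tan_half * math.cos(t) ** 2))
    return math.sqrt(v2)

v = ejection_velocity(700e3)   # roughly 10^3 m/s for the farthest Copernicus secondaries
```

Note that on a sphere the range angle, not the flat-surface range formula, must be used: at 700 km (15 crater radii) the flat-target approximation already underestimates the required velocity noticeably.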

  9. Depths, Diameters, and Profiles of Small Lunar Craters From LROC NAC Stereo Images

    NASA Astrophysics Data System (ADS)

    Stopar, J. D.; Robinson, M.; Barnouin, O. S.; Tran, T.

    2010-12-01

Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images (pixel scale ~0.5 m) provide new 3-D views of small craters (40 m < D < 200 m). We extracted topographic profiles from 85 of these craters in mare and highland terrains between 18.1-19.1°N and 5.2-5.4°E to investigate relationships among crater shape, age, and target. Obvious secondary craters (e.g., clustered) and moderately to heavily degraded craters were excluded. The freshest craters included in the study have crisp rims, bright ejecta, and no superposed craters. The depth, diameter, and profiles of each crater were determined from a NAC-derived DTM (M119808916/M119815703) tied to LOLA topography with better than 1 m vertical resolution (see [1]). Depth/diameter ratios for the selected craters are generally between 0.12 and 0.2. Crater profiles were classified into one of 3 categories: V-shaped, U-shaped, or intermediate (craters on steep slopes were excluded). Craters were then morphologically classified according to [2], where crater shape is determined by changes in material strength between subsurface layers, resulting in bowl-shaped, flat-bottomed, concentric, or central-mound crater forms. In this study, craters with U-shaped profiles tend to be small (<60 m) and flat-bottomed, while V-shaped craters have steep slopes (~20°), little to no floor, and a range of diameters. Both fresh and relatively degraded craters display the full range of profile shapes (from U to V and all stages in between). We found it difficult to differentiate U-shaped craters from V-shaped craters without the DTM, and we saw no clear correlation between morphologic and profile classification. Further study is still needed to increase our crater statistics and expand on the relatively small population of craters included here.
For the craters in this study, we found that block abundances correlate with relative crater degradation state as defined by [3], where abundant blocks signal fresher craters; however, block abundances do not correlate with U- or V-shaped profiles. The craters examined here show that profile shape cannot be used to determine the relative age or degradation state as might be inferred from [4, for example]. The observed variability in crater profiles may be explained by local variations in regolith thickness [e.g., 2, 5], impactor velocity, and/or possibly bolide density. Ongoing efforts will quantify the possible effects of solitary secondary craters and investigate whether or not depth/diameter ratios and crater profiles vary between different regions of the Moon (thick vs thin regolith, highlands vs mare, and old vs young mare). References: [1] Tran T. et al. (2010) 41st LPSC, Abstract 2515. [2] Quaide W. L. and V. R. Oberbeck (1968) JGR, 73: 5247-5270. [3] Basilevsky A. T. (1976) Proc LPSC 7th, p. 1005-1020. [4] Soderblom L. A. and L. A. Lebofsky (1972) JGR, 77: 279-296. [5] Wilcox B. B. et al. (2005) Met. Planet. Sci., 40: 695-710.
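The depth/diameter measurement this record describes reduces to simple arithmetic on a DTM-derived profile. The sketch below is an illustration under stated assumptions: the rim crests are taken as given indices, and rim height is averaged between the two crests; the study's actual rim-picking procedure is not described in the abstract.

```python
import numpy as np

def depth_diameter_ratio(profile, rim_left, rim_right):
    """Crater depth/diameter ratio from a topographic profile.

    profile: (distance_m, elevation_m) pairs along a line across
    the crater; rim_left, rim_right: indices of the two rim crests
    (assumed already identified). Depth is measured from the mean
    rim elevation down to the lowest point between the rims.
    """
    dist, elev = np.asarray(profile, dtype=float).T
    diameter = dist[rim_right] - dist[rim_left]
    rim_height = 0.5 * (elev[rim_left] + elev[rim_right])
    depth = rim_height - elev[rim_left:rim_right + 1].min()
    return depth / diameter

# toy profile: a 100 m wide, 15 m deep crater, so d/D = 0.15,
# inside the 0.12-0.2 range reported in the abstract
profile = [(0, 5), (25, 0), (50, -10), (75, 0), (100, 5)]
ratio = depth_diameter_ratio(profile, rim_left=0, rim_right=4)
```

Classifying the same profile as U- versus V-shaped would additionally require examining floor width and wall slope, which this minimal sketch does not attempt.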

  10. LROC Advances in Lunar Science

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.

    2012-12-01

Since entering orbit in 2009 the Lunar Reconnaissance Orbiter Camera (LROC) has acquired over 700,000 Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) images of the Moon. This new image collection is fueling research into the origin and evolution of the Moon. NAC images revealed a volcanic complex 35 × 25 km (60°N, 100°E), between Compton and Belkovich craters (CB). The CB terrain sports volcanic domes and irregular depressed areas (caldera-like collapses). The volcanic complex corresponds to an area of high silica content (Diviner) and high Th (Lunar Prospector). A low density of impact craters on the CB complex indicates a relatively young age. The LROC team mapped over 150 volcanic domes and 90 volcanic cones in the Marius Hills (MH), many of which were not previously identified. Morphology and compositional estimates (Diviner) indicate that MH domes are silica poor, and are products of low-effusion mare lavas. Impact melt deposits are observed with Copernican impact craters (>10 km) on exterior ejecta, the rim, inner wall, and crater floors. Preserved impact melt flow deposits are observed around small craters (<25 km diam.), and estimated melt volumes exceed predictions. At these diameters the amount of melt predicted is small, and melt that is produced is expected to be ejected from the crater. However, we observe well-defined impact melt deposits on the floor of highland craters down to 200 m diameter. A globally distributed population of previously undetected contractional structures was discovered. Their crisp appearance and associated impact crater populations show that they are young landforms (<1 Ga). NAC images also revealed small extensional troughs. Crosscutting relations with small-diameter craters and depths as shallow as 1 m indicate ages <50 Ma. These features place bounds on the amount of global radial contraction and the level of compressional stress in the crust.
WAC temporal coverage of the poles allowed quantification of highly illuminated regions, including one site that remains lit for 94% of a year (longest eclipse period of 43 hours). Targeted NAC images provide higher-resolution characterization of key sites with permanent shadow and extended illumination. Repeat WAC coverage provides an unparalleled photometric dataset allowing spatially resolved solutions (currently 1 degree) to Hapke's photometric equation, data invaluable for photometric normalization and for interpreting physical properties of the regolith. The WAC color also provides the means to solve for titanium and to distinguish subtle age differences within Copernican-aged materials. The longevity of the LRO mission allows follow-up NAC and WAC observations of previously known and newly discovered targets over a range of illumination and viewing geometries. Of particular merit is the acquisition of NAC stereo pairs and oblique sequences. With the extended SMD phase, the LROC team is working towards imaging the whole Moon with pixel scales of 50 to 200 cm.

  11. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse sciences: computing, mechanics, thermal physics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technology. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  12. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse sciences: computing, mechanics, thermal physics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technology. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  13. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  14. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  15. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3DTI, a large-volume time-projection chamber, provides accurate (approximately 0.4 mm resolution) 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3DTI volume. The performance of the NIC in laboratory and accelerator tests is presented.

  16. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  17. LROC Observations of Geologic Features in the Marius Hills

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Stopar, J. D.; Hawke, R. B.; Denevi, B. W.; Robinson, M. S.; Giguere, T.; Jolliff, B. L.

    2009-12-01

    Lunar volcanic cones, domes, and their associated geologic features are important objects of study for the LROC science team because they represent possible volcanic endmembers that may yield important insights into the history of lunar volcanism and are potential sources of lunar resources. Several hundred domes, cones, and associated volcanic features are currently targeted for high-resolution LROC Narrow Angle Camera (NAC) imagery [1]. The Marius Hills, located in Oceanus Procellarum (centered at ~13.4°N, 55.4°W), represent the largest concentration of these volcanic features on the Moon, including sinuous rilles, volcanic cones, domes, and depressions [e.g., 2-7]. The Marius region is thus a high priority for future human lunar exploration, as signified by its inclusion in the Project Constellation list of notional future human lunar exploration sites [8], and will be an intense focus of interest for LROC science investigations. Previous studies of the Marius Hills have utilized telescopic, Lunar Orbiter, Apollo, and Clementine imagery to study the morphology and composition of the volcanic features in the region. Complementary LROC studies of the Marius region will focus on high-resolution NAC images of specific features for studies of morphology (including flow fronts, dome/cone structure, and possible layering) and topography (using stereo imagery). Preliminary studies of the new high-resolution images of the Marius Hills region reveal small-scale features in the sinuous rilles, including possible outcrops of bedrock and lobate lava flows from the domes. The observed Marius Hills are characterized by rough surface textures, including the presence of large boulders (~3-5 m diameter) at the summits, which is consistent with the radar-derived conclusions of [9].
Future investigations will involve analysis of LROC stereo photoclinometric products and coordinating NAC images with the multispectral images collected by the LROC WAC, especially the ultraviolet data, to enable measurements of color variations within and amongst deposits and provide possible compositional insights, including the location of possibly related pyroclastic deposits. References: [1] J. D. Stopar et al. (2009), LRO Science Targeting Meeting, Abs. 6039 [2] Greeley R (1971) Moon, 3, 289-314 [3] Guest J. E. (1971) Geol. and Phys. of the Moon, p. 41-53. [4] McCauley J. F. (1967) USGS Geologic Atlas of the Moon, Sheet I-491 [5] Weitz C. M. and Head J. W. (1999) JGR, 104, 18933-18956 [6] Heather D. J. et al. (2003) JGR, doi:10.1029/2002JE001938 [7] Whitford-Stark, J. L., and J. W. Head (1977) Proc. LSC 8th, 2705-2724 [8] Gruener J. and Joosten B. K. (2009) LRO Science Targeting Meeting, Abs. 6036 [9] Campbell B. A. et al. (2009) JGR, doi:10.1029/2008JE003253.

  18. LRO Camera Imaging of Constellation Sites

    NASA Astrophysics Data System (ADS)

    Gruener, J.; Jolliff, B. L.; Lawrence, S.; Robinson, M. S.; Plescia, J. B.; Wiseman, S. M.; Li, R.; Archinal, B. A.; Howington-Kraus, A. E.

    2009-12-01

    One of the top priorities for Lunar Reconnaissance Orbiter Camera (LROC) imaging during the "exploration" phase of the mission is thorough coverage of 50 sites selected to represent a wide variety of terrain types and geologic features that are of interest for human exploration. These sites, which are broadly distributed around the Moon and include locations at or near both poles, will provide the Constellation Program with data for a set of targets that represent a diversity of scientific and resource opportunities, thus forming a basis for planning for scientific exploration, resource development, and mission operations including traverse and habitation zone planning. Identification of the Constellation targets is not intended to be a site-selection activity. Sites include volcanic terrains (surfaces with young and old basalt flows, pyroclastic deposits, vents, fissures, domes, low shields, rilles, wrinkle ridges, and lava tubes), impact craters and basins (crater floors, central peaks, terraces and walls; impact-melt and ejecta deposits, basin ring structures; and antipodal terrain), and contacts of geologic features in areas of complex geology. Sites at the poles represent different lighting conditions and include craters with areas of permanent shadow. Sites were also chosen that represent typical feldspathic highlands terrain, areas in the highlands with anomalous compositions, and unusual features such as magnetic anomalies. These sites were reviewed by the Lunar Exploration Analysis Group (LEAG). These sites all have considerable scientific and exploration interest and were derived from previous studies of potential lunar landing sites, supplemented with areas that capitalize on discoveries from recent orbital missions. Each site consists of nested regions of interest (ROI), including 10×10 km, 20×20 km, and 40×40 km areas. 
Within the 10×10 and 20×20 ROIs, the goal is to compile a set of narrow-angle-camera (NAC) observations for a controlled mosaic, photometric and geometric stereo, and images taken at low and high sun to enhance morphology and albedo, respectively. These data will provide the basis for topographic maps, digital elevation models, and slope and boulder hazard maps that could be used to establish landing or habitation zones. Within the 40×40 ROIs, images will be taken to achieve the best possible high-resolution mosaics. All ROIs will have wide-angle-camera context images covering the sites and surrounding areas. At the time of writing (prior to the end of the LRO commissioning phase), over 500 individual NAC frames have been acquired for 47 of the 50 sites. Because of the polar orbit, the majority of repeat coverage occurs for the polar and high latitude sites. Analysis of the environment for several representative Constellation site ROIs will be presented.

  19. Computer-Assisted Detection of Collapse Pits in LROC NAC Images

    NASA Astrophysics Data System (ADS)

    Wagner, R. V.; Robinson, M. S.

    2012-12-01

    Pits in mare basalts and impact melt deposits provide unique environments for human shelters and preservation of geologic information. Due to their steep walls, pits are most distinguishable when the Sun is high (pit walls cast shadows while impact crater walls do not). Because of the large number of NAC images acquired every day (>350), each typically with 5000 samples and 52,224 lines, it is not feasible to carefully search each image manually, so we developed a shadow detection algorithm (Pitscan) that analyzes an image in thirty seconds. It locates blocks of pixels that are below a digital number (DN) cutoff value, indicating that the block of pixels is "in shadow", and then runs a DN profile in the direction of solar lighting, comparing average DN values of the up-Sun and down-Sun sides. If the up-Sun average DN is higher than the down-Sun average, the shadow is assumed to be from a positive relief feature, and ignored. Otherwise, Pitscan saves a 200 x 200 pixel sub-image for later manual review. The algorithm currently generates ~150 false positives for each successful pit identification. This number would be unacceptable for an algorithm designed to catalog a common feature, but since the logic is merely intended to assist humans in locating an unusual type of feature, the false alarm rate is acceptable, and the current version allows a human to effectively check 10,000 NAC images for pits (over 2500 gigapixels) per hour. The false negative rate is not yet known; however, Pitscan detected every pit in a test on a small subset of the images known to contain pits. Pitscan is only effective when the Sun is within 50° of the zenith. When the Sun is closer to the horizon, crater walls often cast shadows, resulting in unacceptable numbers of false positives. Due to the Sun angle limit, only regions within 50° latitude of the equator are searchable. To date, 25.42% of the Moon has been imaged within this constraint.
Early versions of Pitscan found more than 150 small (average diameter 15 m) pits in impact melt deposits of Copernican craters [1]. More recently, improvements to the algorithm revealed two new large mare pits, similar to the three pits discovered in Kaguya images [2]. One is in Schlüter crater, a mare-filled crater near Orientale basin, with a 20 x 40 m opening, approximately 60 m deep. The second new pit is in Lacus Mortis (44.96°N, 25.61°E) in a tectonically complex region west of Burg crater. This pit is the largest mare pit found to date, with an opening approximately 100 x 150 m, and a floor more than 90 m below the surrounding terrain. Most interesting from an exploration point of view is the fact that the east wall appears to have collapsed, leaving a relatively smooth ~22° slope from the surrounding mare down to the pit floor. Computer-assisted feature detection is an effective method of locating rare features in the extremely large high-resolution NAC dataset. Pitscan enabled the discovery of unknown collapse pits both in the mare and highlands. These pits are an important resource for future surface exploration, both by providing access to pristine cross-sections of the near-surface and by providing radiation and micrometeorite shielding for human outposts. [1] Wagner, R.V. et al. (2012), LPSC XLIII, #2266 [2] Haruyama, J. et al. (2010), LPSC XLI, #1285
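
    The screening logic described above can be sketched in a few lines of array code. The DN cutoff, neighbourhood size, and border handling below are illustrative assumptions, not the actual Pitscan implementation.

```python
import numpy as np

def pit_candidates(img, dn_cutoff, sun_dir, margin=5):
    """Flag shadow pixels whose up-Sun neighbourhood is NOT brighter than
    the down-Sun one; a brighter up-Sun side marks a positive-relief
    shadow (e.g. a boulder) and is ignored.  sun_dir is a unit (row, col)
    vector pointing toward the Sun in image coordinates."""
    dr, dc = (int(round(v * margin)) for v in sun_dir)
    shadow_rows, shadow_cols = np.nonzero(img < dn_cutoff)
    candidates = []
    for r, c in zip(shadow_rows, shadow_cols):
        # skip pixels whose profile windows would leave the image
        if not (margin + 1 <= r < img.shape[0] - margin - 1 and
                margin + 1 <= c < img.shape[1] - margin - 1):
            continue
        up = img[r + dr - 1:r + dr + 2, c + dc - 1:c + dc + 2]    # toward the Sun
        down = img[r - dr - 1:r - dr + 2, c - dc - 1:c - dc + 2]  # away from the Sun
        if up.mean() <= down.mean():      # not a positive-relief shadow: keep it
            candidates.append((int(r), int(c)))
    return candidates
```

    A real pipeline would group adjacent shadow pixels into blocks and cut sub-images around surviving candidates; the core up-Sun/down-Sun comparison is the same.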

  20. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, E_N > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.

  1. Camera Identification from Printed Images Miroslav Goljan*

    E-print Network

    Fridrich, Jessica

    Camera Identification from Printed Images. Miroslav Goljan, Jessica Fridrich, and Jan Lukáš. Keywords: camera identification, Photo-Response Non-Uniformity, printed image identification. 1. INTRODUCTION. With the proliferation of digital imaging technology, the problem of establishing a link between

  2. On an assessment of surface roughness estimates from lunar laser altimetry pulse-widths for the Moon from LOLA using LROC narrow-angle stereo DTMs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Poole, William

    2013-04-01

    Neumann et al. [1] proposed that laser altimetry pulse-widths could be employed to derive "within-footprint" surface roughness, as opposed to surface roughness estimated between laser altimetry pierce-points, such as the example for Mars [2] and more recently from the 4-pointed star-shaped LOLA (Lunar Reconnaissance Orbiter Laser Altimeter) onboard NASA-LRO [3]. Since 2009, LOLA has been collecting extensive global laser altimetry data with a 5 m footprint and ~25 m between the 5 points in a star shape. In order to assess how accurately surface roughness (defined as simple RMS after slope correction) derived from LROC matches surface roughness derived from LOLA footprints, publicly released LROC-NA (LRO Camera Narrow Angle) 1 m Digital Terrain Models (DTMs) were employed to measure the surface roughness directly within each 5 m footprint. A set of 20 LROC-NA DTMs were examined. Initially, the match-up between the LOLA and LROC-NA orthorectified images (ORIs) is assessed visually to ensure that the co-registration is better than the LOLA footprint resolution. For each LOLA footprint, the pulse-width geolocation is then retrieved and used to "cookie-cut" the surface roughness and slopes derived from the LROC-NA DTMs. The investigation, which includes data from a variety of different landforms, shows little, if any, correlation between surface roughness estimated from DTMs and LOLA pulse-widths at sub-footprint scale. In fact, perceptible correlation between LOLA and LROC DTMs appears only at baselines of 40-60 m for surface roughness and 20 m for slopes. [1] Neumann et al. Mars Orbiter Laser Altimeter pulse width measurements and footprint-scale roughness. Geophysical Research Letters (2003) vol. 30 (11), paper 1561. DOI: 10.1029/2003GL017048 [2] Kreslavsky and Head. Kilometer-scale roughness of Mars: results from MOLA data analysis. J Geophys Res (2000) vol. 105 (E11) pp. 26695-26711. [3] Rosenburg et al.
Global surface slopes and roughness of the Moon from the Lunar Orbiter Laser Altimeter. Journal of Geophysical Research (2011) vol. 116, paper E02001. DOI: 10.1029/2010JE003716 [4] Chin et al. Lunar Reconnaissance Orbiter Overview: The Instrument Suite and Mission. Space Science Reviews (2007) vol. 129 (4) pp. 391-419
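
    The "simple RMS after slope correction" definition used above can be sketched as a plane fit over the DTM posts inside a footprint, with the RMS of the residuals taken as the roughness. Footprint extraction and co-registration are omitted; the patch size and pixel scale are assumptions.

```python
import numpy as np

def footprint_roughness(dtm_patch, pixel_size=1.0):
    """RMS surface roughness after slope correction: fit a plane to the
    DTM heights inside a (e.g. 5 m) laser footprint and return the RMS
    of the residuals about that plane."""
    n_rows, n_cols = dtm_patch.shape
    x, y = np.meshgrid(np.arange(n_cols), np.arange(n_rows))
    A = np.column_stack([x.ravel() * pixel_size, y.ravel() * pixel_size,
                         np.ones(dtm_patch.size)])
    coeffs, *_ = np.linalg.lstsq(A, dtm_patch.ravel(), rcond=None)
    residuals = dtm_patch.ravel() - A @ coeffs
    return np.sqrt(np.mean(residuals ** 2))
```

    A tilted but perfectly planar patch therefore has zero roughness; only relief about the best-fit plane contributes.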

  3. LROC analysis of detector-response compensation in SPECT

    Microsoft Academic Search

    Howard C. Gifford; Michael A. King; R. Glenn Wells; William G. Hawkins; Manoj V. Narayanan; P. Hendrik Pretorius

    2000-01-01

    Localization ROC (LROC) observer studies examined whether detector response compensation (DRC) in ordered-subset, expectation-maximization (OSEM) reconstructions helps in the detection and localization of hot tumors. Simulated gallium (Ga-67) images of the thoracic region were used in the study. The projection data modeled the acquisition of attenuated 93- and 185-keV photons with a medium-energy parallel-hole collimator, but scatter was not modeled.

  4. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with the matching features, we get disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices, which are computed from the fundamental matrix estimated with the normalized 8-point algorithm under a RANSAC scheme. We then determine the scale factor of the translation matrix by d-motion; this is required because the camera motion obtained from the essential matrix is determined only up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and surveillance systems which need not only depth information but also camera motion parameters in real time.
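
    The fundamental-matrix step the authors mention can be illustrated with a minimal normalized 8-point estimator (the RANSAC loop and the subsequent essential-matrix decomposition are omitted). This is a generic textbook sketch on synthetic, assumed camera geometry, not the authors' implementation.

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: translate to the centroid and scale so the
    mean distance from it is sqrt(2); returns homogeneous points and T."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0, -scale * centroid[0]],
                  [0, scale, -scale * centroid[1]],
                  [0, 0, 1.0]])
    return (T @ np.column_stack([pts, np.ones(len(pts))]).T).T, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F
    satisfying x2^T F x1 = 0 (inlier-only sketch, no RANSAC)."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # enforce rank 2
    F = T2.T @ F @ T1                            # undo normalization
    return F / np.linalg.norm(F)

# Synthetic two-view geometry (assumed intrinsics and motion, for illustration)
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 24), rng.uniform(-1, 1, 24),
                     rng.uniform(4, 8, 24)])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([0.5, 0.1, 0.05])

p1 = (K @ X.T).T
x1 = p1[:, :2] / p1[:, 2:]
p2 = (K @ (X @ R.T + t).T).T
x2 = p2[:, :2] / p2[:, 2:]
F = eight_point(x1, x2)       # epipolar residuals x2^T F x1 should vanish
```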

  5. Hierarchical Image Gathering Technique for Browsing Surveillance Camera Images

    Microsoft Academic Search

    Wataru Akutsu; Tadasuke Furuya; Hiroko Nakamura Miyamura; Takafumi Saito

    2007-01-01

    We propose an image gathering and display method for efficient browsing of surveillance camera images. Inspecting lengthy image sequences taken by a surveillance camera requires large cost. The proposed method involves generating a still image by gathering the moving parts from image sequences captured by a fixed camera. The gathered images are generated for several intervals

  6. Multiplex imaging with multiple-pinhole cameras

    NASA Technical Reports Server (NTRS)

    Brown, C.

    1974-01-01

    When making photographs in X rays or gamma rays with a multiple-pinhole camera, the individual images of an extended object such as the sun may be allowed to overlap. Then the situation is in many ways analogous to that in a multiplexing device such as a Fourier spectroscope. Some advantages and problems arising with such use of the camera are discussed, and expressions are derived to describe the relative efficacy of three exposure/postprocessing schemes using multiple-pinhole cameras.
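
    The multiplexing idea — overlapping images from many pinholes that are later untangled computationally — can be sketched with a circular-convolution forward model and a correlation decoder. The random binary mask and FFT decoding below are a generic coded-aperture illustration, not the specific exposure/postprocessing schemes compared in the paper.

```python
import numpy as np

def conv2_circ(a, b):
    # Circular 2-D convolution via FFT: each open pinhole shifts a copy
    # of the scene onto the detector, and the copies overlap.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr2_circ(a, b):
    # Circular 2-D cross-correlation: the matched-filter decoder.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

rng = np.random.default_rng(2)
mask = (rng.random((32, 32)) > 0.5).astype(float)   # random multiple-pinhole plate

scene = np.zeros((32, 32))
scene[7, 11] = 1.0                                  # a single bright source

detector = conv2_circ(scene, mask)     # overlapping pinhole images
decoded = corr2_circ(detector, mask)   # correlation peak recovers the source
```

    Because the mask's autocorrelation is sharply peaked at zero lag, correlating the multiplexed detector image with the mask concentrates each source back into a bright spot at its original position.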

  7. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  8. Single-Camera Panoramic-Imaging Systems

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L.; Gilbert, John

    2007-01-01

    Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled- device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.
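
    The geometry of the panoramic imaging optic — a full cylindrical field folded into an annulus on the single CCD — can be illustrated with a simple nearest-neighbour unwrapping. The radii, output width, and orientation convention below are assumptions for illustration, not the PDS processing chain.

```python
import numpy as np

def unwrap_annulus(img, center, r_inner, r_outer, out_w=360):
    """Unwrap the annular image formed by a panoramic imaging optic into
    a rectangular (cylindrical) panorama by sampling along radii;
    nearest-neighbour version for clarity."""
    out_h = int(r_outer - r_inner)
    pano = np.zeros((out_h, out_w), dtype=img.dtype)
    cy, cx = center
    for col in range(out_w):
        theta = 2 * np.pi * col / out_w          # azimuth around the cylinder
        for row in range(out_h):
            r = r_outer - row                    # top of panorama = outer rim
            yy = int(round(cy + r * np.sin(theta)))
            xx = int(round(cx + r * np.cos(theta)))
            if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]:
                pano[row, col] = img[yy, xx]
    return pano
```

    A production version would interpolate bilinearly and calibrate the center and radii from the optic, but the polar-to-rectangular mapping is the essential step.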

  9. The European Photon Imaging Camera on XMM-Newton: The MOS cameras

    Microsoft Academic Search

    M. J. L. Turner; A. Abbey; M. Arnaud; M. Balasini; M. Barbera; E. Belsole; P. J. Bennie; J. P. Bernard; G. F. Bignami; M. Boer; U. Briel; I. Butler; C. Cara; C. Chabaud; R. Cole; A. Collura; M. Conte; A. Cros; M. Denby; P. Dhez; G. Di Coco; J. Dowson; P. Ferrando; S. Ghizzardi; F. Gianotti; C. V. Goodall; L. Gretton; R. G. Griffiths; O. Hainaut; J. F. Hochedez; A. D. Holland; E. Jourdain; E. Kendziorra; A. Lagostina; R. Laine; N. La Palombara; M. Lortholary; D. Lumb; P. Marty; S. Molendi; C. Pigot; E. Poindron; K. A. Pounds; J. N. Reeves; C. Reppin; R. Rothenflug; P. Salvetat; J. L. Sauvageot; D. Schmitt; S. Sembay; A. D. T. Short; J. Spragg; J. Stephen; L. Strüder; A. Tiengo; M. Trifoglio; J. Trümper; S. Vercellone; L. Vigroux; G. Villa; M. J. Ward; S. Whitehead; E. Zonca

    2001-01-01

    The EPIC focal plane imaging spectrometers on XMM-Newton use CCDs to record the images and spectra of celestial X-ray sources focused by the three X-ray mirrors. There is one camera at the focus of each mirror; two of the cameras contain seven MOS CCDs, while the third uses twelve PN CCDs, defining a circular field of view of 30' diameter.

  10. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
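
    The delay-based pairing can be sketched with a small frame buffer: each current frame (right eye) is displayed together with the frame captured a fixed number of steps earlier (left eye). The frame source and delay length are placeholders for the camera feed and translation time.

```python
from collections import deque

def stereo_pairs(frames, delay):
    """Pair each frame with the frame captured `delay` steps earlier,
    emulating the single laterally translated camera: the delayed frame
    serves as the left-eye image, the current frame as the right-eye."""
    buffer = deque(maxlen=delay + 1)
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == delay + 1:
            yield buffer[0], buffer[-1]   # (left-eye, right-eye)

# With integer stand-ins for frames, delay=2 pairs frame t-2 with frame t
pairs = list(stereo_pairs(range(6), delay=2))
```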

  11. Development of gamma ray imaging cameras

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image displayed would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year. 8 refs.

  12. Camera lens adapter magnifies image

    NASA Technical Reports Server (NTRS)

    Moffitt, F. L.

    1967-01-01

    Polaroid Land camera with an illuminated 7-power magnifier adapted to the lens, photographs weld flaws. The flaws are located by inspection with a 10-power magnifying glass and then photographed with this device, thus providing immediate pictorial data for use in remedial procedures.

  13. LROC Observations of Permanently Shadowed Regions: Seeing into the Dark

    NASA Astrophysics Data System (ADS)

    Koeber, S. D.; Robinson, M. S.

    2013-12-01

    Permanently shadowed regions (PSRs) near the lunar poles that receive secondary illumination from nearby Sun-facing slopes were imaged by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC). Typically, secondary lighting is optimal in polar areas around the respective solstices and when the LRO orbit is nearly coincident with the sub-solar point (low spacecraft beta angles). NAC PSR images provide the means to search for evidence of surface frosts and unusual morphologies from ice-rich regolith, and aid in planning potential landing sites for future in-situ exploration. Secondary illumination imaging in PSRs requires NAC integration times typically more than ten times greater than nominal imaging. The increased exposure time results in downtrack smear that decreases the spatial resolution of the NAC PSR images. Most long-exposure NAC images of PSRs were acquired with exposure times of 24.2 ms (1-m by 40-m pixels, sampled to 20 m) and 12 ms (1-m by 20-m pixels, sampled to 10 m). The initial campaign to acquire long-exposure NAC images of PSRs in the north pole region ran from February 2013 to April 2013. Relative to the south polar region, PSRs near the north pole are generally smaller (D<24 km) and located in simple craters. Long-exposure NAC images of PSRs in simple craters are often well illuminated by secondary light reflected from Sun-facing crater slopes during the northern summer solstice, allowing many PSRs to be imaged with the shorter exposure time of 12 ms (resampled to 10 m). With the exception of some craters in Peary crater, most northern PSRs with diameters >6 km were successfully imaged (e.g., Whipple, Hermite A, and Rozhestvenskiy U). The third PSR south polar campaign began in April 2013 and will continue until October 2013. The third campaign will expand previous NAC coverage of PSRs and follow up on discoveries with new images of higher signal-to-noise ratio (SNR), higher resolution, and varying secondary illumination conditions.
This campaign takes an individualized approach to targeting each crater, informed by images from previous campaigns and the Sun's position. Secondary lighting within the PSRs, though somewhat diffuse, arrives at low incidence angles and, coupled with nadir NAC imaging, results in large phase angles. Such conditions tend to reduce albedo contrasts, complicating identification of patchy frost or ice deposits. Within the long-exposure PSR images, a few small craters (D<200 m) with highly reflective ejecta blankets have been identified and interpreted as small fresh impact craters. Sylvester N and Main L are Copernican-age craters with PSRs; NAC images reveal debris flows, boulders, and morphologically fresh interior walls indicative of their young age. The identification of albedo anomalies associated with these fresh craters and debris flows indicates that strong albedo contrasts (~2x) associated with small fresh impact craters can be distinguished in PSRs. Lunar highland material has an albedo of ~0.2, while pure water frost has an albedo of ~0.9. If features in PSRs have an albedo similar to lunar highlands, significant surface frost deposits could result in detectable reflective anomalies in the NAC images. However, no reflective anomalies attributable to frost have thus far been identified in PSRs.

  14. IMAGE-BASED PAN-TILT CAMERA CONTROL IN A MULTI-CAMERA SURVEILLANCE ENVIRONMENT

    E-print Network

    Davis, Larry

    IMAGE-BASED PAN-TILT CAMERA CONTROL IN A MULTI-CAMERA SURVEILLANCE ENVIRONMENT. Ser-Nam Lim, Ahmed Elgammal, and Larry S. Davis. ABSTRACT: In automated surveillance systems with multiple cameras, the system must be able to position the cameras. In a surveillance environment with multiple cameras monitoring a scene, the first task is to position the cameras

  15. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  16. Investigation of Layered Lunar Mare Lava flows through LROC Imagery and Terrestrial Analogs

    NASA Astrophysics Data System (ADS)

    Needham, H.; Rumpf, M.; Sarah, F.

    2013-12-01

    High resolution images of the lunar surface have revealed layered deposits in the walls of impact craters and pit craters in the lunar maria, which are interpreted to be sequences of stacked lava flows. The goal of our research is to establish quantitative constraints and uncertainties on the thicknesses of individual flow units comprising the layered outcrops, in order to model the cooling history of lunar lava flows. The underlying motivation for this project is to identify locations hosting intercalated units of lava flows and paleoregoliths, which may preserve snapshots of the ancient solar wind and other extra-lunar particles, thereby providing potential sampling localities for future missions to the lunar surface. Our approach involves mapping layered outcrops using high-resolution imagery acquired by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), with constraints on flow unit dimensions provided by Lunar Orbiter Laser Altimeter (LOLA) data. We have measured thicknesses of ~2 to >20 m. However, there is considerable uncertainty in the definition of contacts between adjacent units, primarily because talus commonly obscures contacts and/or prevents lateral tracing of the flow units. In addition, flows may have thicknesses or geomorphological complexity at scales approaching the limit of resolution of the data, which hampers distinguishing one unit from another. To address these issues, we have undertaken a terrestrial analog study using WorldView-2 satellite imagery of layered lava sequences on Oahu, Hawaii. These data have a resolution of 0.5 m, comparable to LROC NAC images. The layered lava sequences are first analyzed in ArcGIS to obtain an initial estimate of the number and thicknesses of flow units identified in the images. We next visit the outcrops in the field to perform detailed measurements of the individual units.
We have discovered that the number of flow units identified in the remote sensing data is smaller than in the field analysis, because the resolution of the data precludes identification of subtle flow contacts, and the identified 'units' are in fact multiple compound units. Other factors such as vegetation and shadows may alter the view in the satellite imagery. This means that clarity in the lunar study may also be affected by factors such as lighting angle and the amount of debris overlying the lava sequence. The compilation of field and remote sensing measurements allows us to determine the uncertainty on unit thicknesses, which can be modeled to establish the uncertainty on the calculated depths of penetration of the resulting heat pulse into the underlying regolith. This in turn provides insight into the survivability of extra-lunar particles in paleoregolith layers sandwiched between lava flows.

  17. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies with the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, owing to the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited for clothing penetration, and because this radiation has no harmful ionizing effects it is safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic. Digital THz image processing is a promising and cost-effective way to serve demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion applied to images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under some popular types of clothing.
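
    The abstract does not name the specific enhancement and fusion methods used. Purely as a generic illustration of the two operations it mentions (quality enhancement, then fusion of two registered images), a minimal sketch might look like this; the function names, the linear stretch, and the fixed weight are all illustrative assumptions:

```python
import numpy as np

def stretch(img):
    """Linear contrast stretch to [0, 1]; a basic enhancement step."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def fuse(a, b, w=0.5):
    """Pixel-wise weighted average fusion of two registered images."""
    return w * stretch(a) + (1 - w) * stretch(b)

# Toy 2x2 'THz' and 'visible' frames of the same scene
thz = np.array([[10, 20], [30, 40]])
vis = np.array([[0, 100], [50, 200]])
out = fuse(thz, vis)
```

    Real pipelines for low-resolution passive THz frames would typically add denoising and multi-scale fusion; this sketch only fixes the shape of the problem.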

  18. Camera Trajectory fromWide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are several essential issues for reliable camera trajectory estimation, for instance the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware we use in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. 
We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of image points to 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in its standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames. Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real-time. 
Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling as suggested in to draw 5-tuples from the list of tentative matches ordered ascendingly
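
    The tentative-match construction described above, keeping only mutually closest descriptor pairs, can be sketched as follows; the toy 2-D "descriptors" stand in for the Discrete Cosine Descriptors used in the paper:

```python
import numpy as np

def tentative_matches(desc_a, desc_b):
    """Mutually-closest-pair matching: (i, j) is kept only if j is the
    nearest neighbour of i among desc_b AND i is the nearest neighbour
    of j among desc_a."""
    # Pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)   # best match in B for each descriptor in A
    nn_ba = d.argmin(axis=0)   # best match in A for each descriptor in B
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy descriptors: each row is one feature region's descriptor
a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[4.9, 5.1], [0.1, 0.0]])
matches = tentative_matches(a, b)
```

    The mutual-nearest-neighbour criterion discards one-sided matches cheaply, which matters here because wide baselines rule out proximity assumptions that would otherwise prune the search.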

  19. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  20. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES T Martonen1 and J Schroeter2 1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  1. Imaging of gamma emitters using scintillation cameras

    NASA Astrophysics Data System (ADS)

    Ricard, Marcel

    2004-07-01

    Since their introduction by Hal Anger in the late 1950s, gamma cameras have been widely used in the field of nuclear medicine. The original concept is based on a large field-of-view scintillator optically coupled with an array of photomultiplier tubes (PMTs) in order to locate the positions of interactions inside the crystal. Using a dedicated accessory, such as a parallel-hole collimator, to restrict the field of view to a predefined direction, it is possible to build up an image of the radioactive distribution. In terms of imaging performance, three main characteristics are commonly considered: uniformity, spatial resolution and energy resolution. Major improvements were mainly due to progress in industrial processes for analog electronics, crystal growing and PMT manufacturing. Today's gamma camera is highly digital, from the PMTs to the display. All corrections are applied "on the fly" using up-to-date signal processing techniques. At the same time, significant progress has been achieved in the field of collimators. Finally, two new technologies have been implemented: solid-state detectors such as CdTe or CdZnTe, and pixellated scintillators coupled to photodiodes or position-sensitive photomultiplier tubes. These solutions are particularly well adapted to building dedicated gamma cameras for breast or intraoperative imaging.
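
    In its original analog form, Anger's position logic amounts to a signal-weighted centroid over the PMT array; a minimal sketch with a hypothetical one-dimensional PMT layout (real cameras add energy windowing and nonlinearity corrections):

```python
def anger_position(pmt_xy, signals):
    """Anger logic: event position estimated as the signal-weighted
    centroid of the PMT coordinates."""
    total = sum(signals)
    x = sum(px * s for (px, _), s in zip(pmt_xy, signals)) / total
    y = sum(py * s for (_, py), s in zip(pmt_xy, signals)) / total
    return x, y

# Three PMTs on a line (illustrative geometry, arbitrary units)
pmts = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
x_c, y_c = anger_position(pmts, [1.0, 2.0, 1.0])  # symmetric light spread
x_r, _ = anger_position(pmts, [0.5, 1.0, 2.5])    # light spread shifted right
```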

  2. The painting camera: an abstract rendering system with camera-control images

    E-print Network

    Meadows, Scott Harrison

    2000-01-01

    to be developed. This thesis describes a new method for generating abstract computer images using a simple and intuitive rendering technique. This technique, inspired by cubism, is based on ray tracing and uses images to define camera parameters. These images...

  3. Enhancement of document images from cameras

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Dance, Christopher R.

    1998-04-01

    As digital cameras become cheaper and more powerful, driven by the consumer digital photography market, we anticipate significant value in extending their utility as a general office peripheral by adding a paper scanning capability. The main technical challenges in realizing this new scanning interface are insufficient resolution, blur and lighting variations. We have developed an efficient technique for the recovery of text from digital camera images, which simultaneously treats these three problems, unlike other local thresholding algorithms which do not cope with blur and resolution enhancement. The technique first performs deblurring by deconvolution, and then resolution enhancement by linear interpolation. We compare the performance of a threshold derived from the local mean and variance of all pixel values within a neighborhood with a threshold derived from the local mean of just those pixels with high gradient. We assess performance using OCR error scores.
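
    The first of the two local thresholds compared above (derived from the local mean and variance of all pixels in a neighborhood) is in the spirit of Niblack's method; a minimal sketch with an illustrative window size and k, omitting the deblurring and interpolation stages:

```python
import numpy as np

def niblack_binarize(img, win=3, k=0.2):
    """Dark-text binarization with a local threshold T = mean - k*std,
    where mean and std are taken over a win x win neighbourhood."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')   # replicate edges at the border
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            out[i, j] = img[i, j] < patch.mean() - k * patch.std()
    return out  # True where a pixel is classified as ink

page = np.array([[200, 200, 200],
                 [200,  50, 200],
                 [200, 200, 200]])   # one dark "ink" pixel on bright paper
mask = niblack_binarize(page)
```

    The paper's second variant would instead compute the mean over only high-gradient pixels in the window; the loop structure stays the same.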

  4. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 in the United States. Standard treatment is a highly invasive exploratory neck surgery called "parathyroidectomy". The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10× that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  5. Anger camera image generation with microcomputers

    E-print Network

    Williams, Karl Morgan

    1988-01-01

    -dimensional tomographical marvels of physiological representation and interpretation seen today. As the advancements in instrumentation continue to bring power, speed, and quality to computer systems, greater avenues of application to medical imag... is then called a positron. After a short time, the positron is annihilated and two opposing 511 keV gamma-rays are emitted simultaneously. These two simultaneously opposing gamma-rays are extremely useful in tomographic studies. Due to the advanced camera...

  6. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    We were asked by the court whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays, and watermarking in images used by the camera manufacturer.
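
    The pixel-array noise cue listed above was later formalized in the forensics literature as sensor pattern noise (PRNU) matching; a toy sketch of the idea, in which a simple mean filter stands in for the stronger wavelet denoisers used in practice and all data are synthetic:

```python
import numpy as np

def residual(img, win=3):
    """Noise residual: image minus a locally smoothed version (a crude
    denoiser standing in for the wavelet filters used in practice)."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    sm = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            sm[i, j] = padded[i:i + win, j:j + win].mean()
    return img - sm

def ncc(a, b):
    """Normalized cross-correlation between two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
pattern = rng.normal(size=(8, 8))   # stand-in for one sensor's fixed pattern
# Two images from the "same camera" share the pattern; a third does not
same_cam = ncc(residual(100 + pattern), residual(50 + pattern))
diff_cam = ncc(residual(100 + pattern), residual(100 + rng.normal(size=(8, 8))))
```

    A high residual correlation against a camera's averaged fingerprint supports (but does not by itself prove) a common source; real casework uses many calibration frames and statistical thresholds.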

  7. Rapid Optimization of SPECT Scatter Correction Using Model LROC Observers

    PubMed Central

    Kulkarni, Santosh; Khurd, Parmeshwar; Zhou, Lili; Gindi, Gene

    2010-01-01

    The problem we address is the optimization and comparison of window-based scatter correction (SC) methods in SPECT for maximum a posteriori reconstructions. While sophisticated reconstruction-based SC methods are available, the commonly used window-based SC methods are fast, easy to use, and perform reasonably well. Rather than subtracting a scatter estimate from the measured sinogram and then reconstructing, we use an ensemble approach and model the mean scatter sinogram in the likelihood function. This mean scatter sinogram estimate, computed from satellite window data, is itself inexact (noisy). Therefore two sources of noise, that due to Poisson noise of unscattered photons and that due to the model error in the scatter estimate, are propagated into the reconstruction. The optimization and comparison is driven by a figure of merit, the area under the LROC curve (ALROC), which gauges performance in a signal detection plus localization task. We use model observers to perform the task. This usually entails laborious generation of many sample reconstructions, but in this work, we instead develop a theoretical approach that allows one to rapidly compute ALROC given known information about the imaging system and the scatter correction scheme. A critical step in the theory approach is to predict additional (above that due to the propagated Poisson noise of the primary photons) contributions to the reconstructed image covariance due to scatter (model error) noise. Simulations show that our theory method yields, for a range of search tolerances, LROC curves and ALROC values in close agreement with those obtained using model observer responses from sample reconstructions. This opens the door to rapid comparison of different window-based SC methods and to optimizing the parameters (including window placement and size, and the scatter sinogram smoothing kernel) of the SC method. PMID:20589227
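
    Once LROC operating points are in hand, the figure of merit itself is a simple area computation; a minimal sketch with made-up operating points (the paper's contribution is predicting these points theoretically, not this integration step):

```python
def alroc(fpf, pcl):
    """Area under the LROC curve by the trapezoid rule.
    fpf: increasing false-positive fractions; pcl: probability of correct
    detection-plus-localization at each operating point."""
    area = 0.0
    for k in range(1, len(fpf)):
        area += 0.5 * (pcl[k] + pcl[k - 1]) * (fpf[k] - fpf[k - 1])
    return area

area_toy = alroc([0.0, 0.5, 1.0], [0.0, 0.6, 0.8])  # toy operating points
```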

  8. New insight into lunar impact melt mobility from the LRO camera

    USGS Publications Warehouse

    Bray, Veronica J.; Tornabene, Livio L.; Keszthelyi, Laszlo P.; McEwen, Alfred S.; Hawke, B. Ray; Giguere, Thomas A.; Kattenhorn, Simon A.; Garry, William B.; Rizk, Bashar; Caudill, C.M.; Gaddis, Lisa R.; van der Bogert, Carolyn H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact melts is surprisingly complex. We present evidence for multi-stage influx of impact melt into flow lobes and crater floor ponds. Our volume and cooling time estimates for the post-emplacement melt movements noted in LROC images suggest that new flows can emerge from melt ponds an extended time period after the impact event.

  9. Efficient height measurement method of surveillance camera image

    Microsoft Academic Search

    Joong Lee; Eung-Dae Lee; Hyun-Oh Tark; Jin-Woo Hwang; Do-Young Yoon

    2008-01-01

    As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but only scant detail, such as facial features and clothing, can be obtained due to limited camera performance. Height, however, is relatively insensitive to camera performance. This paper studied a height measurement method using images from a CCTV. The information on the

  10. Fast Camera Imaging of Hall Thruster Ignition

    SciTech Connect

    C.L. Ellison, Y. Raitses and N.J. Fisch

    2011-02-24

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady state operation over 50 μs. The cathode introduces azimuthal asymmetry, which persists for about 30 μs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that the exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  11. Controlling Camera and Lights for Intelligent Image Acquisition and Merging

    E-print Network

    Jenkin, Michael R. M.

    Controlling Camera and Lights for Intelligent Image Acquisition and Merging. Uses the IndiGolog (IG) agent programming language to create an intelligent controller for the controllable lights and camera, and to combine the acquired images. Other research team members: Mark Obsniuk, Andrew German, Wei Xu, Arjun Chopra.

  12. Denighting: Enhancement of nighttime images for a surveillance camera

    Microsoft Academic Search

    Akito Yamasaki; Hidenori Takauji; Shun'ichi Kaneko; Takeo Kanade; Hidehiro Ohki

    2008-01-01

    Nighttime images of a scene from a surveillance camera have lower contrast and higher noise than their corresponding daytime images of the same scene due to low illumination. Denighting is an image enhancement method for improving nighttime images, so that they are closer to those that would have been taken during daytime. The method exploits the fact that background images

  13. The European Photon Imaging Camera on XMM-Newton: The pn-CCD camera

    Microsoft Academic Search

    L. Strüder; U. Briel; K. Dennerl; R. Hartmann; E. Kendziorra; N. Meidinger; E. Pfeffermann; C. Reppin; B. Aschenbach; W. Bornemann; H. Bräuninger; W. Burkert; M. Elender; M. Freyberg; F. Haberl; G. Hartner; F. Heuschmann; H. Hippmann; E. Kastelic; S. Kemmer; G. Kettenring; W. Kink; N. Krause; S. Müller; A. Oppitz; W. Pietsch; M. Popp; P. Predehl; A. Read; K. H. Stephan; D. Stötter; J. Trümper; P. Holl; J. Kemmer; H. Soltau; R. Stötter; U. Weber; U. Weichert; C. von Zanthier; D. Carathanassis; G. Lutz; R. H. Richter; P. Solc; H. Böttcher; M. Kuster; R. Staubert; A. Abbey; A. Holland; M. Turner; M. Balasini; G. F. Bignami; N. La Palombara; G. Villa; W. Buttler; F. Gianini; R. Lainé; D. Lumb; P. Dhez

    2001-01-01

    The European Photon Imaging Camera (EPIC) consortium has provided the focal plane instruments for the three X-ray mirror systems on XMM-Newton. Two cameras with a reflecting grating spectrometer in the optical path are equipped with MOS type CCDs as focal plane detectors (Turner), while the telescope with the full photon flux operates the novel pn-CCD as an imaging X-ray spectrometer.

  14. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images. PMID:18276966

  15. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images (infrared and digital images for indoor scenes, digital images only for outdoor scenes) and then add the reconstructed objects to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method uses different camera settings and explores different properties in the reconstructions of the scenes, including light, color, texture, shape and viewpoint. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depths of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented in Matlab. The technique presented here also lets us simulate short, simple videos by reconstructing a sequence of multiple scenes separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.
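
    The triangulation step with known intrinsic and extrinsic parameters can be sketched with the standard linear (DLT) method; the projection matrices and image points below are toy values, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices K @ [R|t]; x1, x2: image coordinates."""
    # Each view contributes two linear constraints on the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Toy setup: identity intrinsics, second camera translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, np.array([0.0, 0.0]), np.array([-0.2, 0.0]))
```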

  16. Camera and visual veiling glare in HDR images

    Microsoft Academic Search

    John J. McCann; Alessandro Rizzi

    2007-01-01

    High-dynamic-range (HDR) images are superior to conventional images. The experiments in this paper measure camera and human responses to calibrated HDR test targets. We calibrated a 4.3-log-unit test target, with minimal and maximal glare from a changeable surround. Glare is an uncontrolled spread of an image-dependent fraction of scene luminance in cameras and in the eye. We use this standard

  17. Mars Imaging Camera (MIC) on board PLANET-B

    Microsoft Academic Search

    Keiken Ninomiya; Tatsuaki Hashimoto; Akikom Nakamura; Tadashi Mukai; Masato Nakamura; Masahiro Ogasawara; Naoki Yoshizawa; Juro Ishida; Yasuhiko Mizushima; Hiroto Hosoda; Masayo Takano

    1999-01-01

    Mars Imaging Camera (MIC) on board PLANET-B, Japanese Mars mission, is a small, compact and lightweight imager. It features three-color linear CCD aligned with the spacecraft's spin axis and is designed to take two-dimensional images of Mars and its satellites using the spacecraft's spin.The total field of view (FOV) of the camera is 360 degree (around the spin axis) ×

  18. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
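
    The paper's actual combination rule is defined in its text; purely as an illustration of collapsing pre-normalized quality and speed metrics into one benchmarking score, one might write the following, where the metric names and weights are hypothetical:

```python
def benchmark_score(metrics, weights):
    """Weighted average of metrics pre-normalized to [0, 1], higher = better.
    A raw speed metric such as shot-to-shot time must be inverted and
    normalized before entering this combination."""
    assert set(metrics) == set(weights), "every metric needs a weight"
    total = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total

score = benchmark_score(
    {"sharpness": 0.8, "visual_noise": 0.6, "shot_to_shot": 0.5},
    {"sharpness": 2.0, "visual_noise": 1.0, "shot_to_shot": 1.0},
)
```

    The interesting design question, which the paper addresses empirically over 25 phones, is how to choose the normalization and weights so that the single score tracks perceived camera performance.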

  19. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  20. Control Camera and Light Source Positions using Image Gradient Information

    E-print Network

    Paris-Sud XI, Université de

    Control Camera and Light Source Positions using Image Gradient Information. Eric Marchand. Abstract-- In this paper, we propose an original approach to control camera position and/or lighting conditions. Within a visual servoing framework, we propose solutions to two different issues: maximizing the brightness of the scene

  1. Thermal analysis of the ultraviolet imager camera and electronics

    NASA Technical Reports Server (NTRS)

    Dirks, Gregory J.

    1991-01-01

    The Ultraviolet Imaging experiment has undergone design changes that necessitate updating the reduced thermal models (RTMs) for both the Camera and Electronics. In addition, there are several mission scenarios that need to be evaluated in terms of the thermal response of the instruments. The impact of these design changes and mission scenarios on the thermal performance of the Camera and Electronics assemblies is discussed.

  2. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  3. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot auto-focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  4. Recording Images Using a Simple Pinhole Camera

    NSDL National Science Digital Library

    John Eichinger

    2009-05-30

    In this lesson, students develop and expand their observational skills and technological understanding by building and operating a pinhole camera. The interdisciplinary connections are in the realm of application in this motivating activity. The lesson pr

  5. Lucas-Kanade image registration using camera parameters

    NASA Astrophysics Data System (ADS)

    Cho, Sunghyun; Cho, Hojin; Tai, Yu-Wing; Moon, Young Su; Cho, Junguk; Lee, Shihwa; Lee, Seungyong

    2012-01-01

    The Lucas-Kanade algorithm and its variants have been successfully used for numerous works in computer vision, which include image registration as a component in the process. In this paper, we propose a Lucas-Kanade based image registration method using camera parameters. We decompose a homography into camera intrinsic and extrinsic parameters, and assume that the intrinsic parameters are given, e.g., from the EXIF information of a photograph. We then estimate only the extrinsic parameters for image registration, considering two types of camera motions, 3D rotations and full 3D motions with translations and rotations. As the known information about the camera is fully utilized, the proposed method can perform image registration more reliably. In addition, as the number of extrinsic parameters is smaller than the number of homography elements, our method runs faster than the Lucas-Kanade based registration method that estimates a homography itself.
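
    For the pure-rotation case described above, the homography factors as H = K R K⁻¹, which is why only the three rotation parameters need to be estimated when the intrinsics K are known (e.g. from EXIF). A small numerical sketch with toy intrinsics:

```python
import numpy as np

def rot_homography(K, R):
    """Homography induced by a pure camera rotation: H = K @ R @ K^-1."""
    return K @ R @ np.linalg.inv(K)

def rot_z(theta):
    """Rotation about the optical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rot_homography(K, rot_z(0.1))
# A rotation about the optical axis leaves the principal point fixed
p = H @ np.array([320.0, 240.0, 1.0])
```

    Parametrizing H this way reduces the search from eight homography elements to three rotation angles (or six extrinsic parameters in the full-motion case), which is the source of the reliability and speed gains the abstract reports.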

  6. Multi-camera: interactive rendering of abstract digital images

    E-print Network

    Smith, Jeffrey Statler

    2004-09-30

    MULTI-CAMERA: INTERACTIVE RENDERING OF ABSTRACT DIGITAL IMAGES. A Thesis by JEFFREY STATLER SMITH. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE... December 2003. Major Subject: Visualization Sciences.

  7. An Evaluation of Iterative Reconstruction Strategies on Mediastinal Lesion Detection Using Hybrid Ga-67 SPECT Images

    PubMed Central

    Pereira, N. F.; Gifford, H. C.; Pretorius, P. H.; Farncombe, T.; Smyczynski, M.; Licho, R.; Schneider, P.; King, M. A.

    2009-01-01

    Hybrid LROC studies can be used to assess the impact of reconstruction strategies more realistically than studies constructed with digital phantoms. This is because hybrid data provides the background variability that is present in clinical imaging, as well as control over the critical imaging parameters required to conduct meaningful tests. Hybrid data is obtained by adding Monte Carlo simulated lesions to disease-free clinical projection data. Because Ga-67 is a particularly challenging radionuclide for imaging, we use Ga-67 hybrid SPECT data to study the effectiveness of the various correction strategies developed to account for degradations in SPECT imaging. Our data was obtained using a GE VG dual-detector SPECT/CT camera. After determining a target lesion contrast, we conduct pilot LROC studies to obtain a near-optimal set of reconstruction parameters for each strategy individually. These near-optimal parameters are then used to reconstruct the final evaluation study sets. All LROC study results reported here were obtained employing human observers only. We use the final LROC study results to assess the impact of attenuation compensation, scatter compensation, and detector resolution compensation on data reconstructed with the RBI-EM algorithm. We also compare these with FBP reconstructions of the same dataset. Our experiment indicates an improvement in detection accuracy as the various degradations inherent in the image acquisition process are compensated for in the reconstruction process. PMID:19169427

  8. Mars Global Surveyor Mars Orbiter Camera Captioned Image Releases

    NSDL National Science Digital Library

    Malin Space Science Systems

    This Malin Space Science Systems website features captioned image releases from the Mars Orbiter Camera (MOC) on Mars Global Surveyor. Images are grouped by release date and by topic (e.g., volcanoes, craters, gullies, movies and animations, 3-D stereo pictures). The captions are provided for context, and most images can be downloaded at a variety of resolutions.

  9. Onboard image compression for the HST Advanced Camera for Surveys

    E-print Network

    On-board image compression for the HST Advanced Camera for Surveys. Richard L. White and Ira. The Advanced Camera for Surveys (ACS) produces very large 4096 × 4096 pixel images. We will have on-board image compression to reduce the volume of data transmitted to the ground. This is the first time on-board compression has been included in a Hubble Space Telescope

  10. Progress in Camera-Based Document Image Analysis

    Microsoft Academic Search

    David S. Doermann; Jian Liang; Huiping Li

    2003-01-01

    The increasing availability of high-performance, low-priced, portable digital imaging devices has created a tremendous opportunity for supplementing traditional scanning in document image acquisition. Digital cameras, whether attached to cellular phones or PDAs or used as standalone still or video devices, are highly mobile and easy to use; they can capture images of any kind of document, including very thick books, historical

  11. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and deliver a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  12. Cancer imaging—making the most of your gamma camera

    PubMed Central

    Miles, K A

    2004-01-01

    As MRI threatens the use of bone scintigraphy for skeletal metastases and 18F-fluorodeoxyglucose positron emission tomography (18FDG-PET) emerges as the main focus in nuclear oncology, the future role of the gamma camera in cancer imaging appears unclear. However, there is a range of pre-existing conventional gamma camera techniques that have incremental benefit over CT and other structural imaging techniques, but are yet to be fully exploited in the care of cancer patients. This article reviews some of the more advanced conventional nuclear medicine techniques for cancer imaging. Often gamma camera techniques perform close to 18FDG-PET or provide complementary information. Where 18FDG-PET is diagnostically superior, the incremental cost-effectiveness gain of 18FDG-PET over conventional gamma camera techniques has not always been fully evaluated. PMID:18215970

  13. Digital Image Forensics for Identifying Computer Generated and Digital Camera Images

    Microsoft Academic Search

    Sintayehu Dehnie; Husrev T. Sencar; Nasir D. Memon

    2006-01-01

    We describe a digital image forensics technique to distinguish images captured by a digital camera from computer generated images. Our approach is based on the fact that image acquisition in a digital camera is fundamentally different from the generative algorithms deployed by computer generated imagery. This difference is captured in terms of the properties of the residual image

  14. [New medical imaging based on electron tracking Compton camera (ETCC)].

    PubMed

    Tanimori, Toru; Kubo, Hidetoshi; Kabuki, Shigeto; Kimura, Hiroyuki

    2012-01-01

    We have developed an Electron-Tracking Compton Camera (ETCC) for medical imaging, owing to its wide energy dynamic range (200-1,500 keV) and wide field of view (FOV, 3 sr). This camera has the potential to support the development of new imaging reagents. We have carried out several imaging reagent studies as examples: (1) simultaneous 18F-FDG and 131I-MIBG imaging for double clinical tracer imaging, and (2) imaging of minerals (Mn-54, Zn-65, Fe-59) in mice and plants. In addition, the ETCC has the potential for real-time monitoring of the Bragg peak location by imaging prompt gamma rays during beam therapy. We carried out a water phantom experiment using a 140 MeV proton beam and obtained images of both 511 keV and high-energy gamma rays (800-2,000 keV); a better correlation of the latter image with the Bragg peak was observed. Another potential of the ETCC is to reconstruct a 3D image using only a one-head camera, without rotation of either the target or the camera. Good 3D images of a thyroid gland phantom and a tumor-bearing mouse were observed. To advance these features toward practical use, we are improving all components and will then construct a multi-head ETCC system. PMID:24592680

  15. Imaging of gamma emitters using scintillation cameras

    Microsoft Academic Search

    Marcel Ricard

    2004-01-01

    Since their introduction by Hal Anger in the late 1950s, gamma cameras have been widely used in the field of nuclear medicine. The original concept is based on a large-field-of-view scintillator optically coupled with an array of photomultiplier tubes (PMTs) in order to locate the position of interactions inside the crystal. Using a

  16. Multiview: a novel multispectral IR imaging camera

    NASA Astrophysics Data System (ADS)

    Soel, Michael A.; Rudman, Stanley; Ryan, Robert; Fonneland, Nils J.; Milano, Steve J.

    1997-06-01

    The Surveillance Sciences Directorate of the Northrop Grumman Advanced Systems and Technology organization is developing a novel multispectral IR camera known as Multiview. This prototype system is capable of simultaneously acquiring 4-color SWIR/MWIR 2D imagery that is both spatially and temporally registered, utilizing a single 256×256 HgCdTe snapshot IR FPA capable of frame rates in excess of 240 Hz. The patented design offers an extremely compact package that contains the entire optomechanical assembly (lenses, interchangeable filters, and cold shield) in less than 3 in³ of volume. The unique imagery collected with this camera has the potential to significantly improve the effectiveness of clutter suppression algorithms, multi-color target detection, and target-background discrimination for a wide variety of mission scenarios. This paper describes the key aspects of the Multiview prototype camera design and operation. Multiview's ability to dynamically manage flux imbalances between the four subbands is discussed. Radiometric performance predictions are presented along with laboratory validation of many of these performance metrics. Several examples of field-collected imagery are shown, including transient rocket plume data measured at a 240 Hz sample rate. The importance and utility of spatio-temporal multi-band imagery is also discussed.

  17. A fast, automatic camera image stabilization benchmarking scheme

    NASA Astrophysics Data System (ADS)

    Yu, Jun; Craver, Scott

    2012-01-01

    While image stabilization (IS) has become a default functionality for most digital cameras, there is a lack of automatic IS evaluation schemes. Most publicly known camera IS reviews either require human visual assessment or resort to some generic blur metric. The former is slow and inconsistent, and the latter may not scale easily with respect to resolution and exposure variations when comparing different cameras. We propose a histogram-based automatic IS evaluation scheme, which employs a white noise pattern as the shooting target. It produces accurate and consistent IS benchmarks very quickly.

  18. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    SciTech Connect

    Ralph James

    2009-10-27

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  19. Acquisition and evaluation of radiography images by digital camera.

    PubMed

    Cone, Stephen W; Carucci, Laura R; Yu, Jinxing; Rafiq, Azhar; Doarn, Charles R; Merrell, Ronald C

    2005-04-01

    To determine applicability of low-cost digital imaging for different radiographic modalities used in consultations from remote areas of the Ecuadorian rainforest with limited resources, both medical and financial. Low-cost digital imaging, consisting of hand-held digital cameras, was used for image capture at a remote location. Diagnostic radiographic images were captured in Ecuador by digital camera and transmitted to a password-protected File Transfer Protocol (FTP) server at VCU Medical Center in Richmond, Virginia, using standard Internet connectivity with standard security. After capture and subsequent transfer of images via low-bandwidth Internet connections, attending radiologists in the United States compared diagnoses to those from Ecuador to evaluate quality of image transfer. Corroborative diagnoses were obtained with the digital camera images for greater than 90% of the plain film and computed tomography studies. Ultrasound (U/S) studies demonstrated only 56% corroboration. Images of radiographs captured utilizing commercially available digital cameras can provide quality sufficient for expert consultation for many plain film studies for remote, underserved areas without access to advanced modalities. PMID:15857253

  20. Sub-100 g uncooled thermal imaging camera design

    Microsoft Academic Search

    Alistair Brown

    2008-01-01

    There are many applications for thermal imaging systems where low weight, high performance, and high durability are at a premium. These include UAV systems, future warrior programs, and thermal weapon sights. Thermal imaging camera design is restricted by a number of external constraints, including detector packaging, detector performance, and optical design. This paper describes how, by combining the latest 25 µm pitch

  1. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  2. ProxiScan?: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2010-01-08

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  3. Wide-Angle, Wide-Band Camera for Remote Imaging

    NASA Technical Reports Server (NTRS)

    Atcheson, P. D.

    1985-01-01

    Improved ultraviolet-to-infrared camera design combines high resolution and relatively wide field of view in remote-imaging system. Although design intended for satellite-borne system to give information on such Earth features as vegetation, pollution, and land formation mineral deposits, optical principle also useful in ground-based or airborne high-resolution television for imaging objects at great distances.

  4. 2D\\/3D image (facial) comparison using camera matching

    Microsoft Academic Search

    Mirelle I. M. Goos; Ivo B. Alberink; Arnout C. C. Ruifrok

    2006-01-01

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In case of a cooperating suspect, a 3D image may be taken using e.g. a laser scanning

  5. First Halley multicolour camera imaging results from Giotto

    Microsoft Academic Search

    H. U. Keller; C. Arpigny; C. Barbieri; R. M. Bonnet; S. Cazes; M. Coradini; C. B. Cosmovici; W. A. Delamere; W. F. Huebner; D. W. Hughes; C. Jamar; D. Malaise; H. J. Reitsema; H. U. Schmidt; W. K. H. Schmidt; P. Seige; F. L. Whipple; K. Wilhelm

    1986-01-01

    The first imaging results from the Halley Multicolour Camera during the Giotto fly-by of comet Halley provide images centred on the brightest part of the inner coma which show the silhouette of a large, solid and irregularly shaped cometary nucleus and jet-like dust activity visible in reflected sunlight. The nucleus is at least 15 km long and ≈10 km wide;

  6. Mars Global Surveyor Mars Orbiter Camera Image Gallery

    NSDL National Science Digital Library

    Malin Space Science Systems

    This site from Malin Space Science Systems provides access to all of the images acquired by the Mars Orbiter Camera (MOC) during the Mars Global Surveyor mission through March 2005. MOC consists of several cameras: A narrow angle system that provides grayscale high resolution views of the planet's surface (typically, 1.5 to 12 meters/pixel), and red and blue wide angle cameras that provide daily global weather monitoring, context images to determine where the narrow angle views were actually acquired, and regional coverage to monitor variable surface features such as polar frost and wind streaks. Ancillary data for each image is provided and instructions regarding gallery usage are also available on the site.

  7. CMOS Image Sensors: Electronic Camera On A Chip

    NASA Technical Reports Server (NTRS)

    Fossum, E. R.

    1995-01-01

    Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.

  8. Wide-Angle, Reflective Strip-Imaging Camera

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H.

    1992-01-01

    Proposed camera images thin, striplike portion of field of view of 180 degrees wide. Hemispherical concave reflector forms image onto optical fibers, which transfers it to strip of photodetectors or spectrograph. Advantages include little geometric distortion, achromatism, and ease of athermalization. Uses include surveillance of clouds, coarse mapping of terrain, measurements of bidirectional reflectance distribution functions of aerosols, imaging spectrometry, oceanography, and exploration of planets.

  9. Portal imaging with flat-panel detector and CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.

    1997-07-01

    This paper provides a comparison of the imaging parameters of two portal imaging systems at 6 MV: a flat-panel detector and a CCD-camera based portal imaging system. Measurements were made of the signal and noise, and consequently of the signal-to-noise ratio per pixel, as a function of exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio higher than that observed with the CCD-camera based portal imaging system. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The paper also presents data on the screen's photon gain (the number of light photons per interacting x-ray photon), as well as on the magnitude of the Swank noise (which describes fluctuation in the screen's photon gain). Images of a Las Vegas-type aluminum contrast-detail phantom, located at the iso-center, were generated at an exposure of 1 MU. The CCD-camera based system permits detection of aluminum holes of 0.01194 cm diameter and 0.228 mm depth, while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD camera and generate 'salt and pepper' type noise, which interferes severely with attempts to obtain accurate estimates of the image noise.
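Rank-order filtering of the kind mentioned in this abstract is a sliding-window order statistic; a 3×3 median filter is the classic choice for suppressing isolated salt-and-pepper outliers such as direct hits. A minimal numpy sketch (illustrative only, not the authors' processing chain; the image values are made up):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter via edge-padded shifted stacks (numpy only)."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

# Flat field with isolated "direct hit" outliers (hypothetical values).
img = np.full((16, 16), 100.0)
img[4, 7] = 4000.0   # bright hit from a photon striking the CCD directly
img[9, 2] = 0.0      # dead-pixel-style dark outlier
clean = median_filter3(img)
```

Because each 3×3 window contains at most one outlier, the median restores the background value at every pixel while leaving extended structures (which fill most of a window) largely intact.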

  10. CCD camera response to diffraction patterns simulating particle images.

    PubMed

    Stanislas, M; Abdelsalam, D G; Coudert, S

    2013-07-01

    We present a statistical study of CCD (or CMOS) camera response to small images. Diffraction patterns simulating particle images of a size around 2-3 pixels were experimentally generated and characterized using three-point Gaussian peak fitting, currently used in particle image velocimetry (PIV) for accurate location estimation. Based on this peak-fitting technique, the bias and RMS error between locations of simulated and real images were accurately calculated by using a homemade program. The influence of the intensity variation of the simulated particle images on the response of the CCD camera was studied. The experimental results show that the accuracy of the position determination is very good and brings attention to superresolution PIV algorithms. Some tracks are proposed in the conclusion to enlarge and improve the study. PMID:23842270
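The three-point Gaussian fit referenced above has a closed form: for intensity samples I₋₁, I₀, I₊₁ around the brightest pixel, the sub-pixel offset is δ = (ln I₋₁ − ln I₊₁) / (2(ln I₋₁ − 2 ln I₀ + ln I₊₁)). This estimator is standard in PIV; the synthetic particle image below is an assumption for demonstration:

```python
import numpy as np

def gauss3_offset(im1, i0, ip1):
    """Sub-pixel peak offset from three samples across a Gaussian spot.
    Exact when the samples lie on a noise-free Gaussian."""
    l = np.log([im1, i0, ip1])
    return (l[0] - l[2]) / (2.0 * (l[0] - 2.0 * l[1] + l[2]))

# Synthetic particle image: Gaussian of width sigma centered at x0 = 0.3 px.
x0, sigma = 0.3, 1.1
intensity = lambda x: np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2))
delta = gauss3_offset(intensity(-1.0), intensity(0.0), intensity(1.0))
```

With real CCD data the samples are noisy and quantized, which is exactly the bias and RMS error the study measures; on ideal Gaussian samples the recovered offset equals x0.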

  11. Air Pollution Determination Using a Surveillance Internet Protocol Camera Images

    NASA Astrophysics Data System (ADS)

    Chow Jeng, C. J.; Hwee San, Hslim; Matjafri, M. Z.; Abdullah, Abdul, K.

    Air pollution has long been a problem in the industrial nations of the West. It has now become an increasing source of environmental degradation in the developing nations of east Asia. The Malaysian government has built a network to monitor air pollution, but the cost of such networks is high and limits knowledge of pollutant concentrations to specific points in the cities. A methodology based on a surveillance internet protocol (IP) camera for the determination of air pollution concentrations is presented in this study. The objective of this study was to test the feasibility of using IP camera data for estimating real-time particulate matter of size less than 10 micron (PM10) on the campus of USM. The proposed PM10 retrieval algorithm, derived from atmospheric optical properties, was employed in the present study. In situ data sets of PM10 measurements and sun radiation measurements at the ground surface were collected simultaneously with the IP camera images using a DustTrak meter and a handheld spectroradiometer, respectively. The digital images were separated into three bands, namely red, green, and blue, for multispectral algorithm calibration. The digital numbers (DN) of the IP camera images were converted into radiance and reflectance values. After that, the reflectance recorded by the digital camera was subtracted by the reflectance of the known surface to obtain the reflectance caused by the atmospheric components. The atmospheric reflectance values were used for regression analysis. Regression technique was employed to determine suitable
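The regression step described above can be sketched as an ordinary least-squares fit of PM10 against the three-band atmospheric reflectances. The synthetic calibration data and coefficients below are hypothetical stand-ins for the DustTrak/spectroradiometer measurements, used only to show the shape of the calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-sample atmospheric reflectance in red, green, blue bands.
refl = rng.uniform(0.05, 0.35, size=(60, 3))
# Made-up "true" model coefficients: intercept a0 and one slope per band.
true_coeffs = np.array([12.0, 310.0, -140.0, 95.0])
pm10 = true_coeffs[0] + refl @ true_coeffs[1:]   # ug/m^3, noise-free demo

# Multispectral algorithm of the form PM10 = a0 + a_r*R_r + a_g*R_g + a_b*R_b,
# fitted by least squares against the co-located PM10 readings.
A = np.column_stack([np.ones(len(refl)), refl])
coeffs, *_ = np.linalg.lstsq(A, pm10, rcond=None)
```

In a real calibration the PM10 readings carry measurement noise, so the fitted coefficients approximate rather than reproduce the generating model; the noise-free demo recovers them exactly.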

  12. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its hash to be totally different from the secure hash.
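The sign-then-verify flow in this patent abstract can be sketched with a hash plus a public-key signature. The toy RSA parameters below, and the reduction of the SHA-256 digest modulo the tiny modulus, are illustrative assumptions only; a real camera would embed a full-size key in secure hardware:

```python
import hashlib

# Toy RSA key pair for illustration (p=61, q=53); not remotely secure.
N, E, D = 3233, 17, 2753   # modulus, public exponent, private exponent

def image_hash(data: bytes) -> int:
    # SHA-256 digest of the image file, reduced mod N so the toy key can sign it.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(data: bytes) -> int:
    """In-camera step: hash the image file, encrypt the hash with the private key."""
    return pow(image_hash(data), D, N)

def verify(data: bytes, signature: int) -> bool:
    """Authentication step: decrypt the signature with the public key and
    compare against a freshly computed hash of the file. Any alteration of
    the file changes the hash and, with overwhelming probability, fails."""
    return pow(signature, E, N) == image_hash(data)

image_file = b"raw sensor data"   # stand-in for the recorded image file
sig = sign(image_file)
```

The security of the scheme rests on the signature being forgeable only by the holder of the private key: anyone can verify with the public key, but no one else can produce a signature that decrypts to the file's hash.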

  13. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.

  14. Removing Image Artifacts Due to Dirty Camera Lenses and Thin Occluders Columbia University

    E-print Network

    Nayar, Shree K.

    Removing Image Artifacts Due to Dirty Camera Lenses and Thin Occluders. Jinwei Gu, Columbia University. The lenses of digital cameras, or the front windows of security cameras, often accumulate various types of contaminants, for example on outdoor security cameras, underwater cameras, or covert surveillance cameras behind a fence.

  15. Model observers to predict human performance in LROC studies of SPECT reconstruction using anatomical priors

    NASA Astrophysics Data System (ADS)

    Lehovich, Andre; Gifford, Howard C.; King, Michael A.

    2008-03-01

    We investigate the use of linear model observers to predict human performance in a localization ROC (LROC) study. The task is to locate gallium-avid tumors in simulated SPECT images of a digital phantom. Our study is intended to find the optimal strength of smoothing priors incorporating various degrees of anatomical knowledge. Although humans reading the images must perform a search task, our models ignore search by assuming the lesion location is known. We use the area under the model ROC curve to predict the human area under the LROC curve. We used three models: the non-prewhitening matched filter (NPWMF), the channelized non-prewhitening observer (CNPW), and the channelized Hotelling observer (CHO). All models have access to noise-free reconstructions, which are used to compute the signal template. The NPWMF model does a poor job of predicting human performance. The CNPW and CHO models do a somewhat better job, but still do not qualitatively capture the human results. None of the models accurately predicts the smoothing strength which maximizes human performance.
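A channelized Hotelling observer of the kind named above computes a linear template from channel-space statistics of signal-present and signal-absent images. The Gaussian toy data and random channel matrix below are assumptions for illustration, not the study's simulated SPECT reconstructions (real CHO channels are typically structured, e.g. difference-of-Gaussian profiles):

```python
import math
import numpy as np

rng = np.random.default_rng(7)
npix, nchan, ntrain = 64, 6, 500

channels = rng.normal(size=(npix, nchan))        # stand-in channel profiles
signal = np.zeros(npix)
signal[npix // 2] = 2.0                          # known-location lesion

absent = rng.normal(size=(ntrain, npix))         # signal-absent images
present = absent + signal                        # signal-present images

va, vp = absent @ channels, present @ channels   # channel outputs
dv = vp.mean(axis=0) - va.mean(axis=0)           # mean signal in channel space
Kv = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
template = np.linalg.solve(Kv, dv)               # Hotelling template
dprime = math.sqrt(dv @ template)                # observer SNR
auc = 0.5 * (1.0 + math.erf(dprime / 2.0))       # AUC for Gaussian outputs
```

The AUC computed this way is what such studies compare against the human area under the LROC curve; working in a handful of channels rather than full pixel space is what makes the covariance inversion tractable.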

  16. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are becoming more and more a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  17. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  18. Multiexposure and multifocus image fusion with multidimensional camera shake compensation

    NASA Astrophysics Data System (ADS)

    Gomez, Alexis Lluis; Saravi, Sara; Edirisinghe, Eran A.

    2013-10-01

    Multiexposure image fusion algorithms are used for enhancing the perceptual quality of an image captured by sensors of limited dynamic range. This is achieved by rendering a single scene based on multiple images captured at different exposure times. Similarly, multifocus image fusion is used when the limited depth of focus on a selected focus setting of a camera results in parts of an image being out of focus. The solution adopted is to fuse together a number of multifocus images to create an image that is focused throughout. A single algorithm that can perform both multifocus and multiexposure image fusion is proposed. This algorithm is a new approach in which a set of unregistered multiexposure/focus images is first registered before being fused to compensate for the possible presence of camera shake. The registration of images is done via identifying matching key-points in constituent images using scale invariant feature transforms. The random sample consensus algorithm is used to identify inliers of SIFT key-points removing outliers that can cause errors in the registration process. Finally, the coherent point drift algorithm is used to register the images, preparing them to be fused in the subsequent fusion stage. For the fusion of images, a new approach based on an improved version of a wavelet-based contourlet transform is used. The experimental results and the detailed analysis presented prove that the proposed algorithm is capable of producing high-dynamic range (HDR) or multifocus images by registering and fusing a set of multiexposure or multifocus images taken in the presence of camera shake. Further, comparison of the performance of the proposed algorithm with a number of state-of-the art algorithms and commercial software packages is provided. 
In particular, our literature review has revealed that this is one of the first attempts in which the compensation of camera shake, a practical problem very likely to arise when capturing HDR images with handheld devices, has been addressed as part of a multifocus and multiexposure image enhancement system.
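The RANSAC inlier-selection step described in the abstract can be sketched as follows. This is a toy stand-in that estimates a pure 2-D translation between matched key-points; the paper estimates a full registration from SIFT matches and refines it with coherent point drift, and the function name and parameters here are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Pick inlier matches with RANSAC, here for a pure 2-D translation.

    Toy stand-in for the paper's inlier-selection step on SIFT matches.
    `src`/`dst` are (N, 2) arrays of matched key-point coordinates.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]                   # candidate translation
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the translation on all inliers of the best candidate
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

RANSAC's robustness comes from scoring each candidate only by its inlier count, so gross mismatches never contaminate the final least-squares refit.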

  19. Spatial calibration of full stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2014-05-01

    Objective and background: We present a new method for the calibration of Bossa Nova Technologies' full-Stokes, passive polarization imaging camera SALSA. The SALSA camera is a Division of Time Imaging Polarimeter. It uses custom-made Ferroelectric Liquid Crystals (FLCs) mounted directly in front of the camera's CCD. The regular calibration process, based on Data Reduction Matrix (DRM) calculation, assumes perfect spatial uniformity of the FLC. However, the alignment of FLC molecules can be disturbed by external constraints such as mechanical stress from the fixture, temperature variations and humidity. This disarray of the FLC molecule alignment appears as spatial non-uniformity. With typical DRM condition numbers of 2 to 5, the resulting DOLP and DOCP variations over the field of view can reach 10%. Spatial non-uniformity of commercially available FLC products is the limiting factor for achieving reliable performance over the camera's whole field of view. We developed a field calibration technique based on mapping the CCD into areas of interest, then applying the DRM calculations to those individual areas. Results: First, we provide general background on the SALSA camera's technology, its performance and limitations. A detailed analysis of commercially available FLCs is described, in particular the influence of spatial non-uniformity on the Stokes parameters. Then, the new calibration technique is presented. Several configurations and parameters are tested: even division of the CCD into square-shaped regions, the number of regions, and adaptive regions. Finally, the spatial DRM "stitching" process is described, especially for live calculation and display of Stokes parameters.
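The per-region idea can be sketched as follows: divide the sensor into blocks, invert each block's data-reduction matrix separately, and "stitch" the resulting Stokes images. All sizes, the synthetic frames, and the perturbed-identity matrices below are placeholders, not the SALSA camera's actual calibration data:

```python
import numpy as np

H_PIX, W_PIX, BLK = 256, 256, 64     # sensor size and region size (illustrative)
rng = np.random.default_rng(0)

# Four raw frames, one per FLC state (synthetic stand-in data).
frames = rng.uniform(0.5, 1.0, size=(4, H_PIX, W_PIX))

def region_drm(i, j):
    """Hypothetical per-region 4x4 data-reduction matrix (I = A @ S):
    identity plus a small perturbation standing in for FLC non-uniformity."""
    return np.eye(4) + 0.01 * rng.standard_normal((4, 4))

stokes = np.empty((4, H_PIX, W_PIX))
for i in range(0, H_PIX, BLK):
    for j in range(0, W_PIX, BLK):
        A_inv = np.linalg.pinv(region_drm(i, j))        # invert per region
        block = frames[:, i:i+BLK, j:j+BLK].reshape(4, -1)
        stokes[:, i:i+BLK, j:j+BLK] = (A_inv @ block).reshape(4, BLK, BLK)

S0, S1, S2, S3 = stokes
dolp = np.sqrt(S1**2 + S2**2) / S0    # degree of linear polarization per pixel
```

Because each block uses its own inverted matrix, a spatially varying FLC response no longer biases the Stokes parameters computed at the edges of the field of view.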

  20. Space-Variant Restoration of Images Degraded by Camera Motion Blur

    Microsoft Academic Search

    Michal Sorel; Jan Flusser

    2008-01-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the

  1. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  2. Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera.

    PubMed

    Shaw, Joseph; Nugent, Paul; Pust, Nathan; Thurairajah, Brentha; Mizutani, Kohei

    2005-07-25

    An uncooled microbolometer-array thermal infrared camera has been incorporated into a remote sensing system for radiometric sky imaging. The radiometric calibration is validated and improved through direct comparison with spectrally integrated data from the Atmospheric Emitted Radiance Interferometer (AERI). With the improved calibration, the Infrared Cloud Imager (ICI) system routinely obtains sky images with radiometric uncertainty less than 0.5 W/(m² sr) for extended deployments in challenging field environments. We demonstrate the infrared cloud imaging technique with still and time-lapse imagery of clear and cloudy skies, including stratus, cirrus, and wave clouds. PMID:19498585

  3. A compact gamma camera for biological imaging

    SciTech Connect

    Bradley, E.L.; Cella, J.; Majewski, S.; Popov, V.; Jianguo Qian; Saha, M.S.; Smith, M.F.; Weisenberger, A.G.; Welsh, R.E.

    2006-02-01

    A compact detector, sized particularly for imaging a mouse, is described. The active area of the detector is approximately 46 mm × 96 mm. Two flat-panel Hamamatsu H8500 position-sensitive photomultiplier tubes (PSPMTs) are coupled to a pixellated NaI(Tl) scintillator which views the animal through a copper-beryllium (CuBe) parallel-hole collimator specially designed for ¹²⁵I. Although the PSPMTs have insensitive areas at their edges and there is a physical gap, corrections for scintillation light collection at the junction between the two tubes result in a uniform response across the entire rectangular area of the detector. The system described has been developed to optimize both sensitivity and resolution for in-vivo imaging of small animals injected with iodinated compounds. We demonstrate an in-vivo application of this detector, particularly to SPECT, by imaging mice injected with approximately 10-15 μCi of ¹²⁵I.

  4. Sub-100g uncooled thermal imaging camera design

    NASA Astrophysics Data System (ADS)

    Brown, Alistair

    2008-10-01

    There are many applications for thermal imaging systems where low weight, high performance and high durability are at a premium. These include UAV systems, future warrior programs and thermal weapon sights. Thermal imaging camera design is restricted by a number of external constraints, including detector packaging, detector performance and optical design. This paper describes how, by combining the latest 25 µm pitch detector technology, novel optical design and shutter-less image processing, a high-resolution imager with a system weight of under 100 g can be achieved. Recently developed detectors have low-mass vacuum packages; in this example a 384x288, 25 µm un-cooled microbolometer weighs less than 25 g. By comparison, earlier 35 µm and 50 µm devices were in the region of 40 g. Where cameras are used in harsh environments, mechanical shutters present both a reliability issue and additional weight. The low-weight camera utilises Xti Shutter-less technology to generate high quality images without the need for any form of mechanical shutter; the resulting camera has no moving parts. Lenses for Long Wave Infrared (LWIR) thermal imaging are typically manufactured using Germanium (Ge) elements. These lenses tend to be designed with f/1.0 apertures and as a result add significant weight to the design. Thanks to the smaller detector pitch and system sensitivity, a lens has been designed with a focal length of 14.95 mm at f/1.3, where the mass of the optical components is 9 g. The final optical assembly, including passive athermalisation, has a mass of no more than 15 g.

  5. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal, photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS and inertial navigation system, as well as an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame rate of about one frame per second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost, commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric; that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes, due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. 
Geo-referencing of the images is performed by the Applanix navigation system. Our new method enables 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
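The displacement-to-velocity step can be illustrated with a minimal sketch; the object-space coordinates and the one-year epoch separation below are invented for illustration:

```python
import numpy as np

# Object-space positions (metres) of the same interest points at two epochs;
# the coordinates and the one-year separation are invented for illustration.
pts_epoch1 = np.array([[512.0, 1024.0], [300.0, 450.0]])
pts_epoch2 = np.array([[532.0, 1019.0], [321.0, 447.0]])
dt_years = 1.0

velocity = (pts_epoch2 - pts_epoch1) / dt_years   # m/yr vector per point
speed = np.linalg.norm(velocity, axis=1)          # scalar flow speed
```

Repeating this over a dense set of matched interest points yields the velocity vector field described in the abstract.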

  6. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiments are described. An image intensifier tube was combined with two brassboard power supplies and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  7. Characterization of a PET Camera Optimized for Prostate Imaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi, Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, Ronald H.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated-prostate camera has the same sensitivity and resolution, less background (less randoms and lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity of a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region.

  8. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extra-galactic and Galactic fields; a large effort has been dedicated to exploring the possibility of achieving precise photometric measurements in the J, H, K astronomical bands, with very promising results.

  9. VLSI Architecture and FPGA Prototyping of a Digital Camera for Image Security and Authentication

    E-print Network

    Mohanty, Saraju P.

    A VLSI architecture and FPGA prototype of a digital camera with a built-in security and authentication mechanism for the images it produces. Building on the proposal of the trustworthy digital camera, the design integrates a watermarking unit, flash memory, a compression unit and an encryption unit (Fig. 1: secure digital camera for image security and authentication).

  10. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  11. Radiation imaging with a new scintillator and a CMOS camera

    NASA Astrophysics Data System (ADS)

    Kurosawa, S.; Shoji, Y.; Pejchal, J.; Yokota, Y.; Yoshikawa, A.

    2014-07-01

    A new imaging system consisting of a high-sensitivity complementary metal-oxide semiconductor (CMOS) sensor, a microscope and a new scintillator, Ce-doped Gd3(Al,Ga)5O12 (Ce:GAGG) grown by the Czochralski process, has been developed. The noise, dark current and sensitivity of the CMOS camera (ORCA-Flash4.0, Hamamatsu) were evaluated and compared to a conventional CMOS, whose sensitivity is at the same level as that of a charge-coupled device (CCD) camera. Without the scintillator, this system had a good position resolution of 2.1 ± 0.4 μm, and we succeeded in obtaining alpha-ray images using a 1-mm-thick Ce:GAGG crystal. This system can be applied, for example, as a high-energy X-ray beam profile monitor.

  12. TIRCAM2: The TIFR near infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Naik, M. B.; Ojha, D. K.; Ghosh, S. K.; Poojary, S. S.; Jadhav, R. B.; Meshram, G. S.; Sandimani, P. R.; Bhagat, S. B.; D'Costa, S. L. A.; Gharat, S. M.; Bakalkar, C. B.; Ninan, J. P.; Joshi, J. S.

    2012-12-01

    TIRCAM2 (TIFR near infrared imaging camera - II) is a closed-cycle cooled imager that has been developed by the Infrared Astronomy Group at the Tata Institute of Fundamental Research for observations in the near infrared band of 1 to 3.7 μm with existing Indian telescopes. In this paper, we describe some of the technical details of TIRCAM2 and report its observing capabilities, measured performance and limiting magnitudes with the 2-m IUCAA Girawali telescope and the 1.2-m PRL Gurushikhar telescope. The main highlight is the camera's capability of observing in the nbL (3.59 μm) band, enabling our primary motivation of mapping Polycyclic Aromatic Hydrocarbon (PAH) emission at 3.3 μm.

  13. Traffic monitoring with serial images from airborne cameras

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Lachaise, Marie; Schmeer, Elisabeth; Krauss, Thomas; Runge, Hartmut

    The classical means to measure traffic density and velocity depend on local measurements from induction loops and other on site instruments. This information does not give the whole picture of the two-dimensional traffic situation. In order to obtain precise knowledge about the traffic flow of a large area, only airborne cameras or cameras positioned at very high locations (towers, etc.) can provide an up-to-date image of all roads covered. The paper aims at showing the potential of using image time series from these cameras to derive traffic parameters on the basis of single car measurements. To be able to determine precise velocities and other parameters from an image time series, exact geocoding is one of the first requirements for the acquired image data. The methods presented here for determining several traffic parameters for single vehicles and vehicle groups involve recording and evaluating a number of digital or analog aerial images from high altitude and with a large total field of view. Visual and automatic methods for the interpretation of images are compared. It turns out that the recording frequency of the individual images should be at least 1/3 Hz (visual interpretation), but is preferably 3 Hz or more, especially for automatic vehicle tracking. The accuracy and potentials of the methods are analyzed and presented, as well as the usage of a digital road database for improving the tracking algorithm and for integrating the results for further traffic applications. Shortcomings of the methods are given as well as possible improvements regarding methodology and sensor platform.
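The recording-frequency requirement quoted above can be checked with simple arithmetic: at a given frame rate, a vehicle's between-frame displacement determines how large the tracking search window must be. The 130 km/h speed below is an invented example:

```python
# Between-frame displacement of a vehicle at a given recording frequency;
# the 130 km/h speed is an invented example. A tracking algorithm's search
# window must cover this displacement, which is why ~3 Hz is preferred.
v_kmh = 130.0
steps = {f_hz: (v_kmh / 3.6) / f_hz for f_hz in (1 / 3, 3.0)}
# at 1/3 Hz the car moves ~108 m between frames; at 3 Hz only ~12 m
```

The order-of-magnitude difference explains why 1/3 Hz suffices for visual interpretation of single frames but automatic vehicle tracking benefits from 3 Hz or more.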

  14. Source Camera Identification for Low Resolution Heavily Compressed Images

    Microsoft Academic Search

    Erwin J. Alles; Zeno J. M. H. Geradts; C. J. Veenman

    2008-01-01

    In this paper, we propose a method to exploit photo response non-uniformity (PRNU) to identify the source camera of heavily JPEG-compressed digital photographs of resolution 640 × 480 pixels. Similarly to research reported previously, we extract the PRNU patterns from both reference and questioned images using a two-dimensional high-pass filter and compare these patterns by calculating the correlation coefficient
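The extract-and-correlate scheme can be sketched as follows; the Gaussian low-pass denoiser and the synthetic multiplicative PRNU pattern are simplifying assumptions, not the paper's exact filter or data:

```python
import numpy as np
from scipy import ndimage

def noise_residual(img):
    """High-pass residual: the image minus a low-pass (Gaussian) estimate.
    A simple stand-in for the paper's two-dimensional high-pass filter."""
    return img - ndimage.gaussian_filter(img, sigma=2.0)

def correlation(a, b):
    """Normalized correlation coefficient between two residual patterns."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Toy check: two images sharing a multiplicative PRNU pattern should
# correlate more strongly than images from different "cameras".
rng = np.random.default_rng(0)
prnu = 0.02 * rng.standard_normal((64, 64))
smooth_scene = lambda: ndimage.gaussian_filter(rng.uniform(0, 1, (64, 64)), 3)
img_a = smooth_scene() * (1 + prnu)               # camera 1
img_b = smooth_scene() * (1 + prnu)               # camera 1, other scene
img_c = smooth_scene() * (1 + 0.02 * rng.standard_normal((64, 64)))  # camera 2

r_same = correlation(noise_residual(img_a), noise_residual(img_b))
r_diff = correlation(noise_residual(img_a), noise_residual(img_c))
```

Because the PRNU pattern is fixed per sensor while scene content varies, averaging residuals over many reference images sharpens the camera's fingerprint before the comparison.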

  15. Self calibration of camera with non-linear imaging model

    NASA Astrophysics Data System (ADS)

    Hou, Wenguang; Shang, Tao; Ding, Mingyue

    2007-11-01

    Self-calibration methods in computer vision commonly assume a camera with a linear imaging model. Since distortion is present in practice, especially for ordinary cameras, calibration that disregards the distortion cannot meet the accuracy demands of vision measurement. The distortion is largely systematic and is a function of the distortion coefficient, principal point, principal-distance ratio and skew factor. There therefore exists a group of parameters, comprising the distortion coefficient, principal point, principal-distance ratio, skew factor and fundamental matrix, for which homologous points theoretically satisfy the epipolar constraint. Accordingly, this paper proposes a self-calibration method for cameras with a non-linear imaging model, based on the Kruppa equation. In calculating the fundamental matrix, the interior elements except the principal distance can be obtained by applying distortion correction to the image coordinates. The principal distance is then obtained using the Kruppa equation. This method only needs some homologous points between two images and requires no prior information about the objects. Numerous experiments have proven its correctness and reliability.

  16. PIV camera response to high frequency signal: comparison of CCD and CMOS cameras using particle image simulation

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Stanislas, M.; Coudert, S.

    2014-08-01

    We present a quantitative comparison between FlowMaster3 CCD and Phantom V9.1 CMOS cameras’ response in the scope of application to particle image velocimetry (PIV). First, the subpixel response is characterized using a specifically designed set-up. The crosstalk between adjacent pixels for the two cameras is then estimated and compared. Then, the camera response is experimentally characterized using particle image simulation. Based on a three-point Gaussian peak fitting, the bias and RMS errors between locations of simulated and real images for the two cameras are accurately calculated using a homemade program. The results show that, although the pixel response is not perfect, the optical crosstalk between adjacent pixels stays relatively low and the accuracy of the position determination of an ideal PIV particle image is much better than expected.
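The three-point Gaussian peak fit the authors use for subpixel particle location has a closed form; a minimal sketch, with the peak assumed to lie on the centre pixel and `i_m`/`i_p` its immediate neighbours:

```python
import numpy as np

def gauss3_subpixel(i_m, i_0, i_p):
    """Three-point Gaussian peak fit: subpixel offset of a particle-image
    centre from the peak pixel, given the peak intensity `i_0` and its two
    neighbours `i_m` (left) and `i_p` (right). Exact for a sampled Gaussian."""
    lm, l0, lp = np.log(i_m), np.log(i_0), np.log(i_p)
    return (lm - lp) / (2.0 * (lm + lp - 2.0 * l0))

# A Gaussian sampled at pixels -1, 0, +1 with its true centre 0.3 px off:
x0, sigma = 0.3, 0.8
i_m, i_0, i_p = np.exp(-((np.array([-1.0, 0.0, 1.0]) - x0) ** 2) / (2 * sigma**2))
estimate = gauss3_subpixel(i_m, i_0, i_p)   # recovers 0.3
```

Taking logs turns the Gaussian into a parabola, so three samples determine the vertex exactly; real sensor effects such as pixel crosstalk are precisely what introduce the bias and RMS errors the paper measures.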

  17. An efficient image compressor for charge coupled devices camera.

    PubMed

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a paired-basis posttransform is applied to the DWT coefficients. The basis pair consists of the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977

  18. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L. (Albuquerque, NM); Hoover, Eddie R. (Sandia Park, NM); Pain, Bedabrata (Los Angeles, CA); Hancock, Bruce R. (Altadena, CA); Nellums, Robert O. (Albuquerque, NM)

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
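The phase-to-range computation behind such a camera can be sketched with four reference phases; the modulation frequency, sample count, and 3 m target range below are invented for the example:

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 10e6               # modulation frequency, Hz (invented for the example)

def range_from_phases(signal, t, phases):
    """Mix one detector signal with reference copies at several phase delays
    (the role of the parallel modulators) and recover range from the phase
    of the demodulated outputs, using the 0/90/180/270 degree scheme."""
    outs = [np.mean(signal * np.cos(2 * np.pi * F_MOD * t + p)) for p in phases]
    i_comp = outs[0] - outs[2]              # in-phase component
    q_comp = outs[3] - outs[1]              # quadrature component
    phi = np.arctan2(q_comp, i_comp) % (2 * np.pi)
    return C * phi / (4 * np.pi * F_MOD)    # phase -> round-trip delay -> range

# Simulated return from a target at 3 m (round-trip delay 2R/c):
true_range = 3.0
t = np.linspace(0.0, 1.0 / F_MOD, 1000, endpoint=False)
sig = np.cos(2 * np.pi * F_MOD * (t - 2 * true_range / C))
est = range_from_phases(sig, t, np.deg2rad([0.0, 90.0, 180.0, 270.0]))
```

Each pixel performing this demodulation in parallel is what lets the apparatus build a full range image from a single light pulse.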

  19. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, the discrete wavelet transforms- (DWT-) based compressor, such as JPEG2000 and CCSDS-IDC, is widely seen as the state of the art compression scheme for charge coupled devices (CCD) camera. However, CCD images project on the DWT basis to produce a large number of large amplitude high-frequency coefficients because these images have a large number of complex texture and contour information, which are disadvantage for the later coding. In this paper, we proposed a low-complexity posttransform coupled with compressing sensing (PT-CS) compression approach for remote sensing image. First, the DWT is applied to the remote sensing image. Then, a pair base posttransform is applied to the DWT coefficients. The pair base are DCT base and Hadamard base, which can be used on the high and low bit-rate, respectively. The best posttransform is selected by the lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of the JPEG2000 at low bit rate and it does not have the high excessive implementation complexity of JPEG2000. PMID:25114977

  20. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a six-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. 
Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step to a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.

  1. Imaging of Venus from Galileo: Early results and camera performance

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.

    1992-01-01

    Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.

  2. Image-intensifier camera studies of shocked metal surfaces

    SciTech Connect

    Engelke, R.P.; Thurston, R.S.

    1986-01-01

    A high-space-resolution image-intensifier camera with luminance gain of up to 5000 and exposure times as short as 30 ns has been applied to the study of the interaction of posts and welds with strongly shocked metal surfaces, including super-strong steels. The time evolution of a single experiment can be recorded by multiple pulsing of the camera. Phenomena that remain coherent for relatively long durations have been observed. An important feature of the hydrodynamic flow resulting from post-plate interactions is the creation of a wave that propagates outward on the plate; the flow blocks the explosive product gases from escaping through the plate for greater than 10 μs. Electron beam welds were ineffective in blocking product gases from escaping for even short periods of time.

  3. First experiences with ARNICA, the ARCETRI observatory imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.

    1994-03-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a common use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some preliminary considerations on photometric accuracy.

  4. MWIR COMIC imaging camera for the ADONIS adaptive optics system

    NASA Astrophysics Data System (ADS)

    Feautrier, Philippe; Beuzit, Jean-Luc; Lacombe, Francois; Petmezakis, Panayoti; Geoffray, Herve; Monin, Jean-Louis; Talureau, Bernard; Gigan, Pierre; Hubin, Norbert; Audaire, Luc

    1995-09-01

    A 1-5 micrometer astronomical infrared imaging camera, COMIC, is currently being developed for the ADONIS adaptive optics system, as a collaborative project of Observatoire de Paris and Observatoire de Grenoble under ESO (European Southern Observatory) contract. This camera is based on a 128 by 128 HgCdTe/CCD array built by the CEA-LETI-LIR (Grenoble, France). Among its main characteristics, this detector offers a very high storage capacity of 3×10⁶ e⁻ with a total system read-out noise of about 600 e⁻, which makes it particularly well suited to the 3-5 µm band. COMIC will be installed in the fall of 1995 at the output focus of the ADONIS AO system on the ESO 3.6-m telescope at La Silla (Chile).

  5. Mosaicing of acoustic camera images K. Kim, N. Neretti and N. Intrator

    E-print Network

    Intrator, Nathan

    Mosaicing of acoustic camera images K. Kim, N. Neretti and N. Intrator Abstract: An algorithm for mosaicing of acoustic camera images in the presence of inhomogeneous illumination and low frame rate is presented. The imaging geometry of acoustic cameras is analyzed. For an acoustic camera, it is shown that, under the same conditions, an affine transformation is a good

  6. A two-camera imaging system for pest detection and aerial application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, whi...

  7. Expert interpretation compensates for reduced image quality of camera-digitized images referred to radiologists.

    PubMed

    Zwingenberger, Allison L; Bouma, Jennifer L; Saunders, H Mark; Nodine, Calvin F

    2011-01-01

    We compared the accuracy of five veterinary radiologists when reading 20 radiographic cases on both analog film and in camera-digitized format. In addition, we compared the ability of five veterinary radiologists vs. 10 private practice veterinarians to interpret the analog images. Interpretation accuracy was compared using receiver operating characteristic curve analysis. Veterinary radiologists' accuracy did not significantly differ between analog vs. camera-digitized images (P = 0.13) although sensitivity was higher for analog images. Radiologists' interpretation of both digital and analog images was significantly better compared with the private veterinarians (P < 0.05). PMID:21831251

  8. LROC WAC 100 Meter Scale Photometrically Normalized Map of the Moon

    NASA Astrophysics Data System (ADS)

    Boyd, A. K.; Nuno, R. G.; Robinson, M. S.; Denevi, B. W.; Hapke, B. W.

    2013-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) monthly global observations allowed derivation of a robust empirical photometric solution over a broad range of incidence, emission and phase (i, e, g) angles. Combining the WAC stereo-based GLD100 [1] digital terrain model (DTM) and LOLA polar DTMs [2] enabled precise topographic corrections to photometric angles. Over 100,000 WAC observations at 643 nm were calibrated to reflectance (I/F). Photometric angles (i, e, g), latitude, and longitude were calculated and stored for each WAC pixel. The 6-dimensional data set was then reduced to 3 dimensions by photometrically normalizing I/F with a global solution similar to [3]. The global solution was calculated from three 2°x2° tiles centered on (1°N, 147°E), (45°N, 147°E), and (89°N, 147°E), and included over 40 million WAC pixels. A least squares fit to a multivariate polynomial of degree 4 (f(i,e,g)) was performed, and the result was the starting point for a minimum search solving the non-linear function min[{1-[ I/F / f(i,e,g)] }2]. The input pixels were filtered to incidence angles (calculated from topography) < 89° and I/F greater than a minimum threshold to avoid shadowed pixels, and the output normalized I/F values were gridded into an equal-area map projection at 100 meters/pixel. At each grid location the median, standard deviation, and count of valid pixels were recorded. The normalized reflectance map is the result of the median of all normalized WAC pixels overlapping that specific 100-m grid cell. There are an average of 86 WAC normalized I/F estimates at each cell [3]. The resulting photometrically normalized mosaic provides the means to accurately compare I/F values for different regions on the Moon (see Nuno et al. [4]). 
The subtle differences in normalized I/F can now be traced across the local topography at regions that are illuminated at any point during the LRO mission (while the WAC was imaging), including at polar latitudes. This continuous map of reflectance at 643 nm, normalized to a standard geometry of i=30, e=0, g=30, ranges from 0.036 to 0.36 (0.01%-99.99% of the histogram) with a global mean reflectance of 0.115. Immature rays of Copernican craters are typically >0.14 and maria are typically <0.07 with averages for individual maria ranging from 0.046 to 0.060. The materials with the lowest normalized reflectance on the Moon are pyroclastic deposits at Sinus Aestuum (<0.036) and those with the highest normalized reflectance are found on steep crater walls (>0.36)[4]. 1. Scholten et al. (2012) J. Geophys. Res., 117, doi: 10.1029/2011JE003926. 2. Smith et al. (2010), Geophys. Res. Lett., 37, L18204, doi:10.1029/2010GL043751. 3. Boyd et al. (2012) LPSC XLIII, #2795 4. Nuno et al. AGU, (this conference)
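The normalization pipeline in this record (fit a photometric model f(i, e, g), divide each calibrated I/F by it, rescale to the standard geometry i=30, e=0, g=30, filter shadowed pixels, and take the per-cell median) can be sketched as follows. The model form and its coefficients below are illustrative stand-ins, not the published degree-4 LROC WAC fit:

```python
import math
from statistics import median

def f(i, e, g):
    """Hypothetical stand-in for the photometric model f(i, e, g).
    A Lommel-Seeliger-like limb term times a low-order phase polynomial;
    the real LROC WAC solution is a degree-4 multivariate polynomial."""
    ci, ce = math.cos(math.radians(i)), math.cos(math.radians(e))
    return (ci / (ci + ce)) * (1.0 - 0.0045 * g + 1e-5 * g**2)

STD = f(30.0, 0.0, 30.0)  # standard geometry: i=30, e=0, g=30

def normalize(i_over_f, i, e, g):
    """Photometrically normalize one reflectance sample to the standard geometry."""
    return i_over_f * STD / f(i, e, g)

# Several overlapping WAC samples (I/F, i, e, g) falling in one 100 m grid cell:
samples = [(0.061, 45.0, 5.0, 42.0), (0.058, 30.0, 3.0, 28.0), (0.052, 60.0, 8.0, 55.0)]
# Filter shadowed pixels (incidence >= 89 deg) and take the median, as in the abstract.
vals = [normalize(r, i, e, g) for r, i, e, g in samples if i < 89.0]
cell_reflectance = median(vals)
```

A sample observed exactly at the standard geometry passes through unchanged, which is the sanity check for any such normalization.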

  9. MECHANICAL ADVANCING HANDLE THAT SIMPLIFIES MINIRHIZOTRON CAMERA REGISTRATION AND IMAGE COLLECTION

    EPA Science Inventory

    Minirhizotrons in conjunction with a minirhizotron video camera system are becoming widely used tools for investigating root production and survival in a variety of ecosystems. Image collection with a minirhizotron camera can be time consuming and tedious particularly when hundre...

  10. Glacier flow monitoring by digital camera and space-borne SAR images

    E-print Network

    Boyer, Edmond

    Glacier flow monitoring by digital camera and space-borne SAR images. Flavien Vernier, Renaud ... The aim is to monitor glaciers and to measure their surface velocity by different techniques. The study images the motion of Alpine glaciers; the optical images are acquired by a digital camera installed near

  11. Camera Response Functions for Image Forensics: An Automatic Algorithm for Splicing Detection

    Microsoft Academic Search

    Yu-Feng Hsu; Shih-Fu Chang

    2010-01-01

    We present a fully automatic method to detect doctored digital images. Our method is based on a rigorous consistency checking principle of physical characteristics among different arbitrarily shaped image regions. In this paper, we specifically study the camera response function (CRF), a fundamental property in cameras mapping input irradiance to output image intensity. A test image is first automatically segmented

  12. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
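The core of the method in this record, summing the gray levels of a small region of interest in every frame to boost the effective gray-level resolution and then locating the dominant spectral peak, can be sketched on synthetic data (the frame rate, duration, and vibration frequency below are invented for the demonstration):

```python
import math, cmath

FPS = 60.0     # camera frame rate (frames per second)
N = 240        # number of frames (4 s of video)
F_TRUE = 5.0   # Hz, vibration frequency of the synthetic target

# Simulated per-frame sum of gray levels over a small region of interest:
# summing many pixels raises the effective gray-level resolution, as the
# abstract describes.
signal = [1000.0 + 50.0 * math.sin(2 * math.pi * F_TRUE * n / FPS) for n in range(N)]

# Remove the mean, then locate the dominant frequency with a plain DFT.
mean = sum(signal) / N
x = [s - mean for s in signal]
spectrum = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N // 2)]
k_peak = max(range(1, N // 2), key=lambda k: spectrum[k])
print(k_peak * FPS / N)  # dominant frequency in Hz → 5.0
```

Any true mode above FPS/2 would fold back into this spectrum as a false low-frequency mode, which is the aliasing effect the study predicts and excludes with its model.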

  13. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026

  14. The Herschel/PACS 2560 bolometers imaging camera

    E-print Network

    Nicolas Billot; Patrick Agnese; Jean-Louis Augueres; Alain Beguin; Andre Bouere; Olivier Boulade; Christophe Cara; Christelle Cloue; Eric Doumayrou; Lionel Duband; Benoit Horeau; Isabelle Le Mer; Jean Le Pennec; Jerome Martignac; Koryo Okumura; Vincent Reveret; Marc Sauvage; Francois Simoens; Laurent Vigroux

    2006-06-26

    The development program of the flight model imaging camera for the PACS instrument on-board the Herschel spacecraft is nearing completion. This camera has two channels covering the 60 to 210 microns wavelength range. The focal plane of the short wavelength channel is made of a mosaic of 2x4 3-sides buttable bolometer arrays (16x16 pixels each) for a total of 2048 pixels, while the long wavelength channel has a mosaic of 2 of the same bolometer arrays for a total of 512 pixels. The 10 arrays have been fabricated, individually tested and integrated in the photometer. They represent the first filled arrays of fully collectively built bolometers with a cold multiplexed readout, allowing for a properly sampled coverage of the full instrument field of view. The camera has been fully characterized and the ground calibration campaign will take place after its delivery to the PACS consortium in mid 2006. The bolometers, working at a temperature of 300 mK, have a NEP close to the BLIP limit and an optical bandwidth of 4 to 5 Hz that will permit the mapping of large sky areas. This paper briefly presents the concept and technology of the detectors as well as the cryocooler and the warm electronics. Then we focus on the performances of the integrated focal planes (responsivity, NEP, low frequency noise, bandwidth).

  15. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256 × 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  16. Image quality assessment of 2-chip color camera in comparison with 1-chip color and 3-chip color cameras in various lighting conditions: initial results

    NASA Astrophysics Data System (ADS)

    Adham Khiabani, Sina; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

    A 2-chip color camera, named UNB Super-camera, is introduced in this paper. Its image quality in different lighting conditions is compared with that of a 1-chip color camera and a 3-chip color camera. The 2-chip color camera contains a high resolution monochrome (panchromatic) sensor and a low resolution color sensor. The high resolution color images of the 2-chip color camera are produced through an image fusion technique, UNB Pan-sharp (also named FuzeGo). This fusion technique has been widely used for a decade to produce high resolution color satellite images from a high resolution panchromatic image and a low resolution multispectral (color) image. Now, the technique is further extended to produce high resolution color still and video images from a 2-chip color camera. Initial quality assessments in a research project showed that the light sensitivity, image resolution and color quality of the Super-camera (2-chip camera) are clearly better than those of a same-generation 1-chip camera. The image quality of the Super-camera is also much better than that of a same-generation 3-chip camera when the light is low, such as in a normal room light condition or darker, while its resolution matches that of the 3-chip camera. These evaluation results suggest the potential of using a 2-chip camera to replace a 3-chip camera for capturing high quality color images, which would not only lower the cost of camera manufacture but also significantly improve light sensitivity.
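UNB Pan-sharp/FuzeGo itself is not described in the record, but the idea of fusing a high-resolution panchromatic image with a low-resolution color image can be conveyed by a generic intensity-ratio (Brovey-style) fusion sketch; all pixel values here are toy data:

```python
# Toy scene: a 4x4 panchromatic image and a 2x2 RGB image (2x lower resolution).
pan = [[100, 120,  80,  90],
       [110, 130,  85,  95],
       [ 60,  70, 140, 150],
       [ 65,  75, 145, 155]]
rgb = [[(90, 60, 30), (40, 80, 120)],
       [(50, 50, 50), (120, 110, 100)]]

def fuse(pan, rgb, scale=2):
    """Brovey-style fusion: replicate each low-res color pixel over its
    high-res footprint, then rescale it by the local pan / intensity ratio,
    so the fused pixel inherits the pan sensor's spatial detail."""
    h, w = len(pan), len(pan[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r, g, b = rgb[y // scale][x // scale]
            intensity = (r + g + b) / 3.0 or 1.0   # guard against zero intensity
            k = pan[y][x] / intensity
            row.append((r * k, g * k, b * k))
        out.append(row)
    return out

fused = fuse(pan, rgb)
```

By construction the mean of each fused RGB triple equals the panchromatic value at that pixel, so the fused image carries the pan channel's resolution while keeping the color ratios of the low-resolution sensor.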

  17. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton Cameras emerged as an alternative for real-time dose monitoring techniques for Particle Therapy (PT), based on the detection of prompt-gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted onto the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented.
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is later estimated from the reconstructed image obtained from a two and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which will affect the rate of particles entering the detector.

  18. Noise evaluation of Compton camera imaging for proton therapy.

    PubMed

    Ortega, P G; Torres-Espallardo, I; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-02-21

    Compton Cameras emerged as an alternative for real-time dose monitoring techniques for Particle Therapy (PT), based on the detection of prompt-gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted onto the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is in general not absorbed. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gammas, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented.
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range determination is later estimated from the reconstructed image obtained from a two and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which will affect the rate of particles entering the detector. PMID:25658644
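The Maximum Likelihood Expectation Maximization update used in such reconstructions can be sketched on a toy 1-D problem; the system matrix, activities, and iteration count below are invented for illustration and stand in for the real cone-surface system model:

```python
# Minimal MLEM iteration on a toy 1-D emitter distribution. A maps emitter
# voxels to detector (Compton-cone) bins; values are illustrative only.
A = [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]          # A[d][v]: sensitivity of detector bin d to voxel v
true_x = [5.0, 1.0, 3.0]       # unknown emitter activity per voxel
y = [sum(A[d][v] * true_x[v] for v in range(3)) for d in range(3)]  # noiseless data

x = [1.0, 1.0, 1.0]            # uniform initial estimate
sens = [sum(A[d][v] for d in range(3)) for v in range(3)]  # per-voxel sensitivity
for _ in range(200):
    # forward-project the estimate, compare with data, backproject the ratios
    proj = [sum(A[d][v] * x[v] for v in range(3)) for d in range(3)]
    ratio = [y[d] / proj[d] for d in range(3)]
    x = [x[v] * sum(A[d][v] * ratio[d] for d in range(3)) / sens[v] for v in range(3)]
```

With noiseless, consistent data the iteration drives the forward projection of the estimate toward the measured data; the noise study above asks what happens when some entries of y come from neutrons or random coincidences instead of true prompt gammas.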

  19. 2D/3D image (facial) comparison using camera matching.

    PubMed

    Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

    2006-11-10

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In case of a cooperating suspect, a 3D image may be taken using e.g. a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair cannot be distinguished from the matching pair of faces. Facial expression and resolution of images were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done though on larger sets of facial comparisons. PMID:16337353
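The central point of this record, that distances between fixed anatomical points depend on camera position so the 3D scan must be projected in the perpetrator's pose, can be sketched with a pinhole projection; the landmark coordinates and focal length below are hypothetical:

```python
import math

def project(point, f):
    """Pinhole projection of a 3-D point (camera at origin, optical axis +Z)."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Three hypothetical facial landmarks (metres, camera coordinates):
# left eye corner, right eye corner, mouth centre.
landmarks = [(-0.03, 0.0, 1.0), (0.03, 0.0, 1.0), (0.0, -0.07, 1.02)]
f = 800.0  # focal length in pixel units

def eye_mouth_ratio(points):
    le, re, mouth = (project(p, f) for p in points)
    return dist(le, re) / dist(le, mouth)

ratio_near = eye_mouth_ratio(landmarks)
# Move the camera 0.5 m back: the same anatomical distance ratio changes,
# because the landmarks lie at different depths.
ratio_far = eye_mouth_ratio([(x, y, z + 0.5) for x, y, z in landmarks])
```

Even this small depth difference shifts the 2D distance ratio, which is why the comparison only becomes meaningful after the 3D head is rendered with the perpetrator image's pose and focal length.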

  20. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
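A transform method of the kind this study favours can be illustrated on one 8-pixel strip: transform, quantize (which zeroes near-empty coefficients and yields the compression), dequantize, and reconstruct. The pixel values and quantization step are illustrative, and the 1-D DCT stands in for whatever 2-D transform the study used:

```python
import math

def dct(block):
    """Orthonormal DCT-II of a 1-D block."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(block))
        out.append(s * math.sqrt((1 if k == 0 else 2) / N))
    return out

def idct(coeffs):
    """Inverse of the orthonormal DCT-II above (DCT-III)."""
    N = len(coeffs)
    return [sum(c * math.sqrt((1 if k == 0 else 2) / N) *
                math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for k, c in enumerate(coeffs)) for n in range(N)]

block = [52, 55, 61, 66, 70, 61, 64, 73]   # one 8-pixel strip of an image row
q = 10.0                                   # quantization step: the fidelity knob
quantized = [round(c / q) for c in dct(block)]
kept = sum(1 for c in quantized if c != 0)  # only nonzero coefficients are stored
restored = idct([c * q for c in quantized])
err = max(abs(a - b) for a, b in zip(block, restored))
```

Raising q stores fewer coefficients (higher compression ratio) at the cost of a larger reconstruction error, which is exactly the fidelity-versus-ratio trade-off the study weighs in favour of fidelity.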

  1. Camera Gap Removal in SOLIS/VSM Images

    E-print Network

    Marble, Andrew R; Pevtsov, Alexei A

    2013-01-01

    The Vector Spectromagnetograph (VSM) instrument on the Synoptic Optical Longterm Investigations of the Sun (SOLIS) telescope is capable of obtaining spectropolarimetry for the full Sun (or a select latitudinal range) with one arcsecond spatial resolution and 0.05 Angstrom spectral resolution. This is achieved by scanning the Sun in declination and building up spectral cubes for multiple polarization states, utilizing a beamsplitter and two separate 2k x 2k CCD cameras. As a result, the eastern and western hemispheres of the Sun are separated in preliminary VSM images by a vertical gap with soft edges and variable position and width. Prior to the comprehensive analysis presented in this document, a trial-and-error approach to removing the gap had yielded an algorithm that was inconsistent, undocumented, and responsible for incorrectly eliminating too many image columns. Here we describe, in detail, the basis for a new, streamlined, and properly calibrated prescription for locating and removing the gap that is ...

  2. Embedded image enhancement for high-throughput cameras

    NASA Astrophysics Data System (ADS)

    Geerts, Stan J. C.; Cornelissen, Dion; de With, Peter H. N.

    2014-03-01

    This paper presents image enhancement for a novel Ultra-High-Definition (UHD) video camera offering 4K images and higher. Conventional image enhancement techniques need to be reconsidered for the high-resolution images and the low-light sensitivity of the new sensor. We study two image enhancement functions and evaluate and optimize the algorithms for embedded implementation in programmable logic (FPGA). The enhancement study involves high-quality Auto White Balancing (AWB) and Local Contrast Enhancement (LCE). We have compared multiple algorithms from literature, both with objective and subjective metrics. In order to objectively compare Local Contrast (LC), an existing LC metric is modified for LC measurement in UHD images. For AWB, we have found that color histogram stretching offers a subjective high image quality and it is among the algorithms with the lowest complexity, while giving only a small balancing error. We impose a color-to-color gain constraint, which improves robustness of low-light images. For local contrast enhancement, a combination of contrast preserving gamma and single-scale Retinex is selected. A modified bilateral filter is designed to prevent halo artifacts, while significantly reducing the complexity and simultaneously preserving quality. We show that by cascading contrast preserving gamma and single-scale Retinex, the visibility of details is improved towards the level appropriate for high-quality surveillance applications. The user is offered control over the amount of enhancement. Also, we discuss the mapping of those functions on a heterogeneous platform to come to an effective implementation while preserving quality and robustness.
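The color histogram stretching selected here for AWB can be sketched per channel; the percentile choices and pixel values below are illustrative, and the paper's color-to-color gain constraint is only noted in a comment rather than implemented:

```python
def stretch_channel(vals, lo_pct=1, hi_pct=99):
    """Histogram stretching of one color channel to the full 8-bit range,
    clipping the given low/high percentiles to resist outliers.
    (The paper additionally constrains the color-to-color gain ratio for
    robustness on low-light images; that constraint is omitted here.)"""
    s = sorted(vals)
    lo = s[len(s) * lo_pct // 100]
    hi = s[min(len(s) * hi_pct // 100, len(s) - 1)]
    span = max(hi - lo, 1)
    return [min(255, max(0, round(255 * (v - lo) / span))) for v in vals]

# Toy image with a blue cast: each channel as a flat list of pixel values.
red   = [40, 60, 80, 100, 120, 140]
green = [50, 70, 90, 110, 130, 150]
blue  = [90, 110, 130, 150, 170, 190]
balanced = [stretch_channel(c) for c in (red, green, blue)]
```

After stretching, the three channels of this gray ramp coincide, i.e. the constant blue cast is removed, which is the white-balancing effect the paper measures as a small balancing error.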

  3. Measurement of Space Variant PSF for Restoring Degraded Images by Security Cameras

    Microsoft Academic Search

    Tadashi Ito; Y. Fujii; N. Ohta; S. Saitoh; T. Matsuura; T. Yamamoto

    2006-01-01

    Images recorded by a security camera are often severely degraded due to a dirty lens or secular distortion of the recording system. To restore these images, full determination of the space-variant point spread function (PSF) is required. To measure the PSF, we used a liquid crystal display. We made some experiments to restore images taken by a CCD camera with intentionally

  4. Denoising vs. Deblurring: HDR Imaging Techniques Using Moving Cameras Li Zhang Alok Deshpande Xin Chen

    E-print Network

    Zhang, Li

    Denoising vs. Deblurring: HDR Imaging Techniques Using Moving Cameras. Li Zhang, Alok Deshpande, Xin Chen. ... quantization for reliable HDR imaging from a moving camera. Specifically, we propose a unified probabilistic formulation that allows us to analytically compare two HDR imaging alternatives: (1) deblurring a single

  5. Image Splicing Detection using Camera Response Function Consistency and Automatic Segmentation

    Microsoft Academic Search

    Yu-feng Hsu; Shih-fu Chang

    2007-01-01

    We propose a fully automatic spliced image detection method based on consistency checking of camera characteristics among different areas in an image. A test image is first segmented into distinct areas. One camera response function (CRF) is estimated from each area using geometric invariants from locally planar irradiance points (LPIPs). To classify a boundary segment between two areas as

  6. AN INVESTIGATION INTO ALIASING IN IMAGES RECAPTURED FROM AN LCD MONITOR USING A DIGITAL CAMERA

    E-print Network

    Dragotti, Pier Luigi

    AN INVESTIGATION INTO ALIASING IN IMAGES RECAPTURED FROM AN LCD MONITOR USING A DIGITAL CAMERA. Hani [...], United Kingdom. ABSTRACT: With current technology, high quality recaptured images can be created from soft displays, such as an LCD monitor, using a digital still camera and professional image editing software

  7. Multipurpose Ukus Camera with an Image Tube for Astronomical Photography - Part One - General Description

    Microsoft Academic Search

    V. L. Kuznetsov; E. I. Laptev; Y. I. Malakhov; A. M. Mechetin; M. H. Rodriguez; Y. O. Romanyuk; O. A. Svyatogorov; M. G. Sosonkin; L. M. Shulman

    1984-01-01

    A multipurpose camera with an image-tube for astronomical photography has been designed at the Main Astronomical Observatory of the Ukrainian SSR Academy of Sciences. The block diagram of the camera is given. First observations with the camera have been made.

  8. The Tumor Resection Camera (TReCam), a multipixel imaging probe for radio-guided surgery

    Microsoft Academic Search

    E. Netter; L. Pinot; L. Menard; M. A. Duval; B. Janvier; F. Lefebvre; R. Siebert; Y. Charon

    2009-01-01

    Using the POCI camera, we recently demonstrated the clinical impact of per-operative imaging techniques thanks to a successful clinical trial. Taking advantage of both the POCI experience and the availability of new pixelated detectors, we are developing a new hand-held gamma camera, TReCam (Tumor Resection Camera). The first prototype offers a 49 × 49 mm² field of view. It

  9. VME image acquisition and processing using standard TV CCD cameras

    NASA Astrophysics Data System (ADS)

    Epaud, F.; Verdier, P.

    1994-12-01

    The ESRF has released the first version of a low-cost image acquisition and processing system based on an industrial VME board and commercial CCD TV cameras. The images from standard CCIR (625 lines) or EIA (525 lines) inputs are digitised with 8-bit dynamic range and stored in a general purpose frame buffer to be processed by the embedded firmware. They can also be transferred to a UNIX workstation through the network for display in an X11 window, or stored in a file for off-line processing with image analysis packages like KHOROS, IDL, etc. The front-end VME acquisition system can be controlled with a Graphical User Interface (GUI) based on X11/Motif running under UNIX. The first release of the system is in operation and allows one to observe and analyse beam spots around the accelerators. The system has been extended to make it possible to position a micro-sample (less than 10 µm²) not visible to the naked eye. This system is a general purpose image acquisition system which may have wider applications.

  10. Image deblurring using the direction dependence of camera resolution

    NASA Astrophysics Data System (ADS)

    Hirai, Yukio; Yoshikawa, Hiroyasu; Shimizu, Masayoshi

    2013-03-01

    The blurring that occurs in a camera lens tends to worsen in areas away from the optical axis of the image. In addition, the degradation of the blurred image in an off-axis area exhibits directional dependence. Conventional methods use the Wiener filter or the Richardson-Lucy algorithm to mitigate the problem. These methods use a pre-defined point spread function (PSF) in the restoration process, thereby preventing an increase in the noise elements. However, the nonuniform degradation that depends on direction is not improved even though the edges are emphasized by these conventional methods. In this paper, we analyze the directional dependence of resolution based on the modeling of an optical system using a blurred image. We propose a novel image deblurring method that employs a reverse filter based on optimizing the directional dependence coefficients of the regularization term in the maximum a posteriori (MAP) algorithm. We have improved the directional dependence of resolution by optimizing the weight coefficients of the direction in which the resolution is degraded.

  11. Radiometric calibration of the Scripps Earth Polychromatic Imaging Camera

    NASA Astrophysics Data System (ADS)

    Early, Edward A.; Bush, Brett C.; Brown, Steven W.; Allen, David W.; Johnson, B. Carol

    2002-01-01

    As part of the Triana mission, the Scripps Earth Polychromatic Imaging Camera (Scripps-EPIC) will view the full sunlit side of Earth from the Lagrange-1 point. The National Institute of Standards and Technology and the Scripps Institution of Oceanography, in collaboration with the contractor, Lockheed-Martin, planned the radiometric calibration of Scripps-EPIC. The measurements for this radiometric calibration were selected based upon the optical characteristics of Scripps-EPIC, the measurement equation relating signal to spectral radiance, and the available optical sources and calibrated radiometers. The guiding principle for the calibration was to perform separate, controlled measurements for each parameter in the measurement equation, namely dark signal, linearity, exposure time, and spectral radiance responsivity.
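The separate-measurement principle above implies that the spectral radiance is recovered by inverting a measurement equation whose parameters (dark signal, linearity, exposure time, responsivity) are each calibrated independently. A minimal sketch, assuming a purely linear equation (the function name and this exact form are illustrative; the real EPIC calibration also characterizes nonlinearity):

```python
def radiance_from_signal(signal, dark, exposure_s, responsivity):
    """Invert the assumed linear measurement equation
        signal = responsivity * radiance * exposure + dark
    to recover spectral radiance from a detector signal."""
    return (signal - dark) / (responsivity * exposure_s)
```

For example, a signal of 1100 counts with 100 counts of dark signal, a 2 s exposure, and a responsivity of 50 counts per radiance unit per second yields a radiance of 10 units.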

  12. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties due to numerous potential applications. However, established semiconductor sensor technologies and optics are essentially planar, which makes it very challenging to build such systems with hemispherical, compound apposition layouts. With recent advances in stretchable optoelectronics, we have developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned, integrated, and elastically transformed into hemispherical shapes. This imaging device demonstrates a nearly full hemispherical shape (about 160 degrees) with densely packed artificial ommatidia; the number of ommatidia (180) is comparable to that in the eyes of fire ants and bark beetles. We illustrate key features of compound-eye operation through experimental imaging results and quantitative ray-tracing simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobsters and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  13. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for applying existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  14. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    SciTech Connect

    Chai, Kil-Byoung; Bellan, Paul M. [Applied Physics, Caltech, 1200 E. California Boulevard, Pasadena, California 91125 (United States)]

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  15. Evaluation of the Compton camera method for spectroscopic imaging with ambient-temperature detector technology

    NASA Astrophysics Data System (ADS)

    Earnhart, Jonathan R. D.; Prettyman, Thomas H.; Ianakiev, Kiril D.; Gardner, Robin P.

    1999-10-01

    A prototype Compton camera using ambient-temperature semiconductor detectors is developed for gamma ray spectroscopic imaging. Two camera configurations are evaluated, one using an intrinsic silicon detector for the front plane detector and the other using a CdZnTe detector for the front plane. Both configurations use a large-volume coplanar grid CdZnTe detector for the back plane. The effect of detector noise, energy resolution, and timing resolution on camera performance is described. Technical issues underlying the development of Compton cameras for spectroscopic imaging are presented and imaging of radioactive sources is demonstrated.

  16. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  17. Spectral imaging using a commercial color-filter array digital camera

    Microsoft Academic Search

    Roy S. Berns; Lawrence A. Taplin; Mahdi Nezamabadi; Mahnaz Mohammadi; Yonghui Zhao

    2005-01-01

    A multi-year research program is underway to develop and deliver spectral-based digital cameras for imaging cultural heritage at the National Gallery of Art, Washington DC and the Museum of Modern Art, New York. The cameras will be used for documentation, production imaging, and conservation science. Three approaches have undergone testing: a liquid-crystal tunable filter (LCTF) coupled with a monochrome camera,

  18. LROC NAC Photometry as a Tool for Studying Physical and Compositional Properties of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Clegg, R. N.; Jolliff, B. L.; Boyd, A. K.; Stopar, J. D.; Sato, H.; Robinson, M. S.; Hapke, B. W.

    2014-10-01

    LROC NAC photometry has been used to study the effects of rocket exhaust on lunar soil properties, and here we apply the same photometric methods to place compositional constraints on regions of silicic volcanism and pure anorthosite on the Moon.

  19. A CCD CAMERA-BASED HYPERSPECTRAL IMAGING SYSTEM FOR STATIONARY AND AIRBORNE APPLICATIONS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes a charge coupled device (CCD) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC comput...

  20. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  1. Image Restoration for Security Cameras with Dirty Lens under Oblique Illumination

    Microsoft Academic Search

    Yusaku Fujii; Naoya Ohta; Tadashi Ito; Saburou Saitoh; Tsutomu Matsuura; Takao Yamamoto

    2006-01-01

    An approach for restoring the images of a suspected person taken by a security camera with a dirty lens is proposed. In the approach, the peculiar facts concerning the security camera system that all the things in the image except the suspected person itself are usually preserved and that can be used for investigations are to be used as fully as

  2. Importance of Developing Image Restoration Techniques for Security Cameras under Severe Conditions

    Microsoft Academic Search

    Y. Fujii; T. Ito; N. Ohta; S. Saitoh; T. Matsuura; T. Yamamoto

    2006-01-01

    A concept, which was proposed and has been pursued by the authors for restoring the images of a suspected person taken by a security camera, is reviewed. In the concept, the peculiar facts concerning the security camera system that all the things in the image except the suspected person itself are usually preserved and that can be used for investigations

  3. Experimental and modeling studies of imaging with curvilinear electronic eye cameras

    E-print Network

    Rogers, John A.

    The imaging properties of planar, hemispherical, and elliptic parabolic electronic eye cameras are compared experimentally and by modeling. -J. Yu, J. B. Geddes 3rd, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, "A hemispherical electronic eye

  4. The experimental evaluation of a prototype rotating slat collimator for planar gamma camera imaging

    Microsoft Academic Search

    M A Lodge; D M Binnie; M A Flower; S Webb

    1995-01-01

    A collimator consisting of a series of parallel slats has been constructed and used in conjunction with a conventional gamma camera to collect one-dimensional projections of the radioisotope distribution being imaged. With the camera remaining stationary, the collimator was made to rotate continuously over the face of the detector and the projections acquired were used to reconstruct a planar image

  5. Ultrasonic camera automatic image depth and stitching modifications for monitoring aerospace composites

    Microsoft Academic Search

    Brad Regez; Goutham Kirikera; Martin Tan Hwai Yuen; Sridhar Krishnaswamy; Bob Lasser

    2009-01-01

    Two modifications to an ultrasonic camera system have been performed in an effort to reduce setup time and post inspection image processing. Current production ultrasonic cameras have image gates that are adjusted manually. The process to adjust them prior to each inspection consumes large amounts of time and requires a skilled operator. The authors have overcome this by integrating the

  6. Ultra-fast MTF Test for High-Volume production of CMOS Imaging Cameras

    Microsoft Academic Search

    Michael Dahl; Josef Heinisch; Stefan Krey; Stefan M. Bäumer; Johan Lurquin; Linghua Chen

    2004-01-01

    During the last years, compact CMOS imaging cameras have grown into high-volume applications such as mobile phones, PDAs, etc. In order to ensure a constant quality of the camera lenses, MTF is used as a figure of merit. MTF is a polychromatic, objective test for imaging lens quality including diffraction effects, system aberrations and surface defects as

  7. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  8. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  9. Analysis of a multiple reception model for processing images from the solid-state imaging camera

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.

    1991-01-01

    A detection model to identify the presence of the Galileo optical communications from an Earth-based transmitter (GOPEX) signal by processing multiple signal receptions extracted from camera images is described. The model decomposes a multi-signal-reception camera image into a set of images such that the location of the illuminated pixel is known a priori and the laser illuminates only one pixel at each reception instance. Numerical results show that if the pointing error due to atmospheric refraction can be controlled to between 20 and 30 microrad, the beam divergence of the GOPEX laser should be adjusted to between 30 and 40 microrad when the spacecraft is 30 million km from Earth. Furthermore, increasing the number of receptions processed beyond 5 will not produce a significant detection-probability advantage.
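The diminishing return from extra receptions can be illustrated with a simple independent-detection model (an assumption for illustration, not the paper's actual detection model): if each reception independently detects the signal with probability p, the combined detection probability saturates quickly with the number of receptions n.

```python
def detection_prob(p_single, n):
    """Probability that at least one of n independent receptions detects
    the signal, assuming each detects with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n

# Marginal gain of adding one more reception shrinks geometrically with n,
# which is consistent with the paper's observation that going beyond
# about 5 receptions buys little additional detection probability.
gains = [detection_prob(0.6, n + 1) - detection_prob(0.6, n) for n in range(1, 10)]
```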

  10. Ultrasonic camera automatic image depth and stitching modifications for monitoring aerospace composites

    NASA Astrophysics Data System (ADS)

    Regez, Brad; Kirikera, Goutham; Yuen, Martin Tan Hwai; Krishnaswamy, Sridhar; Lasser, Bob

    2009-03-01

    Two modifications to an ultrasonic camera system have been made in an effort to reduce setup time and post-inspection image processing. Current production ultrasonic cameras have image gates that are adjusted manually; adjusting them prior to each inspection consumes a large amount of time and requires a skilled operator. The authors have overcome this by integrating the A-scan and the image so that the image gating is adjusted automatically from the A-scan data: the system monitors the A-scan signal at the center of the camera's field of view (FOV) and adjusts the image gating accordingly. This integration allows defect detection at any depth of the inspected area. Ultrasonic camera operation requires the inspector to scan the surface manually while observing the camera's FOV on the monitor; if the image indicates a defect, the operator stores that image manually and marks an index on the surface where the image was acquired. The second modification automates this effort by employing a digital encoder and an image capture card. The encoder is used to track movement of the camera on the structure's surface, record positions, and trigger the image capture device. The images are stored in real time in buffer memory rather than on the hard drive, which enables more rapid acquisition than storing each image individually to disk. Once the images are stored, an algorithm tracks the movement of the camera through the encoder and displays the corresponding image to the inspector. Upon completion of the scan, an algorithm digitally stitches all the images into a single full-field image. The modifications were tested on an aerospace composite laminate with known defects, and the results are discussed.
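The automatic gating idea, centering the image gate on the strongest echo found in the A-scan, can be sketched as follows. This is a simplified stand-in for the authors' integration; the function name and the fixed front-wall skip are assumptions:

```python
import numpy as np

def auto_gate(a_scan, gate_width, skip=10):
    """Pick an image gate automatically from a 1-D A-scan: skip the first
    `skip` samples (front-wall echo region), find the strongest remaining
    echo, and centre a gate of `gate_width` samples on it.
    Returns (start, end) sample indices of the gate."""
    peak = skip + int(np.argmax(np.abs(a_scan[skip:])))
    start = max(0, peak - gate_width // 2)
    return start, start + gate_width
```

For example, with a front-wall echo near sample 5 and a defect echo at sample 40, the gate is placed around sample 40 regardless of the defect's depth, which is the point of the modification.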

  11. Free-viewpoint image generation from a video captured by a handheld camera

    NASA Astrophysics Data System (ADS)

    Takeuchi, Kota; Fukushima, Norishige; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki

    2011-03-01

    In general, free-viewpoint images are generated from images captured by a camera array aligned along a straight line or a circle. A camera array can capture a synchronized dynamic scene, but it is expensive and requires great care to align exactly. In contrast, a handheld camera is easily available and can easily capture a static scene. We propose a method that generates free-viewpoint images from a video captured by a handheld camera in a static scene. To generate free-viewpoint images, view images from several viewpoints and the camera pose/position of each viewpoint are needed. In one previous work, a checkerboard pattern had to be captured in every frame to calculate these parameters; in another, a pseudo-perspective projection was assumed to estimate them, which limits camera movement. In this paper, we calculate these parameters by structure from motion. Additionally, we propose a method for selecting reference images from the many captured frames, and a method that uses projective block matching and a graph-cuts algorithm with reconstructed feature points to estimate the depth map of a virtual viewpoint.

  12. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

    There are many methods to acquire multispectral images, but a dynamically band-selective, area-scan multispectral camera has not been developed yet. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image signal processing; firmware and application software were also developed. The remarkable improvements over a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation was required to visualize the ray paths inside the prism when redesigning the image splitter, and the dimensions of the splitter were determined by simulation with BK7 glass and no dichroic coating; these properties were chosen so that the full wavelength range reaches all film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without displacing the image sensors on the film plane. The developed 3CCD camera was evaluated for detecting scab and bruises on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that require exchanging bandpass filters.

  13. High performance imaging streak camera for the National Ignition Facility.

    PubMed

    Opachich, Y P; Kalantar, D H; MacPhee, A G; Holder, J P; Kimbrough, J R; Bell, P M; Bradley, D K; Hatch, B; Brienza-Larsen, G; Brown, C; Brown, C G; Browning, D; Charest, M; Dewald, E L; Griffin, M; Guidry, B; Haugh, M J; Hicks, D G; Homoelle, D; Lee, J J; Mackinnon, A J; Mead, A; Palmer, N; Perfect, B H; Ross, J S; Silbernagel, C; Landen, O

    2012-12-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experimental requirements of the National Ignition Campaign and to perform reliably in conditions that produce high electromagnetic interference. A train of ultra-violet timing markers has been added to the diagnostic to calibrate the temporal axis of the instrument, and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper, together with the detector efficiency and cathode measurements. PMID:23278024

  14. A Compton camera for spectroscopic imaging from 100keV to 1MeV

    NASA Astrophysics Data System (ADS)

    Earnhart, Jonathan Raby Dewitt

    The objective of this work is to investigate Compton camera technology for spectroscopic imaging of gamma rays in the 100keV to 1MeV range. An efficient, specific purpose Monte Carlo code was developed to investigate the image formation process in Compton cameras. The code is based on a pathway sampling technique with extensive use of variance reduction techniques. The code includes detailed Compton scattering physics, including incoherent scattering functions, Doppler broadening, and multiple scattering. Experiments were performed with two different camera configurations for a scene containing a 75Se source and a 137Cs source. The first camera was based on a fixed silicon detector in the front plane and a CdZnTe detector mounted in the stage. The second camera configuration was based on two CdZnTe detectors. Both systems were able to reconstruct images of 75Se, using the 265keV line, and 137Cs, using the 662keV line. Only the silicon-CdZnTe camera was able to resolve the low intensity 400keV line of 75Se. Neither camera was able to reconstruct the 75Se source location using the 136keV line. The energy resolution of the silicon-CdZnTe camera system was 4% at 662keV. This camera reproduced the location of the 137Cs source by event circle image reconstruction with angular resolutions of 10° for a source on the camera axis and 14° for a source 30° off axis. Typical detector pair efficiencies were measured as 3 x 10-11 at 662keV. The dual CdZnTe camera had an energy resolution of 3.2% at 662keV. This camera reproduced the location of the 137Cs source by event circle image reconstruction with angular resolutions of 8° for a source on the camera axis and 12° for a source 20° off axis. Typical detector pair efficiencies were measured as 7 x 10-11 at 662keV. Of the two prototype camera configurations tested, the silicon-CdZnTe configuration had superior imaging characteristics. 
This configuration is less sensitive to effects caused by source decay cascades and random coincident events. An implementation of the expectation maximum-maximum likelihood reconstruction technique improved the angular resolution to 6° and reduced the background in all the images. The measured counting rates were a factor of two low for the silicon-CdZnTe camera, and up to a factor of four high for the dual CdZnTe camera compared to simulation. (Abstract shortened by UMI.)
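Event-circle reconstruction in both camera configurations rests on Compton kinematics: the energies deposited in the front and back detectors fix the scatter angle, and hence a cone of possible source directions. A minimal sketch (the function name and keV units are illustrative; m_e c² = 511 keV is the electron rest energy):

```python
import math

MEC2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle_deg(e1_kev, e2_kev):
    """Scatter angle (half-angle of the event cone) from the energy e1
    deposited by Compton scatter in the front detector and the energy e2
    absorbed in the back detector:
        cos(theta) = 1 - mec2 * (1/e2 - 1/(e1 + e2))
    """
    cos_t = 1.0 - MEC2_KEV * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("energies kinematically inconsistent with Compton scatter")
    return math.degrees(math.acos(cos_t))
```

For a 662 keV (137Cs) photon scattered through 90 degrees, the Compton formula gives a scattered energy of 662/(1 + 662/511) ≈ 288 keV; feeding the two deposited energies back into the function recovers the 90 degree cone angle.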

  15. Applications of the BAE SYSTEMS MicroIR uncooled infrared thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Wickman, Heather A.; Henebury, John J., Jr.; Long, Dennis R.

    2003-09-01

    MicroIR uncooled infrared imaging modules (based on VOx microbolometers), developed and manufactured at BAE SYSTEMS, are integrated into ruggedized, weatherproof camera systems and are currently supporting numerous security and surveillance applications. The introduction of uncooled thermal imaging has permitted the expansion of traditional surveillance and security perimeters. MicroIR cameras go beyond the imagery limits of visible and low-light short wavelength infrared sensors, providing continual, uninterrupted, high quality imagery both day and night. Coupled with an appropriate lens assembly, MicroIR cameras offer exemplary imagery performance that lends itself to a more comprehensive level of surveillance. With the current increased emphasis on security and surveillance, MicroIR Cameras are evolving as an unquestionably beneficial instrument in the security and surveillance arenas. This paper will elaborate on the attributes of the cameras, and discuss the development and the deployment, both present and future, of BAE SYSTEMS MicroIR Cameras.

  16. Modelling of Camera Phone Capture Channel for JPEG Colour Barcode Images

    NASA Astrophysics Data System (ADS)

    Tan, Keng T.; Ong, Siong Khai; Chai, Douglas

    As camera phones have permeated our everyday lives, two-dimensional (2D) barcodes have attracted researchers and developers as a cost-effective ubiquitous computing tool. A variety of 2D barcodes and their applications have been developed. Often, only monochrome 2D barcodes are used due to their robustness in the uncontrolled operating environment of camera phones. However, we are seeing an emerging use of colour 2D barcodes for camera phones. Nonetheless, using a greater multitude of colours introduces errors that can negatively affect the robustness of barcode reading. This is especially true when developing a 2D barcode for camera phones that capture and store barcode images in the baseline JPEG format. This paper presents one aspect of the errors introduced by such camera phones by modelling the camera phone capture channel for JPEG colour barcode images.

  17. Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie

    2011-01-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
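A toy version of such a threshold-and-ratio cloud mask might look like the following. The thresholds, channel names, and the spectral-flatness ratio test are illustrative assumptions, not the EPIC algorithm's actual values:

```python
import numpy as np

def epic_like_cloud_mask(refl_vis, refl_uv, vis_thresh=0.3, ratio_thresh=0.8):
    """Flag a pixel as cloudy if its visible reflectance exceeds a
    threshold OR its UV/visible reflectance ratio is high (clouds are
    spectrally flat, while most cloud-free surfaces are much darker in
    the UV than in the visible). Inputs are reflectance arrays."""
    ratio = np.divide(refl_uv, refl_vis,
                      out=np.zeros_like(refl_vis), where=refl_vis > 0)
    return (refl_vis > vis_thresh) | (ratio > ratio_thresh)
```

A bright pixel passes the first test, a dim but spectrally flat pixel passes the second, and a dim pixel with a strong UV/visible contrast is left as clear sky.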

  18. Simplified picosecond streak image tube for designing inexpensive commercial cameras

    NASA Astrophysics Data System (ADS)

    Degtyareva, Valentina P.; Fedotov, V. I.; Korobkova, T. A.; Polikarkina, Nadejda D.; Prokhorov, Alexander M.; Schelev, Mikhail Y.; Smirnova, A. V.; Soldatov, N. F.; Titkov, Evgenij I.

    1993-01-01

    The current demand for inexpensive streak camera manufacturing leads to the need to develop a variety of relatively simple, low-cost image-converter tubes. One such tube, known as PIF-C, designed and manufactured in the Photoelectronics Department of the General Physics Institute (GPI), is now commercially available. Its experimentally measured time resolution has approached 1 ps in streak mode, and 3 ps in synchroscan mode at 82 MHz operating frequency. In single-frame mode at 100 ns exposure time, the spatial resolution over the 6 mm input area is within 15 lp/mm. The electron-optical magnification of the tube is 1.5×. PIF-C tubes may be supplied with one of the S1/S20/S25 photocathodes, fabricated on a borosilicate-glass, UV-glass, or MgF2 substrate. Its P11 phosphor screen is deposited onto a fiber-optic window. The EBI of the PIF-C/S1 tube is in the range of 5 · 10⁻¹⁰ A/cm².

  19. The Potential of Dual Camera Systems for Multimodal Imaging of Cardiac Electrophysiology and Metabolism

    PubMed Central

    Holcomb, Mark R.; Woods, Marcella C.; Uzelac, Ilija; Wikswo, John P.; Gilligan, Jonathan M.; Sidorov, Veniamin Y.

    2013-01-01

    Fluorescence imaging has become a common modality in cardiac electrodynamics. A single fluorescent parameter is typically measured. Given the growing emphasis on simultaneous imaging of more than one cardiac variable, we present an analysis of the potential of dual camera imaging, using as an example our straightforward dual camera system that allows simultaneous measurement of two dynamic quantities from the same region of the heart. The advantages of our system over others include an optional software camera calibration routine that eliminates the need for precise camera alignment. The system allows for rapid setup, dichroic image separation, dual-rate imaging, and high spatial resolution, and it is generally applicable to any two-camera measurement. This type of imaging system offers the potential for recording simultaneously not only transmembrane potential and intracellular calcium, two frequently measured quantities, but also other signals more directly related to myocardial metabolism, such as [K+]e, NADH, and reactive oxygen species, leading to the possibility of correlative multimodal cardiac imaging. We provide a compilation of dye and camera information critical to the design of dual camera systems and experiments. PMID:19657065

  20. Traffic monitoring with serial images from airborne cameras

    Microsoft Academic Search

    Peter Reinartz; Marie Lachaise; Elisabeth Schmeer; Thomas Krauss; Hartmut Runge

    2006-01-01

    The classical means to measure traffic density and velocity depend on local measurements from induction loops and other on site instruments. This information does not give the whole picture of the two-dimensional traffic situation. In order to obtain precise knowledge about the traffic flow of a large area, only airborne cameras or cameras positioned at very high locations (towers, etc.)

  1. Detection Algorithm for Color Image by Multiple Surveillance Camera under Low Illumination Based-on Fuzzy Corresponding Map

    Microsoft Academic Search

    Yutaka Hatakeyama; Masatoshi Makino; Akimichi Mitsuta; Kaoru Hirota

    2007-01-01

    An object detection algorithm for color dynamic images from two cameras is proposed for a real surveillance system under low illumination. It provides automatic calculation of a Fuzzy Corresponding Map and color similarity for lower-luminance conditions, which detects small chromatic regions in CCD camera images under low illumination. Experimental detection results for two dynamic images from real surveillance cameras

  2. Pantir - a Dual Camera Setup for Precise Georeferencing and Mosaicing of Thermal Aerial Images

    NASA Astrophysics Data System (ADS)

    Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.

    2015-03-01

    Research and monitoring in fields like hydrology and agriculture are applications of airborne thermal infrared (TIR) cameras, which suffer from low spatial resolution and low quality lenses. Common ground control points (GCPs), lacking thermal activity and being relatively small in size, cannot be used in TIR images. Precise georeferencing and mosaicing however is necessary for data analysis. Adding a high resolution visible light camera (VIS) with a high quality lens very close to the TIR camera, in the same stabilized rig, allows us to do accurate geoprocessing with standard GCPs after fusing both images (VIS+TIR) using standard image registration methods.

  3. Sensor Fingerprint Digests for Fast Camera Identification from Geometrically Distorted Images

    E-print Network

    Fridrich, Jessica

    Sensor Fingerprint Digests for Fast Camera Identification from Geometrically Distorted Images. In camera identification using sensor fingerprint, it is absolutely essential that the fingerprint ... to a geometrical transformation, fingerprint detection becomes significantly more complicated. Besides

  4. Locating and Decoding EAN-13 Barcodes from Images Captured by Digital Cameras

    Microsoft Academic Search

    Douglas Chai; Florian Hock

    2005-01-01

    In this paper, we propose a vision-based technique to locate and decode EAN-13 barcodes from images captured by digital cameras. The ultimate aim of our approach is to enable electronic devices with cameras such as mobile phones and personal digital assistants (PDAs) to act as a barcode reader

  5. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  6. Adaptive image feature prediction and control for visual tracking with a hand-eye coordinated camera

    Microsoft Academic Search

    J. T. Feddema; C. S. G. Lee

    1990-01-01

    An adaptive method for visually tracking a known moving object with a single mobile camera is described. The method differs from previous methods of motion estimation in that both the camera and the object are moving. The objective is to predict the location of features of the object on the image plane based on past observations and past control inputs

  7. Detecting and tracking moving objects in real time images via active camera

    Microsoft Academic Search

    Murat Sürücü; Dilek Sürücü; R. Hacıoğlu

    2010-01-01

    In this work, methods for tracking moving objects in real-time images are examined, with emphasis on active camera hardware and software. An integrated system has been built that combines moving-object detection and tracking methods, noise reduction, and an active camera with pan/tilt servo motors. Moving-object tracking is achieved by adapting the frame-difference technique on two new

  8. AN EVALUATION OF A FAST, SCINTILLATOR-POLAROID FILM CAMERA FOR NEUTRON IMAGE DETECTION

    Microsoft Academic Search

    H. Berger; I. R. Kraska

    1962-01-01

    A new Polaroid camera for detecting thermal neutron images was subjected to a series of tests to determine its usefulness for neutron radiography. Because the response of this detector is extremely fast and convenient, the camera can be highly recommended for alignment procedures and for preliminary radiographic exposures. The high contrast resolution capability in the order

  9. Comparison of three thermal cameras with canine hip area thermographic images.

    PubMed

    Vainionpää, Mari; Raekallio, Marja; Tuhkalainen, Elina; Hänninen, Hannele; Alhopuro, Noora; Savolainen, Maija; Junnila, Jouni; Hielm-Björkman, Anna; Snellman, Marjatta; Vainio, Outi

    2012-12-01

    The objective of this study was to compare the method of thermography using three thermal cameras of different resolutions and basic software for thermographic images, separating the two persons taking the thermographic images (thermographers) from the three persons interpreting them (interpreters). This was accomplished by studying the repeatability between thermographers and between interpreters. Forty-nine client-owned dogs of 26 breeds were enrolled in the study. The thermal cameras used were of different resolutions: 80 × 80, 180 × 180 and 320 × 240 pixels. Two trained thermographers took thermographic images of the hip area in all dogs using all three cameras, for a total of six thermographic images per dog. The thermographic images were analyzed using appropriate computer software, FLIR QuickReport 2.1. Three trained interpreters independently evaluated the mean temperatures of the hip joint areas in the six thermographic images for each dog. The repeatability between thermographers was >0.975 with the two higher-resolution cameras and 0.927 with the lowest-resolution camera. The repeatability between interpreters was >0.97 with each camera; thus, the between-interpreter variation was small. The repeatability between thermographers and interpreters was considered high enough to encourage further studies of thermographic imaging in dogs. PMID:22785576
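
    The abstract quotes repeatability coefficients (e.g. >0.975) without naming the statistic. As a minimal, purely illustrative stand-in, agreement between two raters' mean-temperature readings can be summarized with a Pearson correlation; the paper's actual repeatability measure may differ.

```python
def pearson_r(a, b):
    """Pearson correlation between two equal-length series of readings."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5
```

    A value near 1 indicates that the two raters rank and scale the dogs' hip temperatures consistently.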

  10. Effects of environment factors on imaging performance of long focal length space camera

    NASA Astrophysics Data System (ADS)

    Guo, Quanfeng; Jin, Guang; Dong, Jihong; Li, Wei; Li, Yanchun; Wang, Haiping; Wang, Kejun; Zhao, Weiguo

    2012-10-01

    During development, testing, launch, and on-orbit operation, a space camera must withstand the shock of external loads and a changing environment. The optical performance of a long-focal-length space camera is largely determined by external mechanical loads and ambient temperature, and the camera's performance results from the interaction of these environmental factors. The performance of the optical system should be forecast accurately when a modern optical instrument is designed. In this paper, the research methods are reviewed first, and the related technologies are then described. Methods for analyzing how environmental temperature and structural characteristics affect space camera imaging performance are also discussed.

  11. 2D Hyperspectral Frame Imager Camera Data in Photogrammetric Mosaicking

    NASA Astrophysics Data System (ADS)

    Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J.; Soukkamäki, J.

    2013-08-01

    A new 2D hyperspectral frame camera system has been developed by VTT (Technical Research Center of Finland) and Rikola Ltd. It is a frame-based, very light camera with an RGB-NIR sensor, suitable for lightweight, cost-effective UAV planes. MosaicMill Ltd. has converted the camera data into a proper format for photogrammetric processing, and the camera's geometrical accuracy and stability have been evaluated to guarantee the accuracies required for end-user applications. MosaicMill Ltd. has also applied its EnsoMOSAIC technology to process the hyperspectral data into orthomosaics. This article describes the main steps and results of applying a hyperspectral sensor in orthomosaicking. The most promising results, as well as challenges in agriculture and forestry, are also described.

  12. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    NASA Astrophysics Data System (ADS)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

    Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing the RVD mission in manned space engineering, especially when the automatic control system is out of work. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera, from which the relative position and attitude of the chase and target spacecraft can be determined. Therefore, the size, position, brightness and shadow of the target in the TV camera image are key to guaranteeing the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and lighting conditions during the RVD process is discussed. First, the basic principle of capturing the image of the cross drone on the target spacecraft by the TV camera is analyzed theoretically, based on which the strategy of manual-control RVD is discussed in detail. Second, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented, the brightness of and reflection by the target spacecraft under different lighting conditions are described, and the shadow cast on the cross drone by the chase or target spacecraft is analyzed. Third, a method for predicting the on-orbit TV camera image for a given orbit and lighting condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness and shadow of the target spacecraft in the TV camera image for a typical orbit are simulated. Comparison of the simulated images with real images captured by the TV camera on the Shenzhou manned spaceship shows that the prediction method is reasonable.
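
    The displayed-size/distance relationship mentioned above follows from the pinhole model: a target of size S at range Z images to a size of f·S/Z on the sensor. A minimal sketch (the function name and unit choices are illustrative, not from the paper):

```python
def apparent_size_px(target_size_m, distance_m, focal_mm, pixel_um):
    """Pinhole projection: on-sensor size f*S/Z, converted to pixels."""
    image_mm = focal_mm * target_size_m / distance_m
    return image_mm / (pixel_um / 1000.0)
```

    For example, a 1 m feature seen at 100 m through a 50 mm lens spans 0.5 mm on the sensor, i.e. 50 pixels at a 10 μm pitch; halving the range doubles the displayed size, which is the cue the pilot reads from the TV image.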

  13. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector with 1024 by 1024 pixels provides the stability for a multiyear mission and meets its high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm, covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  14. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows seeing a concealed object without contact with a person, and the camera is non-dangerous to a person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic specifies the possibilities of detecting a concealed object: the minimal size of the object, the maximal distance of detection, and the image quality. Computer processing of the THz image may improve the image quality many times over without any additional engineering effort; therefore, developing modern computer code for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that will allow seeing a banknote in a person's pocket without any real contact. Modern algorithms for computer processing of THz images also allow seeing an object inside the human body using a temperature trace on the human skin. This circumstance substantially enhances the opportunities for passive THz camera applications in counterterrorism. We demonstrate the opportunities, achieved at present, for the detection both of concealed objects and of clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation both of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  15. Development of gamma ray imaging cameras. Progress report for second year

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general-purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image would be displayed and would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general-purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third-year effort. 8 refs.

  16. Single camera imaging system for color and near-infrared fluorescence image guided surgery.

    PubMed

    Chen, Zhenyue; Zhu, Nan; Pacheco, Shaun; Wang, Xia; Liang, Rongguang

    2014-08-01

    Near-infrared (NIR) fluorescence imaging systems have been developed for image guided surgery in recent years. However, current systems are typically bulky and work only when the surgical light in the operating room (OR) is off. We propose a single camera imaging system that is capable of capturing NIR fluorescence and color images under normal surgical lighting illumination. Using a new RGB-NIR sensor and synchronized NIR excitation illumination, we have demonstrated that the system can acquire both color information and fluorescence signal with high sensitivity under normal surgical lighting illumination. The experimental results show that an ICG sample with a concentration of 0.13 μM can be detected when the excitation irradiance is 3.92 mW/cm^2 at an exposure time of 10 ms. PMID:25136502

  17. A 58 x 62 pixel Si:Ga array camera for 5 - 14 micron astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Folz, W. C.; Woods, L. A.; Wooldridge, J. B.

    1989-01-01

    A new infrared array camera system has been successfully applied to high-background 5-14 micron astronomical imaging photometry observations, using a hybrid 58 x 62 pixel Si:Ga array detector. The off-axis reflective optical design, incorporating a parabolic camera mirror, circular variable filter wheel, and cold aperture stop, produces diffraction-limited images with negligible spatial distortion and minimum thermal background loading. The camera electronic system architecture is divided into three subsystems: (1) a high-speed analog front end, including a 2-channel preamp module, array address timing generator, and bias power supplies; (2) two 16-bit, 3 microsec-per-conversion A/D converters interfaced to an arithmetic array processor; and (3) an LSI 11/73 camera control and data analysis computer. The background-limited observational noise performance of the camera at the NASA/IRTF telescope is NEFD (1 sigma) = 0.05 Jy/pixel min^(1/2).
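
    A NEFD quoted per square root of a minute implies, for background-limited operation, per-pixel noise that averages down as the square root of integration time. A one-line illustration of the scaling (not code from the instrument):

```python
def nefd_after(nefd_per_sqrt_min, minutes):
    """1-sigma noise-equivalent flux density after `minutes` of integration,
    assuming background-limited (white) noise that averages as 1/sqrt(t)."""
    return nefd_per_sqrt_min / minutes ** 0.5
```

    For example, four minutes of integration halves the quoted 0.05 Jy per-pixel noise to 0.025 Jy.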

  18. Lunar TecTonics New LROC images show recent

    E-print Network

    Rhoads, James


  19. Development of an ultra-violet digital camera for volcanic SO2 imaging

    Microsoft Academic Search

    G. J. S. Bluth; J. M. Shannon; I. M. Watson; A. J. Prata; V. J. Realmuto

    2007-01-01

    In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images

  20. Remarks on 3D human body posture reconstruction from multiple camera images

    Microsoft Academic Search

    Yusuke Nagasawa; Takako Ohta; Yukiko Mutsuji; Kazuhiko Takahashi; Masafumi Hashimoto

    2007-01-01

    This paper proposes a human body posture estimation method based on back projection of human silhouette images extracted from multi-camera images. To achieve real-time 3D human body posture estimation, a server-client system is introduced into the multi-camera system, and improvements to the background subtraction and back projection are investigated. To evaluate the feasibility of the proposed method, 3D estimation experiments of

  1. Correcting spatial distortion and non-uniformity in planar images from γ-Camera systems

    Microsoft Academic Search

    D. Thanasas; D. Maintas; E. Georgiou; N. Giokaris; A. Karabarbounis; C. N. Papanicolas; E. Stiliaris

    2008-01-01

    In this work a correction method for the spatial distortion and non-uniformity of planar images is presented. It is based on an event-by-event correction algorithm suitable for images obtained from small Field of View (FOV) γ-Camera systems which are equipped with a Position Sensitive PhotoMultiplier Tube (PSPMT). In our study, the γ-Camera system consists of a 3 inch PSPMT with

  2. Reconstruction of face image from security camera based on a measurement of space variant PSF

    Microsoft Academic Search

    Tadashi Ito; Hitoshi Hoshino; Yusaku Fujii; Naoya Ohta

    2009-01-01

    Images recorded by security cameras are often severely degraded due to a dirty lens or secular distortion of the recording system. To restore these images, full determination of the space-variant point spread function (PSF) is required. To measure the PSF, we have proposed a method using a liquid crystal display, and shown some experimental results of restoring the images by a

  3. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.
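
    The lossless/lossy distinction studied above can be illustrated with a standard-library round trip: a lossless codec reconstructs the stored image bytes exactly. zlib here is purely a stand-in; the ESC's actual compression algorithms are not specified in this abstract.

```python
import zlib

def lossless_roundtrip(raw: bytes):
    """Compress image bytes losslessly and decompress them back."""
    packed = zlib.compress(raw, level=9)
    return packed, zlib.decompress(packed)
```

    A lossy algorithm would instead discard detail to reach a higher ratio, which is the trade-off motivating the comparison described above.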

  4. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    SciTech Connect

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-08-08

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of a m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28{sup o}) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  5. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

    As is well known, application of the passive THz camera to security problems is a very promising approach. It allows seeing a concealed object without contact with a person, and the camera is non-dangerous to a person. The efficiency of using the passive THz camera depends on its temperature resolution. This characteristic specifies the possibilities of detecting a concealed object: the minimal size of the object, the maximal distance of detection, and the image detail. One probable way to enhance image quality is computer processing of the image. Using computer processing of the THz image of objects concealed on the human body, one may improve it many times over; consequently, the instrumental resolution of such a device may be increased without any additional engineering effort. We demonstrate new possibilities for seeing clothing details that the raw images produced by the THz cameras do not reveal. We achieve good image quality by applying various spatial filters, with the aim of demonstrating that the processed images do not depend on the particular math operations. This result demonstrates the feasibility of seeing such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
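
    The spatial filters applied are not identified beyond being interchangeable. One common choice for suppressing isolated hot pixels in a low-resolution thermal image is a 3×3 median filter; a pure-Python sketch (not the authors' code):

```python
def median3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = sorted(img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = win[4]             # median of the 9 samples
    return out
```

    The median is edge-preserving, unlike a Gaussian blur, which is why it is a popular first step before contrast enhancement of THz frames.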

  6. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  7. A Low-Cost Imaging Method to Avoid Hand Shake Blur for Cell Phone Cameras

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; Chong, Jong-Wha

    In this letter, a novel imaging method to reduce the hand shake blur of a cell phone camera without using frame memory is proposed. The method improves the captured image in real time through the use of two additional preview images whose parameters can be calculated in advance and stored in a look-up table. The method does not require frame memory, and thus it can significantly reduce the chip size. The scheme is suitable for integration into a low-cost image sensor of a cell phone camera.

  8. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  9. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T. [LLNL; Palmer, N. E. [LLNL; Schneider, M. B. [LLNL; Bell, P. M. [LLNL; Sims, G. [Spectral Instruments; Toerne, K. [Spectral Instruments; Rodenburg, K. [Spectral Instruments; Croft, M. [Spectral Instruments; Haugh, M. J. [NSTec; Charest, M. R. [NSTec; Romano, E. D. [NSTec; Jacoby, K. D. [NSTec

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  10. Proposal for real-time terahertz imaging system with palm-size terahertz camera and compact quantum cascade laser

    E-print Network

    Oda, Naoki

    This paper describes a real-time terahertz (THz) imaging system, using the combination of a palm-size THz camera with a compact quantum cascade laser (QCL). The THz camera contains a 320x240 microbolometer focal plane array ...

  11. 3-D imaging of complex source fields with a Compton camera imager

    SciTech Connect

    McKisson, J.E.; Haskins, P.S.; Henderson, D.P. Jr. [Radiation Technologies, Inc., Alachua, FL (United States)] and others

    1996-12-31

    Many practical applications of Compton Camera Imager (CCI) systems require imaging of extended sources in diffuse backgrounds. This paper presents results of simulations of CCI system data from extended source distributions with and without a diffuse background. The simulations were performed using a biased photon transport Monte Carlo code which was validated with MCNP results. The reconstruction code used an iterative back-projection technique based upon the Expectation Maximization Maximum Likelihood method. Examples are shown of two clearly resolved cylindrical sources, an extended source both with and without background, and extended sources at multiple depths within the volume. The results demonstrate the ability to image extended sources in high backgrounds with a limited number of events.
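
    The Expectation Maximization Maximum Likelihood reconstruction named above has a compact multiplicative update once a system matrix is given. A toy emission-tomography sketch of generic MLEM (not the paper's reconstruction code):

```python
def mlem(counts, sysmat, n_iter=50):
    """MLEM: counts[d] = events in detector bin d;
    sysmat[d][j] = probability an emission from voxel j lands in bin d."""
    nvox = len(sysmat[0])
    x = [1.0] * nvox                                    # uniform start
    sens = [sum(row[j] for row in sysmat) for j in range(nvox)]
    for _ in range(n_iter):
        proj = [sum(sysmat[d][j] * x[j] for j in range(nvox))
                for d in range(len(counts))]            # forward projection
        for j in range(nvox):
            ratio = sum(sysmat[d][j] * counts[d] / proj[d]
                        for d in range(len(counts)) if proj[d] > 0)
            x[j] *= ratio / sens[j]                     # multiplicative update
    return x
```

    In a CCI the "bins" correspond to measured Compton cones, which makes the effective system matrix very large and motivates the biased Monte Carlo modeling described above.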

  12. In-plane displacement and strain measurements using a camera phone and digital image correlation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2014-05-01

    In-plane displacement and strain measurements of planar objects by processing the digital images captured by a camera phone using digital image correlation (DIC) are performed in this paper. As a convenient communication tool for everyday use, the principal advantages of a camera phone are its low cost, easy accessibility, and compactness. However, when used as a two-dimensional DIC system for mechanical metrology, the assumed imaging model of a camera phone may be slightly altered during the measurement process due to camera misalignment, imperfect loading, sample deformation, and temperature variations of the camera phone, which can produce appreciable errors in the measured displacements. In order to obtain accurate DIC measurements using a camera phone, the virtual displacements caused by these issues are first identified using an unstrained compensating specimen and then corrected by means of a parametric model. The proposed technique is first verified using in-plane translation and out-of-plane translation tests. Then, it is validated through a determination of the tensile strains and elastic properties of an aluminum specimen. Results of the present study show that accurate DIC measurements can be conducted using a common camera phone provided that an adequate correction is employed.
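
    At its core, DIC matches a reference subset against the deformed image, commonly by maximizing a zero-normalized cross-correlation (ZNCC). An integer-pixel sketch under that assumption (the paper's parametric error-correction model and subpixel refinement are omitted):

```python
def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length sample lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def subset(img, x0, y0, s):
    """Flatten an s-by-s subset with top-left corner (x0, y0)."""
    return [img[y][x] for y in range(y0, y0 + s) for x in range(x0, x0 + s)]

def match_displacement(ref, defm, x0, y0, s, search=3):
    """Integer-pixel DIC: the (u, v) shift maximizing ZNCC between the
    reference subset and subsets of the deformed image."""
    tpl = subset(ref, x0, y0, s)
    best, best_c = (0, 0), -2.0
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            c = zncc(tpl, subset(defm, x0 + u, y0 + v, s))
            if c > best_c:
                best_c, best = c, (u, v)
    return best
```

    A real DIC system then refines the match to subpixel accuracy (e.g. by correlation-surface interpolation) and converts the displacement field to strains; the parametric correction discussed above removes the virtual displacements before that step.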

  13. Advanced camera image data acquisition system for Pi-of-the-Sky

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Maciej; Kasprowicz, Grzegorz; Pozniak, Krzysztof; Romaniuk, Ryszard; Wrochna, Grzegorz

    2008-11-01

    The paper describes a new generation of high-performance, remotely controlled CCD cameras designed for astronomical applications. A completely new camera PCB was designed, manufactured, tested and commissioned. The CCD chip was positioned differently than in the previous design, resulting in better performance of the astronomical video data acquisition system. The camera was built around a low-noise, 4-Mpixel CCD circuit by STA. Thanks to open software solutions and an FPGA circuit (Altera Cyclone EP1C6), the electronic circuit of the camera is highly parameterized, reconfigurable, and modular in comparison with the first-generation solution. New algorithms were implemented in the FPGA chip. The camera system uses the following electronic components: a Cypress CY7C68013A microcontroller (8051 core), an Analog Devices AD9826 image processor, a Realtek RTL8169s Gigabit Ethernet interface, Atmel AT45DB642 memory, and an Atmel AT91SAM9260 microprocessor with an ARM926EJ-S core. Software for the camera, its remote control, and image data acquisition is based entirely on open-source platforms, using the ISI image interface, the V4L2 API, the AMBA/AHB data bus, and the INDI protocol. The camera will be replicated in 20 pieces and is designed for continuous on-line, wide-angle observations of the sky in the Pi-of-the-Sky research program.

  14. Compact camera for multispectral and conventional imaging based on patterned filters.

    PubMed

    Skauli, Torbjørn; Torkildsen, Hans Erling; Nicolas, Stephane; Opsahl, Thomas; Haavardsholm, Trym; Kåsen, Ingebjørg; Rognmo, Atle

    2014-05-01

    A multispectral camera concept is presented. The concept is based on a patterned filter in the focal plane, combined with scanning of the field of view. The filter layout has stripes of different bandpass filters extending orthogonally to the scan direction. The pattern of filter stripes is such that all bands are sampled multiple times, while minimizing the total duration of the sampling of a given scene point. As a consequence, the filter needs only a small part of the area of an image sensor; the remaining area can be used for conventional 2D imaging. A demonstrator camera has been built with six bands in the visible and near infrared, as well as a panchromatic 2D imaging capability. Image recording and reconstruction are demonstrated, but the quality of image reconstruction is expected to be a main challenge for systems based on this concept. An important advantage is that the camera can potentially be made very compact and low cost. It is shown that, under reasonable assumptions, the proposed camera concept can be much smaller than a conventional imaging spectrometer: in principle, smaller in volume by a factor on the order of several hundred while collecting the same amount of light per multispectral band. This makes the proposed camera concept very interesting for small airborne platforms and other applications requiring compact spectral imagers. PMID:24921891

  15. Suite of proposed imaging performance metrics and test methods for fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Lock, Andrew; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements for TIC as used by the fire service. This standard will include performance requirements for TIC design robustness and image quality. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods based on the harsh operating environment and limitations of use particular to the fire service has been proposed for inclusion in the standard. The performance metrics include large area contrast, effective temperature range, spatial resolution, nonuniformity, and thermal sensitivity. Test methods to measure TIC performance for these metrics are in various stages of development. An additional procedure, image recognition, has also been developed to facilitate the evaluation of TIC design robustness. The pass/fail criteria for each of these imaging performance metrics are derived from perception tests in which image contrast, brightness, noise, and spatial resolution are degraded to the point that users can no longer consistently perform tasks involving TIC due to poor image quality.

  16. Image quality evaluation of color displays using a Foveon color camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro

    2007-03-01

    This paper presents preliminary data on the use of a color camera for Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used was mostly 12 × 12 camera pixels per display pixel, although an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200; only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency seems lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.

  17. DEFINITION OF AIRWAY COMPOSITION WITHIN GAMMA CAMERA IMAGES

    EPA Science Inventory

    The efficacies of inhaled pharmacologic drugs in the prophylaxis and treatment of airway diseases could be improved if particles were selectively directed to appropriate sites. In the medical arena, planar gamma scintillation cameras may be employed to study factors affecting such...

  18. Camera Animation

    NSDL National Science Digital Library

    A general discussion of the use of cameras in computer animation. This section includes principles of traditional film techniques and suggestions for the use of a camera during an architectural walkthrough. This section includes html pages, images and one video.

  19. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. Uniform R, G and B color patterns were shown on the display under study and imaged with a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed as the weighted sum of the R, G, B and dark-screen images, and the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation and suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
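    The synthetic-image NPS procedure can be sketched as follows. The luminance weights below are the Rec. 709 coefficients, an assumption for illustration; the paper derives its weights from the colorimeter calibration:

```python
import numpy as np

def synthetic_intensity(r, g, b, dark, w=(0.2126, 0.7152, 0.0722)):
    """Weighted sum of the R, G, B camera images minus the dark-screen image.
    The Rec. 709 luminance weights used here are an assumed stand-in for the
    calibration-derived weights."""
    return w[0] * r + w[1] * g + w[2] * b - dark

def nps_2d(img):
    """2D noise power spectrum of a uniform image patch (mean-subtracted)."""
    noise = img - img.mean()
    return np.abs(np.fft.fft2(noise)) ** 2 / noise.size

# synthetic uniform patches with additive Gaussian noise (sigma = 2 counts)
rng = np.random.default_rng(1)
shape = (64, 64)
r = 100 + rng.normal(0, 2, shape)
g = 100 + rng.normal(0, 2, shape)
b = 100 + rng.normal(0, 2, shape)
img = synthetic_intensity(r, g, b, dark=np.zeros(shape))
nps = nps_2d(img)
```

By Parseval's theorem the NPS integrates to the noise variance of the synthetic image, a useful sanity check on the normalization.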

  20. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    SciTech Connect

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and fast-readout, fast-shuttered cameras is discussed, and their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  1. Design Considerations Of A Compton Camera For Low Energy Medical Imaging

    SciTech Connect

    Harkness, L. J.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Judson, D. S.; Nolan, P. J.; Oxley, D. C. [Department of Physics, University of Liverpool, Oliver Lodge Laboratory, Liverpool, UK L697ZE (United Kingdom); Lazarus, I.; Simpson, J. [STFC Daresbury Laboratory, Daresbury, Warrington, WA44AD (United Kingdom)

    2009-12-02

    Development of a Compton camera for low energy medical imaging applications is underway. The ProSPECTus project aims to utilize position sensitive detectors to generate high quality images using electronic collimation. This method has the potential to significantly increase the imaging efficiency compared with mechanically collimated SPECT systems, a highly desirable improvement on clinical systems. Design considerations encompass the geometrical optimisation and evaluation of image quality from the system which is to be built and assessed.

  2. Design Considerations Of A Compton Camera For Low Energy Medical Imaging

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Lazarus, I.; Judson, D. S.; Nolan, P. J.; Oxley, D. C.; Simpson, J.

    2009-12-01

    Development of a Compton camera for low energy medical imaging applications is underway. The ProSPECTus project aims to utilize position sensitive detectors to generate high quality images using electronic collimation. This method has the potential to significantly increase the imaging efficiency compared with mechanically collimated SPECT systems, a highly desirable improvement on clinical systems. Design considerations encompass the geometrical optimisation and evaluation of image quality from the system which is to be built and assessed.

  3. High-resolution position-sensitive proportional counter camera for radiochromatographic imaging

    SciTech Connect

    Schuresko, D.D.; Kopp, M.K.; Harter, J.A.; Bostick, W.D.

    1988-12-01

    A high-resolution proportional counter camera for imaging two-dimensional (2-D) distributions of radionuclides is described. The camera can accommodate wet or dry samples that are separated from the counter gas volume by a 6-µm Mylar membrane. Using 95% Xe-5% CO2 gas at 3-MPa pressure and electronic collimation based upon pulse energy discrimination, the camera's performance characteristics for ¹⁴C distributions are as follows: active area, 10 by 10 cm; position resolution, 0.5 mm; total background, 300 disintegrations per minute; count-rate capability, 10⁵ disintegrations per second. With computerized data acquisition, the camera is a significant improvement over present-day commercially available technology for imaging 2-D radionuclide distributions. (Note: This manuscript was completed in July 1983). 13 refs., 10 figs.

  4. Robust real time extraction of plane segments from time-of-flight camera images

    NASA Astrophysics Data System (ADS)

    Dalbah, Yosef; Koltermann, Dirk; Wahl, Friedrich M.

    2014-04-01

    We present a method that extracts plane segments from images of a time-of-flight camera. Future driver assistance systems rely on an accurate description of the vehicle's environment, and time-of-flight cameras can be used for environment perception and reconstruction. Since most structures in urban environments are planar, plane segments extracted from single camera images can be used for the creation of a global map. We present a method for real-time detection of planar surface structures from time-of-flight camera data. The concept is based on a planar surface segmentation that serves as the foundation for a subsequent global planar surface extraction. The evaluation demonstrates the ability of the described algorithm to detect planar surfaces from depth data of complex scenarios in real time. We compare our method to state-of-the-art planar surface extraction algorithms.
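    The paper's own segmentation algorithm is not given in the abstract; as an illustration of the basic building block, a minimal RANSAC plane fit on a depth-derived point cloud might look like this (a sketch, not the authors' method):

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.01, seed=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.
    Returns (unit normal n, offset d) with n . p = d, plus the inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ p0
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    if best is None:
        raise RuntimeError("no valid plane sample found")
    return best[0], best[1], best_inliers

# synthetic "depth" scene: a z = 1 plane plus scattered outliers
rng = np.random.default_rng(2)
plane_pts = np.column_stack([rng.uniform(-1, 1, (300, 2)), np.full(300, 1.0)])
outliers = rng.uniform(-1, 1, (50, 3))
pts = np.vstack([plane_pts, outliers])
n, d, mask = ransac_plane(pts, seed=3)
```

A real-time system would repeat this on the residual points to peel off successive plane segments, then merge segments across frames into the global map.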

  5. GPI/V.TEK streak/single-frame image converter camera

    NASA Astrophysics Data System (ADS)

    Lozovoi, Valerij I.; Postovalov, Valdis E.; Prokhorov, Alexander M.; Schelev, Mikhail Y.; Park, Seung-Han; Kim, Ung; Lee, Jae-Sun

    1993-01-01

    As part of the joint research projects between the General Physics Institute and Yonsei University/V.TEK Company, an experimental prototype of an image converter camera was designed and manufactured. The camera operates in both single-frame and single-shot streak modes. Single-frame exposures are varied in the 250 - 1000 ns range, while recording intervals in streak mode are adjusted within the 2 - 1000 ns range over a 25 mm-wide output screen area. Temporal resolution at maximum streak speed is better than 10 ps. Total camera gain is 5 × 10⁴. The camera is equipped with a specially designed PIF-V.1 image converter tube, with a choice of S1, S20, or S25 photocathodes fabricated onto molybdenum glass, UV glass, or MgF2 substrates.

  6. Focus and alignment using out-of-focus stellar images at the Dark Energy Camera

    NASA Astrophysics Data System (ADS)

    Roodman, Aaron

    2010-07-01

    The focus and alignment system of the prime focus Dark Energy Camera (DECam), for the Dark Energy Survey at the CTIO 4 meter Blanco telescope, is described. DECam includes eight 2K by 2K CCDs placed 1.5mm extra- and intra-focally for active control of focus and alignment, as well as for wavefront measurement. We describe an algorithm for out-of-focus star (donut) image analysis and present results on the use of donuts for focus and alignment. Results will be presented for both simulated DECam images and for images taken at the Blanco 4 meter with the current MosaicII camera.

  7. Microchannel plate pinhole camera for 20 to 100 keV x-ray imaging

    SciTech Connect

    Wang, C.L.; Leipelt, G.R.; Nilson, D.G.

    1984-10-03

    We present the design and construction of a sensitive pinhole camera for imaging suprathermal x-rays. The device consists of four filtered pinholes and a microchannel plate electron multiplier for x-ray detection and signal amplification. We report successful imaging of 20, 45, 70, and 100 keV x-ray emissions from fusion targets at our Novette laser facility. Such imaging reveals features of the transport of hot electrons and provides views deep inside the target.

  8. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infrared imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and the specific optics tailored to each camera's requirements.

  9. Optical characterization of UV multispectral imaging cameras for SO2 plume measurements

    NASA Astrophysics Data System (ADS)

    Stebel, K.; Prata, F.; Dauge, F.; Durant, A.; Amigo, A.

    2012-04-01

    Only a few years ago, spectral imaging cameras for SO2 plume monitoring were developed for remote sensing of volcanic plumes. We describe the development from a first camera using a single filter in the absorption band of SO2 to more advanced systems using several filters and an integrated spectrometer. The first system was based on the Hamamatsu C8484 UV camera (1344 x 1024 pixels) with high quantum efficiency in the UV region from 280 nm onward. At the heart of the second UV camera system, EnviCam, is a cooled Alta U47 camera, equipped with two on-band (310 and 315 nm) and two off-band (325 and 330 nm) filters. The third system again utilizes the uncooled Hamamatsu camera for faster sampling (~10 Hz) and a four-position filter wheel equipped with two 10 nm filters centered at 310 and 330 nm, a UV broadband view, and a blackened plate for dark-current measurement. Both cameras have been tested with lenses of different focal lengths. A co-aligned spectrometer provides a ~0.3 nm resolution spectrum within the field-of-view of the camera. We describe the ground-based imaging camera systems developed and utilized at our Institute. Custom-made cylindrical quartz calibration cells with 50 mm diameter, covering the entire field of view of the camera optics, are filled with various amounts of gaseous SO2 (typically between 100 and 1500 ppm•m). They are used for calibration and characterization of the cameras in the laboratory. We report on the procedures for monitoring and analyzing SO2 path-concentrations and fluxes, including a comparison of the calibration in the atmosphere using the SO2 cells versus the SO2 retrieval from the integrated spectrometer. The first UV cameras have been used to monitor ship emissions (Ny-Ålesund, Svalbard and Genova, Italy). The second generation of cameras was first tested for industrial stack monitoring during a field campaign close to the Rovinari (Romania) power plant in September 2010, revealing very high SO2 emissions (> 1000 ppm•m); these cameras are now used by students from several universities in Romania. The newest system has been tested for volcanic plume monitoring at Turrialba, Costa Rica in January 2011, at Merapi volcano, Indonesia in February 2011, at Lascar volcano, Chile in July 2011, and at Etna/Stromboli (Italy) in November 2011. Retrievals from some of these campaigns will be presented.
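    The cell-based calibration and path-concentration retrieval can be sketched with the standard two-band Beer-Lambert approach used by UV SO2 cameras. All numeric values below are hypothetical illustrations, not results from the campaigns:

```python
import numpy as np

def apparent_absorbance(on_band, off_band, bg_on, bg_off):
    """Per-pixel apparent SO2 absorbance (Beer-Lambert, two-band ratio):
    A = -ln[(I_on / I_bg_on) / (I_off / I_bg_off)].
    on_band/off_band: plume images near 310 nm / 330 nm;
    bg_on/bg_off: clear-sky background intensities at the same bands."""
    return -np.log((on_band / bg_on) / (off_band / bg_off))

def calibrate(absorbances, column_densities):
    """Through-origin linear fit A = k * S from calibration-cell data."""
    return (np.dot(absorbances, column_densities)
            / np.dot(column_densities, column_densities))

# hypothetical calibration-cell readings: column density (ppm.m) vs absorbance
cells = np.array([0.0, 100.0, 500.0, 1500.0])
A_meas = 4e-4 * cells                 # idealized linear response
k = calibrate(A_meas, cells)

# convert a (1 x 1) absorbance "image" to SO2 column density in ppm.m
A_img = apparent_absorbance(np.array([[0.9]]), np.array([[1.0]]),
                            np.array([[1.0]]), np.array([[1.0]]))
S_img = A_img / k
```

Summing the column densities across the plume cross-section and multiplying by the plume speed then yields the SO2 flux.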

  10. Flexible camera applications of an advanced uncooled microbolometer thermal imaging core

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Roy N.; Pongratz, Simon; Breen, Tom; Wickman, Heather; Klug, Ron; Gess, Aaron; Hays, John; Bastian, Jonathan; Hall, Greg; Arion, Tim; Owens, John; Siviter, David

    2004-04-01

    Since its introduction less than a year ago, many camera products and end-user applications have benefited from upgrading to the revolutionary BAE Systems MicroIRTM SCC500TM Standard Camera Core. This flexible, multi-resolution, uncooled, vanadium oxide (VOx) microbolometer-based imaging engine is delivering higher performance at a lower price to diverse applications with more unique requirements than previous generations of engines. These applications include firefighting, surveillance, security, navigation, weapon sights, missiles, space, automotive, and many others. This paper highlights several cameras, systems, and their applications to illustrate some of the real-world uses and benefits of these products.

  11. Multiple-plane particle image velocimetry using a light-field camera.

    PubMed

    Skupsch, Christoph; Brücker, Christoph

    2013-01-28

    Planar velocity fields in flows are determined simultaneously on parallel measurement planes by means of an in-house manufactured light-field camera. The planes are defined by illuminating light sheets with constant spacing. Particle positions are reconstructed from a single 2D recording taken by a CMOS camera equipped with a high-quality doublet lens array. The fast refocusing algorithm is based on synthetic-aperture particle image velocimetry (SAPIV). The reconstruction quality is tested via ray-tracing of synthetically generated particle fields. The introduced single-camera SAPIV is applied to a convective flow within a measurement volume of 30 × 30 × 50 mm³. PMID:23389157

  12. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    SciTech Connect

    Earnhart, J.R.D.

    1998-12-31

    A review of spectroscopic imaging issues, applications, and technology is presented. Compton cameras based on solid-state semiconductor detectors stand out as the best systems for the nondestructive assay of special nuclear materials. A camera for this application has been designed based on an efficient specific-purpose Monte Carlo code developed for this project. Preliminary experiments have been performed which demonstrate the validity of the Compton camera concept and the accuracy of the code. Based on these results, a portable prototype system is in development. Proposed future work is addressed.

  13. A high-resolution airborne four-camera imaging system for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  14. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    Microsoft Academic Search

    Lynn T. Dengel; Mitali J. More; Patricia G. Judy; Gina R. Petroni; Mark E. Smolkin; Patrice K. Rehm; Stan Majewski; Mark B. Williams; Craig L. Slingluff Jr

    2011-01-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary

  15. Flexible camera applications of an advanced uncooled microbolometer thermal imaging core

    Microsoft Academic Search

    Roy N. Rumbaugh; Simon Pongratz; Tom Breen; Heather Wickman; Ron Klug; Aaron Gess; John Hays; Jonathan Bastian; Greg Hall; Tim Arion; John Owens; David Siviter

    2004-01-01

    Since its introduction less than a year ago, many camera products and end-user applications have benefited from upgrading to the revolutionary BAE Systems MicroIRTM SCC500TM Standard Camera Core. This flexible, multi-resolution, uncooled, vanadium oxide (VOx) microbolometer based imaging engine is delivering higher performance at a lower price to diverse applications with more unique requirements than previous generations of engines. These

  16. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian (Livermore, CA); Vetter, Kai M. (Alameda, CA)

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  17. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
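    The "optimal ray projection" step can be illustrated by the standard least-squares intersection of viewing rays; this is a sketch of one plausible formulation, not necessarily LMDB Builder's exact method:

```python
import numpy as np

def triangulate(centers, dirs):
    """Least-squares 3D point closest to a set of rays.

    centers: (N, 3) camera centers; dirs: (N, 3) unit ray directions.
    Minimizes the sum of squared perpendicular distances to each ray by
    solving the normal equations  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) c_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# two rays that intersect exactly at the landmark (1, 2, 3)
centers = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 3.0]])
target = np.array([1.0, 2.0, 3.0])
dirs = np.array([(target - c) / np.linalg.norm(target - c) for c in centers])
p = triangulate(centers, dirs)
```

In the full pipeline this triangulation would alternate with pose refinement, each step using the other's latest estimates, as the abstract describes.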

  18. Detecting Image Splicing using Geometry Invariants and Camera Characteristics Consistency

    Microsoft Academic Search

    Yu-feng Hsu; Shih-fu Chang

    2006-01-01

    Recent advances in computer technology have made digital image tampering more and more common. In this paper, we propose an authentic vs. spliced image classification method making use of geometry invariants in a semi-automatic manner. For a given image, we identify suspicious splicing areas, compute the geometry invariants from the pixels within each region, and then estimate the

  19. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distressing feelings and to avoid potential drug-induced diseases, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-stared system was used for stimulating accommodation and fixating the imaging area. The illumination sources and imaging camera were moved in linkage for focusing and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-stared system reduced the defocus to less than the typical ocular depth of focus, so the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-stared system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the crucial spatial fidelity to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.
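    The reported Strehl ratio can be related to the residual wavefront error through the extended Maréchal approximation. A small sketch; the imaging wavelength below is a hypothetical value, not taken from the paper:

```python
import numpy as np

def strehl_marechal(sigma_wfe, wavelength):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2),
    valid for small residual RMS wavefront error sigma (same units as lambda)."""
    return np.exp(-(2 * np.pi * sigma_wfe / wavelength) ** 2)

# residual RMS wavefront error consistent with the reported S ~ 0.78,
# assuming a hypothetical 785 nm imaging wavelength
wavelength = 785e-9
sigma = wavelength / (2 * np.pi) * np.sqrt(-np.log(0.78))
```

Inverting the approximation this way gives a residual error of roughly λ/13 RMS, consistent with near-diffraction-limited performance.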

  20. Image based rendering with depth cameras: How many are needed?

    E-print Network

    Dragotti, Pier Luigi

    IMAGE BASED RENDERING WITH DEPTH CAMERAS: HOW MANY ARE NEEDED? Christopher Gilliam, James Pearson, Mike Brookes and Pier Luigi Dragotti Electrical and Electronic Engineering Department, Imperial College

  1. Removal of parasitic image due to metal specularity based on digital micromirror device camera

    NASA Astrophysics Data System (ADS)

    Zhao, Shou-Bo; Zhang, Fu-Min; Qu, Xing-Hua; Chen, Zhe; Zheng, Shi-Wei

    2014-06-01

    Visual inspection for a highly reflective surface is commonly faced with a serious limitation, which is that useful information on geometric construction and textural defects is covered by a parasitic image due to specular highlights. In order to solve the problem, we propose an effective method for removing the parasitic image. Specifically, a digital micromirror device (DMD) camera for programmable imaging is first described. The strength of the optical system is to process scene ray before image formation. Based on the DMD camera, an iterative algorithm of modulated region selection, precise region mapping, and multimodulation provides removal of the parasitic image and reconstruction of a correction image. Finally, experimental results show the performance of the proposed approach.

  2. A 5-18 micron array camera for high-background astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Folz, Walter C.; Woods, Lawrence A.; Varosi, Frank

    1992-01-01

    A new infrared array camera system using a Hughes/SBRC 58 x 62 pixel hybrid Si:Ga array detector has been successfully applied to high-background 5-18-micron astronomical imaging observations. The off-axis reflective optical system minimizes thermal background loading and produces diffraction-limited images with negligible spatial distortion. The noise equivalent flux density (NEFD) of the camera at 10 microns on the 3.0-m NASA Infrared Telescope Facility with broadband interference filters and 0.26 arcsec pixels is NEFD = 0.01 Jy/√min per pixel (1σ), and it operates at a frame rate of 30 Hz with no compromise in observational efficiency. The electronic and optical design of the camera, its photometric characteristics, examples of observational results, and techniques for successful array imaging in a high-background astronomical application are discussed.

  3. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (a-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
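
The calibration idea in this abstract can be sketched as a minimal two-point radiometric model; the linear counts-versus-radiance relation, the T⁴ radiance approximation, and all numeric values below are illustrative assumptions, not the paper's actual characterization.

```python
import numpy as np

# Hedged sketch (not the paper's actual calibration): a minimal two-point
# radiometric model in which sensor counts vary linearly with scene radiance.
# Radiance is approximated here by T**4 (Stefan-Boltzmann, emissivity = 1);
# the blackbody temperatures and count values below are made up.

def fit_radiometric_model(counts_ref, temps_ref_k):
    """Fit counts = a * T^4 + b from two (or more) blackbody references."""
    radiance = np.asarray(temps_ref_k, dtype=float) ** 4
    a, b = np.polyfit(radiance, np.asarray(counts_ref, dtype=float), 1)
    return a, b

def counts_to_temperature(counts, a, b):
    """Invert the linear model to estimate scene temperature in kelvin."""
    radiance = (np.asarray(counts, dtype=float) - b) / a
    return np.clip(radiance, 0.0, None) ** 0.25

# Calibrate against a 300 K and a 350 K blackbody (synthetic counts).
a, b = fit_radiometric_model(counts_ref=[8100.0, 15006.25],
                             temps_ref_k=[300.0, 350.0])
t_est = counts_to_temperature(11000.0, a, b)  # scene temperature estimate
```

A real camera would also fold in the dependence on the camera's own operating temperature, which is what the paper's component-level characterization addresses.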

  4. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

    Camera positioning and orientation is important to applications in domains such as transportation since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used to trigger the image capture system when a vehicle reaches the image capture zone. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE), can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. 
Thus for defining an output to be optimized for the designed experiment, a human defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of the vehicle. The DOE-SIR method was exercised for determining the optimal camera position and orientation for viewing vehicle rear seats over a variety of vehicle types. The resulting camera geometry was used on public roadway image capture resulting in over 95% acceptable rear seat images for human viewing.

  5. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness), and cost. Miniature cameras incorporating high resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of imaging-module parameters on the EDOF range were analyzed for a family of high resolution CMOS modules. The parameters include various optical properties of the imaging lens and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.

  6. Development of CCD cameras for soft x-ray imaging at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the "soft" channel and 3-5 keV for the "hard" channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  7. Development of a handheld fluorescence imaging camera for intraoperative sentinel lymph node mapping.

    PubMed

    Szyc, ?ukasz; Bonifer, Stefanie; Walter, Alfred; Jagemann, Uwe; Grosenick, Dirk; Macdonald, Rainer

    2015-05-01

    We present a compact fluorescence imaging system developed for real-time sentinel lymph node mapping. The device uses two near-infrared wavelengths to record fluorescence and anatomical images with a single charge-coupled device camera. Experiments on lymph node and tissue phantoms confirmed that the amount of dye in superficial lymph nodes can be better estimated due to the absorption correction procedure integrated in our device. Because of the camera head's small size and low weight, all accessible regions of tissue can be reached without the need for any adjustments. PMID:25585232

  8. Shading correction of camera captured document image with depth map information

    NASA Astrophysics Data System (ADS)

    Wu, Chyuan-Tyng; Allebach, Jan P.

    2015-01-01

    Camera modules have become more popular in consumer electronics and office products. As a consequence, people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives. However, it is easy to let undesired shading into the captured document image through the camera. Sometimes, this non-uniformity may degrade the readability of the contents. In order to mitigate this artifact, some solutions have been developed. But most of them are only suitable for particular types of documents. In this paper, we introduce a content-independent and shape-independent method that will lessen the shading effects in captured document images. We want to reconstruct the image such that the result will look like a document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the document surface and a look-up table strategy. We will first discuss the model and the assumptions that we used for the approach. Then, the process of creating and utilizing the look-up table will be described in the paper. We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a 2D image of the object. Some experimental results will be presented to show the effectiveness of our method. Both flat and curved surface document examples will be included.
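
As a rough illustration of the underlying goal (not the paper's depth-map and look-up-table pipeline), a flat-field style correction divides the captured page by an illumination estimate obtained from a blank reference under the same lighting; all array values below are synthetic.

```python
import numpy as np

# Hedged sketch of the general shading-correction idea, not the paper's
# depth-map/look-up-table method: estimate the illumination field from a
# blank reference page and divide it out, so the document appears to have
# been captured under uniform lighting.

shading = np.linspace(1.0, 0.4, 16).reshape(4, 4)   # light falls off across the page
document = np.full((4, 4), 200.0)                    # true, uniform page brightness

captured = document * shading                        # what the camera records
blank_ref = 255.0 * shading                          # blank white page, same lighting

# Divide out the shading; clip guards against division by zero in dark areas.
corrected = captured / np.clip(blank_ref, 1e-6, None) * 255.0
```

The paper's contribution is to recover the shading field from the document's 3D shape instead of requiring a separate blank reference capture.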

  9. Efficient Stereo Image Geometrical Reconstruction at Arbitrary Camera Settings from a Single Calibration

    PubMed Central

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Paulsen, Keith D.

    2015-01-01

    Camera calibration is central to obtaining a quantitative image-to-physical-space mapping from stereo images acquired in the operating room (OR). A practical challenge for cameras mounted to the operating microscope is maintenance of image calibration as the surgeon’s field-of-view is repeatedly changed (in terms of zoom and focal settings) throughout a procedure. Here, we present an efficient method for sustaining a quantitative image-to-physical space relationship for arbitrary image acquisition settings (S) without the need for camera re-calibration. Essentially, we warp images acquired at S into the equivalent data acquired at a reference setting, S0, using deformation fields obtained with optical flow by successively imaging a simple phantom. Closed-form expressions for the distortions were derived from which 3D surface reconstruction was performed based on the single calibration at S0. The accuracy of the reconstructed surface was 1.05 mm and 0.59 mm along and perpendicular to the optical axis of the operating microscope on average, respectively, for six phantom image pairs, and was 1.26 mm and 0.71 mm for images acquired with a total of 47 arbitrary settings during three clinical cases. The technique is presented in the context of stereovision; however, it may also be applicable to other types of video image acquisitions (e.g., endoscope) because it does not rely on any a priori knowledge about the camera system itself, suggesting the method is likely of considerable significance. PMID:25333148

  10. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  11. Geologic Analysis of the Surface Thermal Emission Images Taken by the VMC Camera, Venus Express

    NASA Astrophysics Data System (ADS)

    Basilevsky, A. T.; Shalygin, E. V.; Titov, D. V.; Markiewicz, W. J.; Scholten, F.; Roatsch, Th.; Fiethe, B.; Osterloh, B.; Michalik, H.; Kreslavsky, M. A.; Moroz, L. V.

    2010-03-01

    Analysis of Venus Monitoring Camera 1-µm images and surface emission modeling showed that the apparent emissivity at Chimon-mana tessera and Tuulikki volcano is higher than that of the adjacent plains; Maat Mons did not show any signature of ongoing volcanism.

  12. Controlling Camera and Lights for Intelligent Image Acquisition and Merging

    E-print Network

    Lespérance, Yves; Borzenko, Olena; Jenkin, Michael

    … controllable light sources. In these applications, the problem of parameter selection arises: how to choose … and merging problems for such systems. The prototype knowledge-based controller adjusts lighting

  13. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  14. Intraoperative coronary artery imaging with infrared camera in off-pump CABG

    Microsoft Academic Search

    Hisayoshi Suma; Tadashi Isomura; Taiko Horii; Toru Sato

    2000-01-01

    To achieve high quality off-pump coronary artery bypass grafting (CABG), thermal coronary artery imaging using a new generation infrared camera was used and anastomotic status was assessed intraoperatively. In 12 patients who underwent off-pump CABG, 18 grafts (11 internal thoracic, 2 radial, 2 gastroepiploic arteries, and 3 saphenous veins) were evaluated following completion of anastomoses. All grafts were clearly visualized

  15. Automatic orthorectification and mosaicking of oblique images from a zoom lens aerial camera

    NASA Astrophysics Data System (ADS)

    Zhou, Qianfei; Liu, Jinghong

    2015-01-01

    To address the image distortion caused by the oblique photography of a zoom lens aerial camera, a fast and accurate image auto-rectification and mosaicking method for a ground control points (GCPs)-free environment was proposed. With the availability of an integrated global positioning system (GPS) and inertial measurement units, the camera's exterior orientation parameters (EOPs) were solved through direct georeferencing. The one-parameter division model was adopted to estimate the distortion coefficient and the distortion center coordinates for the zoom lens to correct the lens distortion. Using the camera's EOPs and the lens distortion parameters, the oblique aerial images specified in the camera frame were geo-orthorectified into the mapping frame and then mosaicked together based on the mapping coordinates to produce a larger-field, high-resolution georeferenced image. Experimental results showed that the orthorectification error was less than 1.80 m at a 1100 m flight height above ground level, when compared with 14 presurveyed ground checkpoints measured by differential GPS. The mosaic error was about 1.57 m compared with 18 checkpoints. The accuracy was considered sufficient for urgent response tasks such as military reconnaissance and disaster monitoring where GCPs are not available.

  16. Saturn's hydrogen aurora: Wide field and planetary camera 2 imaging from the Hubble Space Telescope

    Microsoft Academic Search

    John T. Trauger; John T. Clarke; Gilda E. Ballester; Robin W. Evans; Christopher J. Burrows; David Crisp; John S. Gallagher; Richard E. Griffiths; J. Jeff Hester; John G. Hoessel; Jon A. Holtzman; John E. Krist; Jeremy R. Mould; Raghvendra Sahai; Paul A. Scowen; Karl R. Stapelfeldt; Alan M. Watson

    1998-01-01

    Wide field and planetary camera 2\\/Hubble Space Telescope (WFPC2\\/HST) images of Saturn's far ultraviolet aurora reveal emissions confined to a narrow band of latitudes near Saturn's north and south poles. The aurorae are most prominent in the morning sector with patterns that appear fixed in local time. The geographic distribution and vertical extent of the auroral emissions seen in these

  17. A multiple-plate, multiple-pinhole camera for X-ray gamma-ray imaging

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.

    1971-01-01

    Plates with identical patterns of precisely aligned pinholes constitute lens system which, when rotated about optical axis, produces continuous high resolution image of small energy X-ray or gamma ray source. Camera has applications in radiation treatment and nuclear medicine.

  18. An Image Authentication Scheme Considering Privacy: A First Step towards Surveillance Camera Authentication

    Microsoft Academic Search

    Nobutaka Kawaguchi; Shintaro Ueda; Naohiro Obata; Hiroshi Shigeno; Ken-ichi Okada

    2005-01-01

    In this paper, we propose an authentication scheme considering privacy aimed for JPEG images. A picture from a surveillance camera must be authenticated when submitted to a third party, such as a legal organization, courts and so on. On one hand, privacy of objects in the picture must be considered. Therefore, mosaic and masking must be performed to the picture

  19. An Improved Two-Point-Correction Method to Remove the Effect of the Radiance from Camera Interior on Infrared Image

    Microsoft Academic Search

    Shidu Dong; He Yan; Qun Jiang; Bo He; Huaqiu Wang

    2007-01-01

    Radiance coming from the interior of an uncooled infrared camera has a significant effect on infrared image. This paper presents a three-phase scheme for coping with the effect. The first phase requires the infrared images of the high temperature blackbody and the low temperature blackbody for various camera interior temperatures. The second phase forms functions of pixel values in the
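
For context, the classic two-point correction that this paper builds on can be sketched as follows; the per-pixel gain/offset arrays and blackbody flux values are synthetic assumptions for illustration, and the paper's contribution (compensating for camera interior temperature) is not modeled here.

```python
import numpy as np

# Hedged sketch of a classic two-point non-uniformity correction (NUC):
# per-pixel gain and offset are derived from frames of a low- and a
# high-temperature blackbody, so that every pixel reports the same value
# when viewing a uniform scene. All values below are synthetic.

rng = np.random.default_rng(0)
shape = (4, 4)
gain_true = rng.uniform(0.8, 1.2, shape)       # per-pixel responsivity spread
offset_true = rng.uniform(-50.0, 50.0, shape)  # per-pixel fixed-pattern offset

def capture(flux):
    """Simulate the raw response of the non-uniform detector to uniform flux."""
    return gain_true * flux + offset_true

low, high = capture(1000.0), capture(2000.0)   # two blackbody reference frames

# Per-pixel correction mapping raw counts onto the mean detector response.
gain = (high.mean() - low.mean()) / (high - low)
offset = low.mean() - gain * low

corrected = gain * capture(1500.0) + offset    # should now be uniform
```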

  20. Experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype for partial breast irradiation

    Microsoft Academic Search

    Ananth Ravi; Curtis B. Caldwell; Jean-Philippe Pignol

    2008-01-01

    Previously, our team used Monte Carlo simulation to demonstrate that a gamma camera could potentially be used as an online image guidance device to visualize seeds during permanent breast seed implant procedures. This could allow for intraoperative correction if seeds have been misplaced. The objective of this study is to describe an experimental evaluation of an online gamma-camera imaging of

  1. Chandra High Resolution Camera Imaging of GRS 1758-258

    E-print Network

    W. A. Heindl; D. M. Smith

    2002-08-19

    We observed the "micro-quasar" GRS 1758-258 four times with Chandra. Two HRC-I observations were made in 2000 September-October spanning an intermediate-to-hard spectral transition (identified with RXTE). Another HRC-I and an ACIS/HETG observation were made in 2001 March following a hard-to-soft transition to a very low flux state. Based on the three HRC images and the HETG zero order image, the accurate position (J2000) of the X-ray source is RA = 18h 01m 12.39s, Dec = -25d 44m 36.1s (90% confidence radius = 0".45), consistent with the purported variable radio counterpart. All three HRC images are consistent with GRS 1758-258 being a point source, indicating that any bright jet is less than ~1 light-month in projected length, assuming a distance of 8.5 kpc.

  2. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
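
The 432-million-pixel figure can be checked from the standard 9 x 9 inch NAPP frame format (an assumption here; the abstract does not state the frame size) and the 11 µm spot size:

```python
# Quick check of the pixel-count claim: a standard 9 x 9 inch aerial film
# frame (228.6 mm on a side) scanned at an 11 micron spot size. The 9-inch
# frame size is the standard NAPP format, assumed rather than taken from
# the abstract itself.

frame_mm = 9 * 25.4              # 228.6 mm per side
spot_mm = 0.011                  # 11 micron scanning spot
pixels_per_side = frame_mm / spot_mm
total_pixels = pixels_per_side ** 2
print(round(total_pixels / 1e6))  # ~432 (million pixels)
```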

  3. Oxygen imaging in microfluidic devices with optical sensors applying color cameras

    Microsoft Academic Search

    Birgit Ungerböck; Günter Mistlberger; Verena Charwat; Peter Ertl; Torsten Mayr

    2010-01-01

    Luminescent sensors with improved performance for monitoring dissolved oxygen in microfluidic devices were developed. Our brightness-enhanced sensing films with reduced film thickness showed higher signals than standard sensors and responded in real time. Sensor layers were integrated into microfluidic chips and read out off-chip on a fluorescence microscope with lifetime imaging or ratiometric imaging using the color channels of a CCD camera.

  4. X-ray framing camera for picosecond imaging of laser-produced plasmas

    Microsoft Academic Search

    D. G. Stearns; J. D. Wiedwald; B. M. Cook; R. L. Hanks; O. L. Landen

    1989-01-01

    We describe the development and characterization of an ultrafast x-ray framing camera capable of recording images with a temporal resolution of 50 ps and spatial resolution of 22 µm at the image plane. The unique design incorporates an x-ray photocathode directly into a suspended-strip transmission line. The photocathode is gated using a high-voltage (-5 kV) pulse of short duration generated with

  5. The iQID camera: An ionizing-radiation quantum imaging detector

    NASA Astrophysics Data System (ADS)

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R.

    2014-12-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.

  6. Anomaly Detection for Autonomous Inspection of Space Facilities using Camera Images

    Microsoft Academic Search

    Y. Sakai; H. Tanaka; T. Yairi; K. Machida

    2006-01-01

    For the purpose of realizing autonomous inspection of space facilities, this paper addresses the problem of anomaly detection from real-world images captured by free-flying space cameras. To cope with computer vision problems in space, we apply view-based and patch-based approach to represent features of image, and one-class SVM is used for classification. Anomaly detection framework is shown, which deals with

  7. Camera Identification from Cropped and Scaled Images Miroslav Goljan*

    E-print Network

    Fridrich, Jessica

    when digital content is presented as silent witness in the court. For example, in a child pornography identification, Photo-Response Non-Uniformity, digital forensics. 1. INTRODUCTION The problem of establishing an image or a video-clip to a specific piece of hardware. Sensor photo-response non-uniformity (PRNU) has

  8. CMOS image sensors: electronic camera-on-a-chip

    Microsoft Academic Search

    Eric R. Fossum

    1997-01-01

    CMOS active pixel sensors (APS) have performance competitive with charge-coupled device (CCD) technology, and offer advantages in on-chip functionality, system power reduction, cost, and miniaturization. This paper discusses the requirements for CMOS image sensors and their historical development. CMOS devices and circuits for pixels, the analog signal chain, and on-chip analog-to-digital conversion are reviewed and discussed.

  9. Real-time integral imaging system with handheld light field camera

    NASA Astrophysics Data System (ADS)

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Byoungho

    2014-11-01

    Our objective is to achieve real-time pickup and display in an integral imaging system with a handheld light field camera. A micro lens array and a high frame rate charge-coupled device (CCD) are used to implement the handheld light field camera, and a simple lens array and a liquid crystal (LC) display panel are used to reconstruct three-dimensional (3D) images in real time. The handheld light field camera is implemented by adding the micro lens array on the CCD sensor. The main lens, which is mounted on the CCD sensor, is used to capture the scene. To generate the elemental images in real time, a pixel mapping algorithm is applied. With this algorithm, not only can the pseudoscopic problem be solved, but the user can also change the depth plane of the displayed 3D images in real time. For real-time, high-quality 3D video generation, a high-resolution, high frame rate CCD and LC display panel are used in the proposed system. Experiment and simulation results are presented to verify the proposed system. As a result, 3D images are captured and reconstructed in real time through the integral imaging system.
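
One common form of the pixel mapping step mentioned above (a sketch of the general technique, not necessarily the authors' exact algorithm) rotates each elemental image by 180 degrees, which converts a pseudoscopic (depth-reversed) reconstruction into an orthoscopic one; the elemental-image pitch is an assumed parameter.

```python
import numpy as np

# Hedged sketch of pseudoscopic-to-orthoscopic pixel mapping in integral
# imaging: rotate every elem x elem elemental image by 180 degrees.
# `elem` is the elemental-image pitch in pixels (assumed to divide the
# array dimensions evenly).

def flip_elemental_images(elemental_array, elem):
    """Rotate each elem x elem sub-image of the array by 180 degrees."""
    h, w = elemental_array.shape
    out = elemental_array.reshape(h // elem, elem, w // elem, elem)
    out = out[:, ::-1, :, ::-1]      # flip rows/cols inside each sub-image
    return out.reshape(h, w)

img = np.arange(16).reshape(4, 4)    # two 2x2 elemental images per axis
mapped = flip_elemental_images(img, 2)
```

Applying the mapping twice returns the original array, which is a handy sanity check for this kind of involutive remapping.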

  10. High-speed camera with real time processing for frequency domain imaging

    PubMed Central

    Shia, Victor; Watt, David; Faris, Gregory W.

    2011-01-01

    We describe a high-speed camera system for frequency domain imaging suitable for applications such as in vivo diffuse optical imaging and fluorescence lifetime imaging. 14-bit images are acquired at 2 gigapixels per second and analyzed with real-time pipeline processing using field programmable gate arrays (FPGAs). Performance of the camera system has been tested both for RF-modulated laser imaging in combination with a gain-modulated image intensifier and a simpler system based upon an LED light source. System amplitude and phase noise are measured and compared against theoretical expressions in the shot noise limit presented for different frequency domain configurations. We show the camera itself is capable of shot noise limited performance for amplitude and phase in as little as 3 ms, and when used in combination with the intensifier the noise levels are nearly shot noise limited. The best phase noise in a single pixel is 0.04 degrees for a 1 s integration time. PMID:21750770

  11. Removing Shading Distortions in Camera-based Document Images Using Inpainting and Surface Fitting With Radial Basis Functions

    E-print Network

    Tan, Chew Lim

    Removing Shading Distortions in Camera-based Document Images Using Inpainting and Surface Fitting … first try to derive the shading image using an inpainting technique with an automatic mask gen… noises in the inpainted image and return a smooth shading image. Once the shading image is extracted

  12. Wide Field Camera 3: A Powerful New Imager for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Kimble, Randy

    2008-01-01

    Wide Field Camera 3 (WFC3) is a powerful UV/visible/near-infrared camera in development for installation into the Hubble Space Telescope during upcoming Servicing Mission 4. WFC3 provides two imaging channels. The UVIS channel incorporates a 4096 x 4096 pixel CCD focal plane with sensitivity from 200 to 1000 nm. The IR channel features a 1024 x 1024 pixel HgCdTe focal plane covering 850 to 1700 nm. We report here on the design of the instrument, the performance of its flight detectors, results of the ground test and calibration program, and the plans for the Servicing Mission installation and checkout.

  13. High etendue UV camera for simultaneous four-color imaging on a single detector.

    PubMed

    Hicks, Brian A; Danowski, Meredith E; Martel, Jason F; Cook, Timothy A

    2013-07-20

    We describe a high etendue (0.12 cm² sr) camera that, without moving parts, simultaneously images four ultraviolet bands centered at 140, 175, 215, and 255 nm on a single detector into a minimum of ~7500 resolution elements. In addition to being an efficient way to make color photometric measurements of a static scene, the camera described here enables detection of spatial and temporal information that can be used to reveal energy-dependent physical phenomena to complement the capability of other instruments ranging in complexity from filter wheels to integral field spectrographs. PMID:23872766

  14. ITEM—QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    NASA Astrophysics Data System (ADS)

    Pauli, J.; Pauli, E.-M.; Anton, G.

    2002-08-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation maximization problems based on quantum-mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems like CT, PET, SPECT, NMR, Compton Camera, and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton Camera.

  15. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

    The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continue to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.
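
    The hash-then-sign idea can be sketched as follows. This is a minimal stand-in using a keyed hash (HMAC) from the Python standard library; a real trustworthy camera would use an asymmetric (public-key) signature so anyone can verify without access to the secret key, and the key name below is hypothetical.

    ```python
    import hashlib
    import hmac

    # Hypothetical secret standing in for the camera's embedded signing key.
    CAMERA_KEY = b"secret-key-embedded-in-camera"

    def sign_image(image_bytes: bytes) -> str:
        """Hash the image, then bind the digest to the signer with a keyed MAC."""
        digest = hashlib.sha256(image_bytes).digest()
        return hmac.new(CAMERA_KEY, digest, "sha256").hexdigest()

    def verify_image(image_bytes: bytes, tag: str) -> bool:
        """Re-derive the tag; any pixel-level alteration changes the digest."""
        return hmac.compare_digest(sign_image(image_bytes), tag)

    original = b"raw sensor data"
    tag = sign_image(original)
    ok_untouched = verify_image(original, tag)        # unaltered image authenticates
    ok_tampered = verify_image(original + b"!", tag)  # any edit breaks verification
    ```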

  16. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Astrophysics Data System (ADS)

    Friedman, Gary L.

    1994-02-01

    The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continue to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.

  17. MONICA: a compact, portable dual gamma camera system for mouse whole-body imaging

    SciTech Connect

    Choyke, Peter L.; Xia, Wenze; Seidel, Jurgen; Kakareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.

    2010-04-01

    Introduction We describe a compact, portable dual-gamma camera system (named "MONICA" for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed "looking up" through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1 cm), 2.2 mm; sensitivity, 149 cps (counts per second)/MBq (5.5 cps/μCi); energy resolution (FWHM, full width at half maximum), 10.8%; count rate linearity (count rate vs. activity), r² = 0.99 for 0–185 MBq (0–5 mCi) in the field of view (FOV); spatial uniformity, <3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-min images acquired throughout the 168-h study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g., limited imaging space, portability and, potentially, cost are important.

  18. MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging

    PubMed Central

    Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2009-01-01

    Introduction We describe a compact, portable dual-gamma camera system (named “MONICA” for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed “looking up” through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1-cm), 2.2-mm; sensitivity, 149 cps/MBq (5.5 cps/μCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r² = 0.99 for 0–185 MBq (0–5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g., limited imaging space, portability, and, potentially, cost are important. PMID:20346864
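
    The FWHM figure quoted above is obtained from a measured line-spread function by locating the half-maximum crossings. A small sketch with a synthetic Gaussian profile (toy data, not MONICA measurements):

    ```python
    import math

    def fwhm(xs, ys):
        """Full width at half maximum: span of samples at or above half the peak."""
        half = max(ys) / 2.0
        above = [x for x, y in zip(xs, ys) if y >= half]
        return above[-1] - above[0]

    # Synthetic line-spread function; sigma chosen so the true FWHM is 2.2 mm,
    # using FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian.
    sigma = 2.2 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    xs = [-10.0 + 0.01 * i for i in range(2001)]           # positions in mm
    lsf = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
    width = fwhm(xs, lsf)   # close to 2.2 mm, limited by the 0.01 mm sampling
    ```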

  19. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube’s phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electrons’ path in horizontal and vertical (x-axis and y-axis, respectively) directions. Hence, each frame’s electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates. To prevent the phosphor from being exposed between frames, the image tube is gated off between exposures.

  20. A portable device for small animal SPECT imaging in clinical gamma-cameras

    NASA Astrophysics Data System (ADS)

    Aguiar, P.; Silva-Rodríguez, J.; González-Castaño, D. M.; Pino, F.; Sánchez, M.; Herranz, M.; Iglesias, A.; Lois, C.; Ruibal, A.

    2014-07-01

    Molecular imaging has been reshaping clinical practice over recent decades, providing practitioners with non-invasive ways to obtain functional in-vivo information on a diversity of relevant biological processes. The use of molecular imaging techniques in preclinical research is equally beneficial, but has spread more slowly because of the difficulty of justifying a costly investment dedicated only to animal scanning. An alternative for lowering the costs is to repurpose parts of old clinical scanners to build new preclinical ones. Following this trend, we have designed, built, and characterized the performance of a portable system that can be attached to a clinical gamma-camera to make a preclinical single photon emission computed tomography scanner. Our system offers image quality comparable to commercial systems at a fraction of their cost, and can be used with any existing gamma-camera with just an adaptation of the reconstruction software.

  1. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
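
    With two orthogonally aligned cameras, each tracked particle yields two 2D projections that share one axis. The merge below is a hypothetical sketch (front camera sees x-y, side camera sees z-y, the shared y is averaged); the paper's closed-form solution also models the observation volume, which is omitted here.

    ```python
    def triangulate_orthogonal(front_xy, side_zy):
        """Merge two orthogonal views of one particle into a 3D point.
        The front camera measures (x, y); the side camera measures (z, y).
        The y coordinate, seen by both, is averaged."""
        (x, y1), (z, y2) = front_xy, side_zy
        return (x, (y1 + y2) / 2.0, z)

    def velocity(p0, p1, dt):
        """Finite-difference velocity between two 3D positions dt apart."""
        return tuple((b - a) / dt for a, b in zip(p0, p1))

    # Tracer particle tracked across two frames 0.1 s apart (toy coordinates, mm):
    p0 = triangulate_orthogonal((1.0, 2.0), (3.0, 2.0))
    p1 = triangulate_orthogonal((1.5, 2.4), (3.2, 2.4))
    v = velocity(p0, p1, 0.1)   # approximately (5.0, 4.0, 2.0) mm/s
    ```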

  2. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2014-05-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
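
    The nonlinear-mixing argument can be checked numerically: raising the sum of two modulation tones to a superlinear exponent (~1.7, as measured) creates a component at the difference (beat) frequency that a slow camera can follow. A toy simulation with assumed frequencies, not the authors' experimental parameters:

    ```python
    import numpy as np

    fs, n = 20000, 20000                       # 1 s of samples at 20 kHz
    t = np.arange(n) / fs
    f1, f2 = 4000.0, 4040.0                    # assumed laser modulation frequencies
    drive = 2.5 + np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    pl = drive ** 1.7                          # superlinear PL response (exponent ~1.7)

    spec = np.abs(np.fft.rfft(pl))
    beat_bin = int(round((f2 - f1) * n / fs))  # the 40 Hz difference-frequency bin
    # A linear response contains no energy below f1; the nonlinearity fills the
    # beat bin, which is what heterodyne carrierography detects with a slow camera.
    ```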

  3. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo. The latter captures structural information that can be overlaid on the pH distribution to correlate the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for use with the excitation-ratio method. The ratio of the two images is converted to a pH distribution through a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
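
    The excitation-ratio step reduces to a pixelwise ratio pushed through an a priori calibration. The images and the linear calibration below are hypothetical placeholders for the HPTS calibration described in the paper:

    ```python
    # Two luminescence images of the same scene under the two excitation bands
    # (toy 2x2 arrays of intensities); their pixelwise ratio is mapped to pH.
    img_ex1 = [[10.0, 20.0], [30.0, 40.0]]
    img_ex2 = [[ 5.0,  5.0], [10.0, 10.0]]

    def ratio_to_ph(r, a=1.5, b=5.0):
        """Hypothetical linear calibration pH = a*r + b, fitted a priori
        from buffer solutions of known pH."""
        return a * r + b

    ph_map = [[ratio_to_ph(x / y) for x, y in zip(r1, r2)]
              for r1, r2 in zip(img_ex1, img_ex2)]
    # e.g. the top-left pixel has ratio 2.0, giving pH 8.0 under this calibration
    ```

    Because the ratio cancels common factors such as illumination non-uniformity and indicator concentration, only the calibration curve carries the pH information.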

  4. Measuring the image quality of digital-camera sensors by a ping-pong ball

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubiño, Manuel; Castro, José J.; Salas, Carlos; Pérez-Ocón, Francisco

    2014-07-01

    In this work, we present a low-cost experimental setup to evaluate the image quality of digital-camera sensors, which can be implemented in undergraduate and postgraduate teaching. The method consists of evaluating the modulation transfer function (MTF) of digital-camera sensors by speckle patterns using a ping-pong ball as a diffuser, with two handmade circular apertures acting as input and output ports, respectively. To specify the spatial-frequency content of the speckle pattern, it is necessary to use an aperture; for this, we made a slit in a piece of black cardboard. First, the MTF of a digital-camera sensor was calculated using the ping-pong ball and the handmade slit, and then the MTF was calculated using an integrating sphere and a high-quality steel slit. Finally, the results achieved with both experimental setups were compared, showing a similar MTF in both cases.
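
    The underlying computation, estimating the MTF from the power spectrum of speckle with known spatial-frequency content, can be sketched with synthetic data: white-noise "speckle" rows blurred by a toy sensor kernel, not the authors' ping-pong-ball setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, rows = 256, 2000
    speckle = rng.normal(size=(rows, n))     # input speckle with a flat (white) spectrum
    kernel = np.array([0.25, 0.5, 0.25])     # toy sensor blur (one-pixel crosstalk)
    captured = np.stack([np.convolve(r, kernel, mode="same") for r in speckle])

    # MTF estimate: sqrt of the ratio of output to input power spectral density,
    # averaged over many independent speckle realizations.
    in_psd = (np.abs(np.fft.rfft(speckle, axis=1)) ** 2).mean(axis=0)
    out_psd = (np.abs(np.fft.rfft(captured, axis=1)) ** 2).mean(axis=0)
    mtf = np.sqrt(out_psd / in_psd)
    # The kernel's true transfer function is 0.5 + 0.5*cos(2*pi*f), so the
    # estimate should fall from ~1 at DC toward ~0 at the Nyquist frequency.
    ```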

  5. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-01

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that were recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested. PMID:23669962

  6. [The hyperspectral camera side-scan geometric imaging in any direction considering the spectral mixing].

    PubMed

    Wang, Shu-Min; Zhang, Ai-Wu; Hu, Shao-Xing; Sun, Wei-Dong

    2014-07-01

    In order to correct image distortion in hyperspectral camera side-scan geometric imaging, an image-pixel geo-referencing algorithm, suitable for linear push-broom cameras scanning on the ground in any direction, is derived in detail in the present paper. The algorithm accounts for the orientation of objects in the navigation coordinate system. Combined with the ground sampling distance of the geo-referenced image and the area covered by push-broom imaging, the general process of dividing the geo-referenced image into grids is also presented: the new image rows and columns are obtained by dividing the geo-referenced image extent by the ground sampling distance. Considering the error produced by rounding during pixel-grid generation, and the spectral mixing caused by the traditional direct spectral sampling method during image correction, an improved spectral sampling method based on weighted fusion is proposed. It takes the area proportions of the adjacent pixels within the newly generated pixel as coefficients, and the coefficients are normalized to avoid spectral overflow, so that the new pixel's spectrum combines the spectra of the geo-referenced adjacent pixels. Finally, numerous push-broom imaging experiments were conducted on the ground, and the distorted images were corrected with the algorithm proposed above. The results show that the linear image distortion correction algorithm is valid and robust. Multiple samples were then selected in the corrected images to verify the spectral data; the results indicate that the improved spectral sampling method is better than the direct spectral sampling algorithm. This provides a reference for similar applications on the ground. PMID:25269321
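
    The weighted-fusion resampling step can be sketched directly from the description: the overlap areas of the contributing source pixels serve as weights, normalized to sum to one so the fused spectrum cannot overflow. Spectra and areas below are toy values, assuming two contributing pixels.

    ```python
    def resample_pixel(neighbors):
        """neighbors: list of (spectrum, overlap_area) pairs for the source
        pixels overlapping one output grid cell. Weights are the overlap
        areas, normalized to sum to 1 to avoid spectral overflow."""
        total = sum(area for _, area in neighbors)
        weights = [area / total for _, area in neighbors]
        bands = len(neighbors[0][0])
        return [sum(w * s[b] for (s, _), w in zip(neighbors, weights))
                for b in range(bands)]

    # Two source pixels overlap the new grid cell with areas 0.75 and 0.25:
    mixed = resample_pixel([([100.0, 40.0], 0.75), ([60.0, 80.0], 0.25)])
    # mixed blends the two spectra band by band: [90.0, 50.0]
    ```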

  7. Novel intraoperative near-infrared fluorescence camera system for optical image-guided cancer surgery.

    PubMed

    Mieog, J Sven D; Vahrmeijer, Alexander L; Hutteman, Merlijn; van der Vorst, Joost R; Drijfhout van Hooff, Maurits; Dijkstra, Jouke; Kuppen, Peter J K; Keijzer, Rob; Kaijzel, Eric L; Que, Ivo; van de Velde, Cornelis J H; Löwik, Clemens W G M

    2010-08-01

    Current methods of intraoperative tumor margin detection using palpation and visual inspection frequently result in incomplete resections, which is an important problem in surgical oncology. Therefore, real-time visualization of cancer cells is needed to increase the number of patients with a complete tumor resection. For this purpose, near-infrared fluorescence (NIRF) imaging is a promising technique. Here we describe a novel, handheld, intraoperative NIRF camera system equipped with a 690 nm laser; we validated its utility in detecting and guiding resection of cancer tissues in two syngeneic rat models. The camera system was calibrated using an activated cathepsin-sensing probe (ProSense, VisEn Medical, Woburn, MA). Fluorescence intensity was strongly correlated with increased activated-probe concentration (R² = 0.997). During the intraoperative experiments, a camera exposure time of 10 ms was used, which provided the optimal tumor to background ratio. Primary mammary tumors (n = 20 tumors) were successfully resected under direct fluorescence guidance. The tumor to background ratio was 2.34 using ProSense680 at 10 ms camera exposure time. The background fluorescence of abdominal organs, in particular liver and kidney, was high, thereby limiting the ability to detect peritoneal metastases with cathepsin-sensing probes in these regions. In conclusion, we demonstrated the technical performance of this new camera system and its intraoperative utility in guiding resection of tumors. PMID:20643025

  8. Strategies for registering range images from unknown camera positions

    NASA Astrophysics Data System (ADS)

    Bernardini, Fausto; Rushmeier, Holly E.

    2000-03-01

    We describe a project to construct a 3D numerical model of Michelangelo's Florentine Pietà to be used in a study of the sculpture. Here we focus on the registration of the range images used to construct the model. The major challenge was the range of length scales involved: a resolution of 1 mm or less was required for the 2.25-m-tall piece. To achieve this resolution, we could only acquire an area of 20 by 20 cm per scan. A total of approximately 700 images were required. Ideally, a tracker would be attached to the scanner to record position and pose; the use of a tracker was not possible in the field. Instead, we used a crude-to-fine approach to registering the meshes to one another. The crudest level consisted of pairwise manual registration, aided by texture maps containing laser dots that were projected onto the sculpture. This crude alignment was refined by an automatic registration of laser dot centers; in this phase, we found that consistency constraints on dot matches were essential to obtaining accurate results. The laser dot alignment was further refined using a variation of the ICP algorithm developed by Besl and McKay. In applying ICP to global registration, we developed a method to avoid one class of local minima by finding a set of points, rather than a single point, that matches each candidate point.
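
    The refinement stage rests on the classic ICP loop: match points to nearest neighbors, solve for the best rigid transform, apply it, and repeat. A toy two-view sketch (not the authors' global, set-matching variant), using the standard Kabsch/SVD solution for the rigid transform:

    ```python
    import numpy as np

    def best_rigid_transform(P, Q):
        """Least-squares rotation + translation mapping point set P onto Q (Kabsch)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def icp(P, Q, iters=20):
        """Toy two-view ICP: pair each point of P with its nearest neighbor
        in Q, solve for the rigid transform, apply it, and repeat."""
        P = P.copy()
        for _ in range(iters):
            nn = Q[np.argmin(((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)]
            R, t = best_rigid_transform(P, nn)
            P = P @ R.T + t
        return P

    # Synthetic overlap region: Q is the reference scan, P a slightly
    # rotated/shifted copy standing in for a neighboring range image.
    rng = np.random.default_rng(3)
    Q = rng.normal(size=(30, 2))
    theta = 0.03
    R0 = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    P = Q @ R0.T + np.array([0.02, -0.01])
    aligned = icp(P, Q)   # converges back onto Q for this small misalignment
    ```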

  9. Laser Doppler Perfusion Imaging with a high-speed CMOS-camera

    NASA Astrophysics Data System (ADS)

    Draijer, Matthijs J.; Hondebrink, Erwin; Steenbergen, Wiendelt; van Leeuwen, Ton G.

    2007-07-01

    The technique of Laser Doppler Perfusion Imaging (LDPI) is widely used for determining cerebral blood flow or skin perfusion in the case of burns. The commonly used Laser Doppler Perfusion Imagers are scanning systems which scan the area under investigation point by point and use a single photodetector to capture the photoelectric current to obtain a perfusion map. In that case the imaging time for a perfusion map of 64 x 64 pixels is around 5 minutes. Disadvantages of a long imaging time for in-vivo imaging are the greater chance of movement artifacts, reduced comfort for the patient, and the inability to follow fast-changing perfusion conditions. We present a Laser Doppler Perfusion Imager which makes use of a high-speed CMOS camera. By illuminating the area under investigation and simultaneously taking images at high speed with the camera, it is possible to obtain a perfusion map of the area under investigation in a shorter period of time than with the commonly used Laser Doppler Perfusion Imagers.

  10. Skin hydration imaging using a long-wavelength near-infrared digital camera

    NASA Astrophysics Data System (ADS)

    Attas, E. Michael; Posthumus, Trevor B.; Schattka, Bernhard J.; Sowa, Michael G.; Mantsch, Henry H.; Zhang, Shuliang L.

    2001-07-01

    Skin hydration is a key factor in skin health. Hydration measurements can provide diagnostic information on the condition of skin and can indicate the integrity of the skin barrier function. Near-infrared spectroscopy measures the water content of living tissue by its effect on tissue reflectance at a particular wavelength. Imaging has the important advantage of showing the degree of hydration as a function of location. Short-wavelength (650-1050 nm) near-infrared spectroscopic reflectance imaging has previously been used in-vivo to determine the relative water content of skin under carefully controlled laboratory conditions. We have recently developed a novel spectroscopic imaging system to acquire image sets in the long-wavelength region of the near infrared (960 to 1700 nm), where the water absorption bands are more intense. The LW-NIR system uses a liquid-crystal tunable filter in front of the objective lens and incorporates a 12-bit digital camera with a 320-by-240-pixel indium gallium arsenide array sensor. Custom software controls the camera and tunable filter, allowing image sets to be acquired and displayed in near-real time. Forearm skin hydration was measured in a clinical context using the long-wavelength imaging system, a short-wavelength imaging system, and non-imaging instrumentation. Among these, the LW-NIR system appears to be the most sensitive at measuring dehydration of skin.

  11. High-resolution imaging of biological and other objects with an X-ray digital camera.

    PubMed

    Tous, J; Blazek, K; Pína, L; Sopko, B

    2010-01-01

    A high-resolution CCD X-ray camera based on YAG:Ce or LuAG:Ce thin scintillators is presented. The high resolution under low-energy X-ray radiation is quantified with several test objects. The achieved spatial resolution of the images is <1 μm. The objects used for imaging are grids and small animals with features several microns in extent. The high-resolution imaging system can be used with different types of ionizing radiation (X-ray, electron, UV, and VUV) and for non-destructive micro-radiography and synchrotron beam inspection. PMID:19818638

  12. Calibration error for dual-camera digital image correlation at microscale

    NASA Astrophysics Data System (ADS)

    Li, Kai; Wang, Qiang; Wu, Jia; Yu, Haiyang; Zhang, Dongsheng

    2012-07-01

    Digital image correlation (DIC) has been widely conducted in many engineering applications. This paper describes a dual-camera system which is mounted on a stereo light microscope to achieve 3D displacement measurement at microscale. A glass plate etched with precision grids was used as the calibration plate and a translation calibration procedure was introduced to obtain the intrinsic and extrinsic parameters of the cameras as well as the aberration of the imaging system. Two main error sources, including grid positioning and stage translation, were discussed. It was found that the subpixel positioning errors had limited influences on displacement measurement, while the incorrect grid positioning can be avoided by analyzing the standard deviation between the grid spacing. The systematic translation error of the stage must be eliminated to achieve accurate displacement measurement. Based on the above analysis, a precisely controlled motorized calibration stage was developed to fulfill fully automatic calibration for the microscopic dual-camera system. An application for measuring the surface texture of the human incisor has been presented. It is concluded that the microscopic dual-camera system is an economic, precise system for 3D profilometry and deformation measurement.
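
    A mispositioned calibration grid point shows up as an outlier in the measured spacing sequence. The paper detects it by analyzing the standard deviation between grid spacings; the sketch below uses a median-based variant (named as such because a single large outlier inflates the standard deviation), with hypothetical spacings in pixels:

    ```python
    import statistics

    def suspicious_spacings(spacings, rel_tol=0.02):
        """Flag spacings deviating from the median by more than rel_tol (2%),
        indicating an incorrectly positioned calibration grid point.
        Median-based variant of the paper's standard-deviation check,
        robust to a single outlier."""
        med = statistics.median(spacings)
        return [i for i, s in enumerate(spacings) if abs(s - med) / med > rel_tol]

    # Hypothetical measured grid spacings (pixels); index 4 is mispositioned.
    flagged = suspicious_spacings([50.00, 50.10, 49.90, 50.05, 55.00, 50.00])
    # flagged contains only index 4
    ```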

  13. Optimal Design of Anger Camera for Bremsstrahlung Imaging: Monte Carlo Evaluation

    PubMed Central

    Walrand, Stephan; Hesse, Michel; Wojcik, Randy; Lhommel, Renaud; Jamar, François

    2014-01-01

    A conventional Anger camera is not adapted to bremsstrahlung imaging and, as a result, even using a reduced energy acquisition window, geometric x-rays represent <15% of the recorded events. This increases noise, limits the contrast, and reduces the quantification accuracy. Monte Carlo (MC) simulations of energy spectra showed that a camera based on a 30-mm-thick BGO crystal and equipped with a high energy pinhole collimator is well-adapted to bremsstrahlung imaging. The total scatter contamination is reduced by a factor 10 versus a conventional NaI camera equipped with a high energy parallel hole collimator enabling acquisition using an extended energy window ranging from 50 to 350 keV. By using the recorded event energy in the reconstruction method, shorter acquisition time and reduced orbit range will be usable allowing the design of a simplified mobile gantry. This is more convenient for use in a busy catheterization room. After injecting a safe activity, a fast single photon emission computed tomography could be performed without moving the catheter tip in order to assess the liver dosimetry and estimate the additional safe activity that could still be injected. Further long running time MC simulations of realistic acquisitions will allow assessing the quantification capability of such system. Simultaneously, a dedicated bremsstrahlung prototype camera reusing PMT–BGO blocks coming from a retired PET system is currently under design for further evaluation. PMID:24982849

  14. First responder thermal imaging cameras: establishment of representative performance testing conditions

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Hamins, Anthony; Rowe, Justin

    2006-04-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory (BFRL) at the National Institute of Standards and Technology is conducting research to establish test conditions that best represent the environment in which these cameras are used. First responders may use thermal imagers for field operations ranging from fire attack and search/rescue in burning structures, to hot spot detection in overhaul activities, to detecting the location of hazardous materials. In order to develop standardized performance metrics and test methods that capture the harsh environment in which these cameras may be used, information has been collected from the literature, and from full-scale tests that have been conducted at BFRL. Initial experimental work has focused on temperature extremes and the presence of obscuring media such as smoke. In full-scale tests, thermal imagers viewed a target through smoke, dust, and steam, with and without flames in the field of view. The fuels tested were hydrocarbons (methanol, heptane, propylene, toluene), wood, upholstered cushions, and carpeting with padding. Gas temperatures, CO, CO2, and O2 volume fractions, emission spectra, and smoke concentrations were measured. Simple thermal bar targets and a heated mannequin fitted in firefighter gear were used as targets. The imagers were placed at three distances from the targets, ranging from 3 m to 12 m.

  15. Mars Exploration Rover (MER) Panoramic Camera (Pancam) Twilight Image Analysis for Determination of Planetary Boundary Layer and Dust Particle Size Parameters

    E-print Network

    Grounds, Stephanie Beth

    2012-02-14

    Mars Exploration Rover (MER) Panoramic Camera (Pancam) Twilight Image Analysis for Determination of Planetary Boundary Layer and Dust Particle Size Parameters. A Thesis by Stephanie Beth Grounds, submitted to the Office of Graduate... Copyright 2010 Stephanie Beth Grounds.

  16. Real time speed estimation of moving vehicles from side view images from an uncalibrated video camera.

    PubMed

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle from side view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle are selected, and these points must be accurately tracked across at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space must therefore be transformed to object space to obtain their absolute values. This transformation requires image-to-object space information, which is obtained from the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems of using side view camera images mentioned here. PMID:22399909
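    The second step above, converting tracked pixel displacements into object-space speed, can be sketched as follows. The function name, point coordinates, frame rate, and scale factor are hypothetical stand-ins for values the paper derives from camera calibration and orientation:

```python
import numpy as np

def estimate_speed(pts_a, pts_b, dt, m_per_px):
    """Mean speed of a vehicle from points tracked across two frames.

    pts_a, pts_b: (N, 2) pixel coordinates of the same points in two
    successive frames; dt: frame interval in seconds; m_per_px:
    image-to-object scale, assumed known from a calibration and
    orientation step.
    """
    disp_px = np.linalg.norm(np.asarray(pts_b) - np.asarray(pts_a), axis=1)
    return float((disp_px * m_per_px / dt).mean())  # m/s

# Hypothetical numbers: four points each moving 20 px between frames
# captured at 25 fps, with a scale of 0.02 m/pixel.
a = np.array([[100, 50], [120, 52], [140, 55], [160, 58]], dtype=float)
b = a + np.array([20.0, 0.0])
v = estimate_speed(a, b, dt=1 / 25, m_per_px=0.02)
print(round(v, 6))  # 10.0 m/s (36 km/h)
```

    In practice each tracked point yields its own velocity vector, and the object-space scale varies across the image; a single m/pixel factor is only valid for motion parallel to the image plane at a known distance.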

  17. Striping Noise Removal of Images Acquired by Cbers 2 CCD Camera Sensor

    NASA Astrophysics Data System (ADS)

    Amraei, E.; Mobasheri, M. R.

    2014-10-01

    CCD Camera is a multi-spectral sensor carried by the CBERS 2 satellite. The imaging technique in this sensor is push broom. In images acquired by the CCD Camera, some vertical striping noise can be seen. This is due to detector mismatch, inter-detector variability, improper calibration of detectors, and low signal-to-noise ratio. These noises are more pronounced in images acquired from homogeneous surfaces, which are processed at level 2. The existence of these noises renders the interpretation of the data and the extraction of information from these images difficult. In this work, a spatial moment matching method is proposed to correct these images. In this method, statistical moments such as the mean and standard deviation of the columns in each band are used to balance the statistical specifications of the detector array against those of reference values. After removal of this noise, some periodic diagonal stripes remain in the image, and their removal by the aforementioned method is not possible. Therefore, to remove them, a frequency-domain Butterworth notch filter was applied. Finally, to evaluate the results, image statistical moments such as the mean and standard deviation were deployed. The study proves the effectiveness of the method in noise removal.
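    A minimal sketch of the moment matching step on synthetic data (not the authors' implementation; real CBERS 2 imagery would also need invalid-pixel masking and the subsequent diagonal-stripe notch filtering):

```python
import numpy as np

def destripe_moment_match(band):
    """Column-wise moment matching: map each column's (detector's)
    mean and standard deviation to band-wide reference values."""
    col_mean = band.mean(axis=0)
    col_std = band.std(axis=0)
    ref_mean, ref_std = band.mean(), band.std()
    gain = ref_std / np.where(col_std == 0, 1.0, col_std)
    return (band - col_mean) * gain + ref_mean

# Synthetic homogeneous scene corrupted by per-column gain/offset stripes.
rng = np.random.default_rng(0)
scene = rng.normal(100.0, 5.0, size=(200, 64))
striped = scene * rng.normal(1.0, 0.1, 64) + rng.normal(0.0, 8.0, 64)
fixed = destripe_moment_match(striped)
# Column-to-column variability of the means collapses after matching.
print(fixed.mean(axis=0).std() < striped.mean(axis=0).std())  # True
```

    The correction is purely statistical: it assumes every detector sees, on average, the same scene statistics, which is why the method works best on the homogeneous surfaces mentioned in the abstract.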

  18. Imaging Observations of Thermal Emissions from Augustine Volcano Using a Small Astronomical Camera

    USGS Publications Warehouse

    Sentman, Davis D.; McNutt, Stephen R.; Stenbaek-Nielsen, Hans C.; Tytgat, Guy; DeRoin, Nicole

    2010-01-01

    Long-exposure visible-light images of Augustine Volcano were obtained using a charge-coupled device (CCD) camera during several nights of the 2006 eruption. The camera was located 105 km away, at Homer, Alaska, yet showed persistent bright emissions from the north flank of the volcano corresponding to steam releases, pyroclastic flows, and rockfalls originating near the summit. The apparent brightness of the emissions substantially exceeded that of the background nighttime scene. The bright signatures in the images are shown to probably be thermal emissions detected near the long-wavelength limit (~1 μm) of the CCD. Modeling of the emissions as a black-body brightness yields an apparent temperature of 400 to 450 degrees C that likely reflects an unresolved combination of emissions from hot ejecta and cooler material.
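    The black-body modeling mentioned above can be illustrated with Planck's law. This is a generic sketch, not the authors' code; evaluating near 1 μm (the CCD's long-wavelength limit) shows how steeply radiance grows with temperature there:

```python
import math

def planck_radiance(wavelength_m, temp_k):
    """Black-body spectral radiance, W / (m^2 sr m)."""
    h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    x = h * c / (wavelength_m * kB * temp_k)
    return 2.0 * h * c**2 / wavelength_m**5 / math.expm1(x)

# Near 1 micron, radiance climbs very steeply with temperature, which is
# why 400-450 C ejecta stands out against a cool nighttime scene.
lam = 1e-6
ratio = planck_radiance(lam, 450 + 273.15) / planck_radiance(lam, 400 + 273.15)
print(round(ratio, 1))  # ~4.4: a 50 C rise more than quadruples the signal
```

    Retrieving an apparent temperature from a measured brightness is the inverse problem: solve this radiance expression for temp_k given the calibrated signal.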

  19. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in optical steel coil quality assurance is to measure the width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized to measure other solids that cannot be characterized by simple geometric primitives.
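    As an illustration of fitting a parametrized curve to a detected outline by least squares, the sketch below fits a circle using the algebraic (Kasa) formulation. The circle is a stand-in only: the paper's model of a cylindrical coil outline under perspective projection is more elaborate, and its adaptive algorithm is iterative rather than a single linear solve:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit to outline points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F), then converts
    to centre (cx, cy) and radius r.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Noisy samples of a circle with centre (3, -2) and radius 5.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
rng = np.random.default_rng(1)
x = 3 + 5 * np.cos(t) + rng.normal(0, 0.01, t.size)
y = -2 + 5 * np.sin(t) + rng.normal(0, 0.01, t.size)
cx, cy, r = fit_circle(x, y)
print(round(cx, 2), round(cy, 2), round(r, 2))  # ≈ 3.0 -2.0 5.0
```

    The linearization trick, absorbing the quadratic terms into the right-hand side, is what makes a closed-form solve possible; a nonlinear refinement (as in the paper's adaptive scheme) would then minimize true geometric distance.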

  20. Temperature Dependent Operation of PSAPD-Based Compact Gamma Camera for SPECT Imaging

    Microsoft Academic Search

    Sangtaek Kim; Mickel McClish; Fares Alhassen; Youngho Seo; Kanai S. Shah; Robert G. Gould

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high

  1. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  2. Use of a compact pixellated gamma camera for small animal pinhole SPECT imaging

    Microsoft Academic Search

    Tsutomu Zeniya; Hiroshi Watabe; Toshiyuki Aoi; Kyeong Min Kim; Noboru Teramoto; Takeshi Takeno; Yoichiro Ohta; Takuya Hayashi; Hiroyuki Mashino; Toshihiro Ota; Seiichi Yamamoto; Hidehiro Iida

    2006-01-01

    Objectives  Pinhole SPECT, which permits in vivo high resolution 3D imaging of physiological functions in small animals, facilitates objective assessment of pharmaceutical development and regenerative therapy in pre-clinical trials. For handiness and mobility, the miniature size of the SPECT system is useful. We developed a small animal SPECT system based on a compact high-resolution gamma camera fitted to a pinhole collimator and

  3. Tomographic small-animal imaging using a high-resolution semiconductor camera

    Microsoft Academic Search

    George A. Kastis; Max C. Wu; Steve J. Balzer; Donald W. Wilson; Lars R. Furenlid; Gail Stevenson; H. Bradford Barber; Harrison H. Barrett; James M. Woolfenden; Patrick Kelly; Michael Appleby

    2002-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 × 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm × 5.3 cm × 3.7 cm tungsten-aluminum housing. The detector is a 2.5

  4. Observations of Mars and its satellites by the Mars Imaging Camera (MIC) on Planet-B

    Microsoft Academic Search

    Tadashi Mukai; Tokuhide Akabane; Tatsuaki Hashimoto; Hiroshi Ishimoto; Sho Sasaki; A. Inada; Anthony Toigo; Masato Nakamura; Yutaka Abe; Kei Kurita; Takeshi Imamura

    1998-01-01

    We present the specifications of the Mars Imaging Camera (MIC) on the Planet-B spin-stabilized spacecraft, and key scientific objectives of MIC observations. A non-sun-synchronous orbit of Planet-B with a large eccentricity of about 0.87 around Mars provides the opportunities (1) to observe the same region of Mars at various times of day and various solar phase angles with spatial resolution

  5. X-ray and gamma-ray imaging with multiple-pinhole cameras using a posteriori image synthesis.

    NASA Technical Reports Server (NTRS)

    Groh, G.; Hayat, G. S.; Stroke, G. W.

    1972-01-01

    In 1968, Dicke had suggested that multiple-pinhole camera systems would have significant advantages concerning the SNR in X-ray and gamma-ray astronomy if the multiple images could be somehow synthesized into a single image. The practical development of an image-synthesis method based on these suggestions is discussed. A formulation of the SNR gain theory which is particularly suited for dealing with the proposal by Dicke is considered. It is found that the SNR gain is by no means uniform in all X-ray astronomy applications.

  6. The Simultaneous Quad-Color Infrared Imaging Device (SQIID) - A leap forward in infrared cameras for astronomy

    Microsoft Academic Search

    Timothy Ellis; Raleigh Drake; A. M. Fowler; Ian Gatley; Jerry Heim; Roger Luce; K. M. Merrill; Ron Probst; Nick Buchholz

    1993-01-01

    The Simultaneous Quad-Color Infrared Imaging Device (SQIID) is the first of a new generation of infrared instruments to be put into service at the Kitt Peak National Observatory (KPNO). The camera has been configured to be modular in design and to accept new innovations in detector format as they become available. Currently the camera is equipped with four 256 x

  7. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria the frequent (1 per hour) explosions and continuous lava flow effusion may be observed from a vantage point, which is at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we have used cameras to measure dome surface movements over a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid dome uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a long-period seismic response and a bimodal infrasound pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time lapse imagery at one minute intervals. These longer time scales reveal dynamic structure to the uplift and subsidence trends, effusion rate, and surface flow patterns that are related to internal conduit pressurization. In 2012 we performed particle image velocimetry with multiple cameras spatially separated in order to reconstruct 3-dimensional surface movements.
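    Particle image velocimetry in the spatial frequency domain can be sketched with phase correlation, as below. The synthetic image and integer shift are illustrative only; real dome imagery would need patch windowing, subpixel peak interpolation, and outlier rejection:

```python
import numpy as np

def phase_correlate(frame_a, frame_b):
    """Integer-pixel displacement of frame_b relative to frame_a,
    found as the peak of the normalized cross-power spectrum (FFT)."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = np.conj(Fa) * Fb
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peaks past the midpoint into negative displacements.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# A random textured patch shifted by (3, -2) pixels between "frames".
rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
print(phase_correlate(a, b))  # (3, -2)
```

    Dividing by the cross-power magnitude whitens the spectrum, so the correlation peak stays sharp even when patch brightness changes between frames, a useful property for outdoor time-lapse imagery.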

  8. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    NASA Astrophysics Data System (ADS)

    Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas

    2011-03-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging raw images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.

  9. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific-integrated-circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages.
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  10. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific-integrated-circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages.
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  11. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete the solution is implemented as production ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission critical tasks. In addition to leveraging open source software the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum.
Having scientists prototype their work has proven very efficient, as developers do not need to spend time iterating over small changes. Instead, these changes are realized in early prototypes and implemented before the task is seen by developers. The development practices followed by the LROC SOC DevOps team help facilitate a high level of software quality that is necessary for LROC SOC operations. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production ready by professional developers. When constructed properly, even a small development team has the ability to increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on teamwork rather than software development, which may not be their primary focus. 1. Robinson et al. (2010) Space Sci. Rev. 150, 81-124 2. DeGrandis. (2011) Cutter IT Journal. Vol 24, No. 8, 34-39 3. Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E.; Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.

  12. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    SciTech Connect

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R; Smolkin, Mark E; Rehm, Patrice K; Majewski, Stan; Williams, Mark B

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc-99m sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe, followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. Mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma.

  13. Formulation of image quality prediction criteria for the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.

    1973-01-01

    Image quality criteria are defined and mathematically formulated for the prediction computer program which is to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal to encompass, but not exceed, the range of the scene radiance distribution within a single, commandable camera dynamic-range setting is also considered.

  14. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etuso; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  15. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  16. A gamma camera count rate saturation correction method for whole-body planar imaging

    PubMed Central

    Hobbs, Robert F; Baechler, Sébastien; Senthamizhchelvan, Srinivasan; Prideaux, Andrew R; Esaias, Caroline E; Reinhardt, Melvin; Frey, Eric C; Loeb, David M; Sgouros, George

    2010-01-01

    Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector’s field of view as a function of time and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between detector heads and imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton’s method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count rate to activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h. 
Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation, which takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task. PMID:20071766
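    The Newton-style inversion at the core of the correction can be sketched on a static saturation model. This is a hedged illustration only: the paralyzable dead-time model and the parameter values below are generic stand-ins, not the paper's fitted camera model, and the paper additionally makes the correction time dependent across the sweep:

```python
import math

def true_rate(measured, tau, tol=1e-9, max_iter=50):
    """Recover the true count rate n from the measured rate m under a
    paralyzable dead-time model m = n * exp(-n * tau), using Newton's
    method on f(n) = n * exp(-n * tau) - m (valid below the rollover
    point n < 1/tau)."""
    n = measured  # starting guess: assume no saturation
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - measured
        fp = math.exp(-n * tau) * (1 - n * tau)
        step = f / fp
        n -= step
        if abs(step) < tol:
            break
    return n

# Hypothetical numbers: tau = 5 microseconds, true rate 50 kcps.
tau = 5e-6
n_true = 5e4
m = n_true * math.exp(-n_true * tau)   # ~38.9 kcps actually observed
print(round(true_rate(m, tau)))  # 50000
```

    In the WB sweep case the measured rate, and hence this inversion, is re-evaluated at each time step as the activity in the field of view changes.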

  17. Linearisation of RGB Camera Responses for Quantitative Image Analysis of Visible and UV Photography: A Comparison of Two Techniques

    PubMed Central

    Garcia, Jair E.; Dyer, Adrian G.; Greentree, Andrew D.; Spring, Gale; Wilksch, Philip A.

    2013-01-01

    Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses and characterisation of spectral sensitivity curves. Two commercially-available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses. PMID:24260244

  18. Linearisation of RGB camera responses for quantitative image analysis of visible and UV photography: a comparison of two techniques.

    PubMed

    Garcia, Jair E; Dyer, Adrian G; Greentree, Andrew D; Spring, Gale; Wilksch, Philip A

    2013-01-01

    Linear camera responses are required for recovering the total amount of incident irradiance, quantitative image analysis, spectral reconstruction from camera responses and characterisation of spectral sensitivity curves. Two commercially-available digital cameras equipped with Bayer filter arrays and sensitive to visible and near-UV radiation were characterised using biexponential and Bézier curves. Both methods successfully fitted the entire characteristic curve of the tested devices, allowing for an accurate recovery of linear camera responses, particularly those corresponding to the middle of the exposure range. Nevertheless the two methods differ in the nature of the required input parameters and the uncertainty associated with the recovered linear camera responses obtained at the extreme ends of the exposure range. Here we demonstrate the use of both methods for retrieving information about scene irradiance, describing and quantifying the uncertainty involved in the estimation of linear camera responses. PMID:24260244

  19. Performance of CID camera X-ray imagers at NIF in a harsh neutron environment

    SciTech Connect

    Palmer, N. E. [LLNL; Schneider, M. B. [LLNL; Bell, P. M. [LLNL; Piston, K. W. [LLNL; Moody, J. D. [LLNL; James, D. L. [LLNL; Ness, R. A. [LLNL; Haugh, M. J. [NSTec; Lee, J. J. [NSTec; Romano, E. D. [NSTec

    2013-09-01

    Charge-injection devices (CIDs) are solid-state 2D imaging sensors similar to CCDs, but their distinct architecture makes CIDs more resistant to ionizing radiation [1–3]. CID cameras have been used extensively for X-ray imaging at the OMEGA Laser Facility [4,5] with neutron fluences at the sensor approaching 10⁹ n/cm² (DT, 14 MeV). A CID Camera X-ray Imager (CCXI) system has been designed and implemented at NIF that can be used as a rad-hard electronic-readout alternative for time-integrated X-ray imaging. This paper describes the design and implementation of the system, calibration of the sensor for X-rays in the 3 – 14 keV energy range, and preliminary data acquired on NIF shots over a range of neutron yields. The upper limit of neutron fluence at which CCXI can acquire usable images is ~10⁸ n/cm², and there are noise problems that need further improvement, but the sensor has proven to be very robust, surviving high-yield shots (~10¹⁴ DT neutrons) with minimal damage.

  20. Improvement of a snapshot spectroscopic retinal multi-aperture imaging camera

    NASA Astrophysics Data System (ADS)

    Lemaillet, Paul; Lompado, Art; Ramella-Roman, Jessica C.

    2009-02-01

    Measurement of oxygen saturation has proved to give important information about eye health and the onset of eye pathologies such as diabetic retinopathy. Recently, we presented a multi-aperture system enabling snapshot acquisition of human fundus images at six different wavelengths. In our setup a commercial fundus ophthalmoscope was interfaced with the multi-aperture system to acquire spectroscopically sensitive images of the retinal vessels, thus enabling assessment of the oxygen saturation in the retina. Snapshot spectroscopic acquisition is meant to minimize the effects of eye movements. Higher measurement accuracy can be achieved by increasing the number of wavelengths at which the fundus images are taken. In this study we present an improvement of our setup: another multi-aperture camera that enables us to take snapshot images of the fundus at nine different wavelengths. Careful consideration is taken to improve image transfer by measuring the optical properties of the fundus camera used in the setup and modeling the optical train in Zemax.

  1. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  2. Development of a portable 3CCD camera system for multispectral imaging of biological samples.

    PubMed

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  3. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF) or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient, vascular abnormalities were detected with the AOFI that could not be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles, are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optics images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.
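When the PSF is available, the "direct deconvolution" step can be approximated with a regularised (Wiener) inverse filter. A minimal sketch, not the AOFI pipeline itself; the checkerboard test pattern, Gaussian PSF width, and regularisation constant k are all assumptions:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-4):
    """Direct deconvolution with a measured PSF via a Wiener filter;
    k is a noise-regularisation constant (tuned per image)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))       # PSF spectrum, origin at (0, 0)
    W = np.conj(H) / (np.abs(H) ** 2 + k)        # regularised inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Demo: blur a synthetic test pattern with a known Gaussian PSF, then restore
n = 64
y, x = np.mgrid[:n, :n]
scene = ((x // 8 + y // 8) % 2).astype(float)    # checkerboard stand-in for structure
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The constant k trades noise amplification against residual blur; blind deconvolution replaces the measured PSF with one estimated jointly from the image.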

  4. 200 ps FWHM and 100 MHz repetition rate ultrafast gated camera for optical medical functional imaging

    NASA Astrophysics Data System (ADS)

    Uhring, Wilfried; Poulet, Patrick; Hanselmann, Walter; Glazenborg, René; Zint, Virginie; Nouizi, Farouk; Dubois, Benoit; Hirschi, Werner

    2012-04-01

    The paper describes the realization of a complete optical imaging device for clinical applications such as brain functional imaging by time-resolved, spectroscopic diffuse optical tomography. The entire instrument is assembled in a unique setup that includes a light source, an ultrafast time-gated intensified camera, and all the electronic control units. The light source is composed of four near-infrared laser diodes driven by a nanosecond electrical pulse generator working in a sequential mode at a repetition rate of 100 MHz. The resulting light pulses, at four wavelengths, are less than 80 ps FWHM. They are injected into a four-furcated optical fiber ended with a frontal light distributor to obtain a uniform illumination spot directed towards the head of the patient. Photons back-scattered by the subject are detected by the intensified CCD camera; they are resolved according to their time of flight inside the head. The very core of the intensified camera system is the image intensifier tube and its associated electrical pulse generator. The ultrafast generator produces 50 V pulses, at a repetition rate of 100 MHz and a width corresponding to the 200 ps requested gate. The photocathode and the micro-channel plate of the intensifier have been specially designed to enhance the electromagnetic wave propagation and reduce the power loss and heat that are prejudicial to the quality of the image. The whole instrumentation system is controlled by an FPGA-based module. The timing of the light pulses and the photocathode gating is precisely adjustable with a step of 9 ps. All the acquisition parameters are configurable via software through a USB plug and the image data are transferred to a PC via an Ethernet link. The compactness of the device makes it well suited for bedside clinical applications.

  5. A compact 16-module camera using 64-pixel CsI(Tl)\\/Si p-i-n photodiode imaging modules

    Microsoft Academic Search

    W.-S. Choong; G. J. Gruber; W. W. Moses; S. E. Derenzo; S. E. Holland; M. Pedrali-Noy; B. Krieger; E. Mandelli; G. Meddeler; N. W. Wang; E. K. Witt

    2002-01-01

    We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules resulting in a 1024-pixel camera covering an area of 9.6 cm×9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm×3 mm×5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each

  6. High performance gel imaging with a commercial single lens reflex camera

    NASA Astrophysics Data System (ADS)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  7. Dual-mode laparoscopic fluorescence image-guided surgery using a single camera

    PubMed Central

    Gray, Daniel C.; Kim, Evgenia M.; Cotero, Victoria E.; Bajaj, Anshika; Staudinger, V. Paul; Hehir, Cristina A. Tan; Yazdanfar, Siavash

    2012-01-01

    Iatrogenic nerve damage is a leading cause of morbidity associated with many common surgical procedures. Complications arising from these injuries may result in loss of function and/or sensation, muscle atrophy, and chronic neuropathy. Fluorescence image-guided surgery offers a potential solution for avoiding intraoperative nerve damage by highlighting nerves that are otherwise difficult to visualize. In this work we present the development of a single camera, dual-mode laparoscope that provides near simultaneous display of white-light and fluorescence images of nerves. The capability of the instrumentation is demonstrated through imaging several types of in situ rat nerves via a nerve specific contrast agent. Full color white light and high brightness fluorescence images and video of nerves as small as 100 µm in diameter are presented. PMID:22876351

  8. Non-contact imaging of venous compliance in humans using an RGB camera

    NASA Astrophysics Data System (ADS)

    Nakano, Kazuya; Satoh, Ryota; Hoshi, Akira; Matsuda, Ryohei; Suzuki, Hiroyuki; Nishidate, Izumi

    2015-03-01

    We propose a technique for non-contact imaging of venous compliance that uses a red, green, and blue (RGB) camera. Any change in blood concentration is estimated from an RGB image of the skin, and a regression formula is calculated from that change. Venous compliance is obtained from a differential form of the regression formula. In vivo experiments with human subjects confirmed that the proposed method does differentiate the venous compliances among individuals. In addition, an image of venous compliance is obtained by performing the above procedures for each pixel. Thus, we can measure venous compliance without physical contact with sensors and, from the resulting images, observe the spatial distribution of venous compliance, which correlates with the distribution of veins.
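The "fit a regression formula, then take its differential form" step can be sketched as below. The cuff pressures, concentration estimates, and cubic regression model are hypothetical stand-ins; the abstract does not give the actual regression form:

```python
import numpy as np

# Hypothetical per-pixel measurements: estimated blood-concentration change
# (arbitrary units) at a series of venous-occlusion cuff pressures (mmHg)
pressure = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
delta_c = np.array([0.00, 0.12, 0.28, 0.40, 0.46, 0.49])

# Regression formula: fit delta_c as a cubic polynomial in pressure, then
# take its differential form: compliance is the slope d(delta_c)/dp
coeffs = np.polyfit(pressure, delta_c, 3)
compliance = np.polyval(np.polyder(coeffs), pressure)
# Repeating this fit and differentiation for every pixel of the RGB-derived
# concentration maps yields a spatial image of venous compliance
```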

  9. In situ X-ray beam imaging using an off-axis magnifying coded aperture camera system

    PubMed Central

    Kachatkou, Anton; Kyele, Nicholas; Scott, Peter; van Silfhout, Roelof

    2013-01-01

    An imaging model and an image reconstruction algorithm for a transparent X-ray beam imaging and position measuring instrument are presented. The instrument relies on a coded aperture camera to record magnified images of the footprint of the incident beam on a thin foil placed in the beam at an oblique angle. The imaging model represents the instrument as a linear system whose impulse response takes into account the image blur owing to the finite thickness of the foil, the shape and size of the camera’s aperture, and the detector’s point-spread function. The image reconstruction algorithm first removes the image blur using the modelled impulse response function and then corrects for geometrical distortions caused by the foil tilt. The performance of the image reconstruction algorithm was tested in experiments at synchrotron radiation beamlines. The results show that the proposed imaging system produces images of the X-ray beam cross section with a quality comparable to that of images obtained using X-ray cameras that are exposed to the direct beam. PMID:23765302

  10. Camera Response Functions for Image Forensics (IEEE Transactions on Information Forensics and Security, vol. 5, no. 4, December 2010, p. 816)

    E-print Network

    Chang, Shih-Fu

    …a fundamental property in cameras mapping input irradiance to output image intensity. For a test image, segment-level scores are computed and fed to statistical classifiers; such segment-level scores are further fused to infer the image-level authenticity. Tests on two data sets reach performance levels of 70% precision and 70% recall

  11. Quantitative Evaluation of Scintillation Camera Imaging Characteristics of Isotopes Used in Liver Radioembolization

    PubMed Central

    Elschot, Mattijs; Nijsen, Johannes Franciscus Wilhelmus; Dam, Alida Johanna; de Jong, Hugo Wilhelmus Antonius Maria

    2011-01-01

    Background Scintillation camera imaging is used for treatment planning and post-treatment dosimetry in liver radioembolization (RE). In yttrium-90 (90Y) RE, scintigraphic images of technetium-99m (99mTc) are used for treatment planning, while 90Y Bremsstrahlung images are used for post-treatment dosimetry. In holmium-166 (166Ho) RE, scintigraphic images of 166Ho can be used for both treatment planning and post-treatment dosimetry. The aim of this study is to quantitatively evaluate and compare the imaging characteristics of these three isotopes, so that imaging protocols can be optimized and RE studies with varying isotopes can be compared. Methodology/Principal Findings Phantom experiments were performed in line with NEMA guidelines to assess the spatial resolution, sensitivity, count rate linearity, and contrast recovery of 99mTc, 90Y and 166Ho. In addition, Monte Carlo simulations were performed to obtain detailed information about the history of detected photons. The results showed that the use of a broad energy window and the high-energy collimator gave the optimal combination of sensitivity, spatial resolution, and primary photon fraction for 90Y Bremsstrahlung imaging, although differences with the medium-energy collimator were small. For 166Ho, the high-energy collimator also slightly outperformed the medium-energy collimator. In comparison with 99mTc, the image quality of both 90Y and 166Ho is degraded by a lower spatial resolution, a lower sensitivity, and larger scatter and collimator penetration fractions. Conclusions/Significance The quantitative evaluation of the scintillation camera characteristics presented in this study helps to optimize acquisition parameters and supports future analysis of clinical comparisons between RE studies. PMID:22073149
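Spatial resolution in such phantom measurements is conventionally reported as the FWHM of a point- or line-source profile. A minimal moment-based sketch (the NEMA procedure itself fits the measured profile; the 10 mm width and grid below are illustrative):

```python
import numpy as np

def fwhm_from_profile(x, counts):
    """Spatial resolution as the FWHM (in the units of x) of a point/line
    source profile, assuming an approximately Gaussian camera response."""
    p = counts / counts.sum()                       # normalise to a distribution
    mu = (x * p).sum()                              # profile centroid
    sigma = np.sqrt((((x - mu) ** 2) * p).sum())    # second central moment
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma # FWHM = 2.355 * sigma

# Synthetic line-source profile with a known 10 mm FWHM
x = np.linspace(-50.0, 50.0, 501)                   # position across the detector, mm
sigma = 10.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
counts = np.exp(-0.5 * (x / sigma) ** 2)
```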

  12. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, including PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
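Least-squares extraction of intensity and polarization parameters from modulated samples can be sketched as below. This uses a generic linear polarimetric model, s = ½(I + Q cos 2θ + U sin 2θ), not MSPI's actual on-board modulation scheme; the analyzer angles and Stokes values are invented:

```python
import numpy as np

# Hypothetical analyzer angles and noiseless signal samples
theta = np.deg2rad(np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0]))
I_true, Q_true, U_true = 2.0, 0.6, -0.3
meas = 0.5 * (I_true + Q_true * np.cos(2 * theta) + U_true * np.sin(2 * theta))

# Design matrix of the linear model; least-squares inversion recovers the
# intensity and polarimetric parameters from the sample vector
A = 0.5 * np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
I_fit, Q_fit, U_fit = np.linalg.lstsq(A, meas, rcond=None)[0]
dolp = np.hypot(Q_fit, U_fit) / I_fit   # degree of linear polarization
```

Because the model is linear in (I, Q, U), the fit reduces to one small matrix solve per pixel, which is what makes a real-time FPGA implementation tractable.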

  13. Orienting the camera and firing lasers to enhance large scale particle image velocimetry for streamflow monitoring

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Porfiri, Maurizio; Grimaldi, Salvatore

    2014-09-01

    Large scale particle image velocimetry (LSPIV) is a nonintrusive methodology for continuous surface flow monitoring in natural environments. Recent experimental studies demonstrate that LSPIV is a promising technique to estimate flow discharge in riverine systems. Traditionally, LSPIV implementations are based on the use of angled cameras to capture extended fields of view; images are then orthorectified and calibrated through the acquisition of ground reference points. As widely documented in the literature, the identification of ground reference points and image orthorectification are major hurdles in LSPIV. Here we develop an experimental apparatus to address both of these issues. The proposed platform includes a laser system for remote frame calibration and a low-cost camera that is maintained orthogonal with respect to the water surface to minimize image distortions. We study the feasibility of the apparatus on two complex natural riverine environments where the acquisition of ground reference points is prevented and illumination and seeding density conditions are challenging. While our results confirm that velocity estimations can be severely affected by inhomogeneously seeded surface tracers and adverse illumination settings, they demonstrate that LSPIV implementations can benefit from the proposed apparatus. Specifically, the presented system opens novel avenues in the development of stand-alone platforms for remote surface flow monitoring.
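At the core of LSPIV is cross-correlating interrogation windows between successive frames to find the displacement of surface tracer patterns. A minimal FFT-based sketch with integer-pixel accuracy only (real LSPIV adds orthorectification, sub-pixel peak fitting, and scaling by pixel size and frame interval):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of the tracer pattern between
    two interrogation windows via FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window around to negative values
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels

# Demo: a random tracer pattern shifted by (3, -2) pixels between frames
rng = np.random.default_rng(1)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, (3, -2), axis=(0, 1))
```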

  14. Modeling of three-dimensional camera imaging in a tokamak torus

    SciTech Connect

    Edmonds, P.H. [Fusion Research Center, University of Texas at Austin, Austin, Texas (United States)]; Medley, S.S. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)]

    1997-01-01

    A procedure is described for precision modeling of the views for imaging diagnostics monitoring tokamak internal components, particularly high heat flux divertor components. Such modeling is required to enable predictions of resolution and viewing angle for the available viewing locations. Since oblique views are typically expected for tokamak divertors, fully three-dimensional (3D) perspective imaging is required. A suite of matched 3D CAD, graphics and animation applications are used to provide a fast and flexible technique for reproducing these views. An analytic calculation of the resolution and viewing incidence angle is developed to validate the results of the modeling procedures. The tokamak physics experiment (TPX) diagnostics [1] for infrared viewing are used as an example to demonstrate the implementation of the tools. As is generally the case in tokamak experiments, the available diagnostic locations for TPX are severely constrained by access limitations and the resulting images can be marginal in both resolution and viewing incidence angle. The procedures described here provide a complete design tool for in-vessel viewing, both for camera location and for identification of viewed surfaces. Additionally, these same tools can be used for the interpretation of the actual images obtained by the diagnostic cameras. © 1997 American Institute of Physics.
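In its simplest form, the analytic resolution/viewing-angle check reduces to how a pixel's footprint on the viewed surface grows with incidence angle. A simplified sketch under thin-lens, flat-surface assumptions (the distances, pixel pitch, and angle are invented; a real diagnostic model must also handle distortion and occlusion):

```python
import math

def pixel_footprint(distance, pixel_pitch, focal_length, incidence_deg):
    """Surface footprint of one detector pixel viewed at incidence_deg from
    the surface normal. Returns (across-view, along-view) sizes in the same
    units as `distance`."""
    gsd = distance * pixel_pitch / focal_length   # footprint at normal incidence
    theta = math.radians(incidence_deg)
    return gsd, gsd / math.cos(theta)             # oblique viewing stretches along-view

# e.g. a camera 2.0 m from a divertor tile, 10 um pixels, 50 mm lens, 70 deg view
across, along = pixel_footprint(2.0, 10e-6, 50e-3, 70.0)
```

At 70 degrees incidence the along-view footprint is roughly three times the across-view footprint, which is why marginal oblique views need explicit validation.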

  15. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.

  16. Megapixel imaging camera for expanded H⁻ beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H. [Los Alamos National Lab., NM (United States); McCurnin, T.W.; Sanchez, P.G. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States). Los Alamos Operations

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with pixel size of 19 × 19 µm² and with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via Ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10⁵ electron charges.
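The double-correlated sampling step can be illustrated numerically: each pixel is sampled once at its reset level and once after charge transfer, and the difference cancels the common reset (switching) offset. A toy sketch with invented noise figures, not the actual analog signal chain:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels = 1000
signal = rng.uniform(100.0, 4000.0, n_pixels)    # true pixel signals (ADC counts)
reset_noise = rng.normal(0.0, 50.0, n_pixels)    # reset offset, common to both samples

# Two correlated samples per pixel: reset level, then reset level plus signal
sample_reset = reset_noise + rng.normal(0.0, 0.5, n_pixels)
sample_signal = reset_noise + signal + rng.normal(0.0, 0.5, n_pixels)

cds = sample_signal - sample_reset               # reset noise cancels in the difference
```

The residual noise after differencing is set by the two uncorrelated read-noise terms (about 0.7 counts rms here), while the raw samples carry the full 50-count reset noise.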

  17. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 µm pixel size) at different temperatures was evaluated. Comparison of image quality was made at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C degraded in comparison to the reconstructed image quality at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not need nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  18. Quantigraphic Imaging: Estimating the camera response and exposures from differently exposed images

    E-print Network

    Mann, Richard

    …however, camera gain varies to compensate for the varying quantity of light by way of Automatic Gain Control. Almost all cameras have some kind of automatic exposure feature. Generally automatic exposure is center weighted, so that when a light object falls in the center of the frame the exposure

  19. The co-imaging of gamma camera measurements of aerosol deposition and respiratory anatomy.

    PubMed

    Conway, Joy; Fleming, John; Bennett, Michael; Havelock, Tom

    2013-06-01

    The use of gamma camera imaging following the inhalation of a radiolabel has been widely used by researchers to investigate the fate of inhaled aerosols. The application of two-dimensional (2D) planar gamma scintigraphy and single-photon emission computed tomography (SPECT) to the study of inhaled aerosols is discussed in this review. Information on co-localized anatomy can be derived from other imaging techniques such as krypton ventilation scans and low- and high-resolution X-ray computed tomography (CT). Radionuclide imaging, combined with information on anatomy, is a potentially useful approach when the understanding of regional deposition within the lung is central to research objectives for following disease progression and for the evaluation of therapeutic intervention. PMID:23517170

  20. High-Contrast Exoplanet Imaging with CLIO2, the Magellan Adaptive Optics Infrared Camera

    NASA Astrophysics Data System (ADS)

    Morzinski, Katie; Close, Laird; Males, Jared; Hinz, Philip; Puglisi, Alfio; Esposito, Simone; Riccardi, Armando; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Follette, Kate; Kopon, Derek; Skemer, Andy; Gasho, Victor; Uomoto, Alan; Hare, Tyson; Arcidiacono, Carmelo; Quiros-Pacheco, Fernando; Argomedo, Javier; Busoni, Lorenzo; Rodigas, Timothy; Wu, Ya-Lin

    2013-12-01

    MagAO is the adaptive-secondary AO system on the 6.5-m Magellan Clay telescope. With a high actuator density and a sensitive pyramid WFS, MagAO achieves down to ~130 nm rms WFE on bright guide stars in median seeing conditions (0.7'' V band) at Las Campanas Observatory in Chile. MagAO's infrared camera, Clio2, has a comprehensive suite of narrow- and broad-band filters that allow direct imaging of faint companions from 1-5 µm. We present first-light results from Clio2, including images of the exoplanetary system Beta Pictoris. High-contrast imaging is an important goal of AO for ELTs, and results from MagAO/Clio2 are the next step along that path, particularly for the GMT, which is located very close to the Magellan site.

  1. Improved directional weighted interpolation method for single-sensor camera imaging

    NASA Astrophysics Data System (ADS)

    He, Liwen; Chen, Xiangdong; Jeon, Gwanggil; Jeong, Jechang

    2014-09-01

    We present an improved directional weighted interpolation method for single-sensor camera imaging. Observing that conventional directional weighted interpolation methods are based on unreliable assumptions about spectral correlation, one contribution of this work is the use of an antialiasing finite impulse response filter to improve interpolation accuracy by exploiting robust spectral correlation. We also refine the interpolation result by using the gradient inverse weighted filtering method. An experimental analysis of images revealed that our proposed algorithm provided superior performance in terms of both objective and subjective image quality compared to conventional directional weighted demosaicking algorithms. Our implementation has very low complexity and is, therefore, well suited for real-time applications.
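The core idea of directional weighted demosaicking can be shown at a single Bayer site: each interpolation direction is weighted by the inverse of its local gradient, so edges are interpolated along rather than across. A toy sketch, not the paper's algorithm (which adds antialias prefiltering and gradient-inverse refinement); the pixel values are invented:

```python
def green_at_center(g_n, g_s, g_w, g_e, eps=1.0):
    """Directional weighted estimate of the missing green value at a red/blue
    Bayer site from its four green neighbours (north, south, west, east).
    Each direction is weighted by the inverse of its gradient magnitude."""
    pairs = [(g_n, g_s), (g_w, g_e)]                 # vertical, horizontal
    grads = [abs(g_n - g_s), abs(g_w - g_e)]
    weights = [1.0 / (eps + g) for g in grads]       # flat direction -> large weight
    est = sum(w * (a + b) / 2.0 for w, (a, b) in zip(weights, pairs))
    return est / sum(weights)

# Near a vertical edge the horizontal gradient is large, so the estimate
# leans almost entirely on the smooth vertical neighbours:
val = green_at_center(g_n=100.0, g_s=102.0, g_w=40.0, g_e=180.0)
```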

  2. Toward Real-time quantum imaging with a single pixel camera

    SciTech Connect

Lawrie, Benjamin J. [ORNL]; Pooser, Raphael C. [ORNL]

    2013-01-01

    We present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively transmit macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. In low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  3. Toward ultrafast high-DQE and multi-image CZT gamma-camera prospecting

    NASA Astrophysics Data System (ADS)

    Gerstenmayer, Jean-Louis; Glasser, Francis; Desbat, Laurent; Allouche, Virginie

    2003-07-01

The development of a high-frame-rate imaging system with a high-energy X-ray detector is discussed. The purpose of this paper is to highlight some of the issues involved in developing high-performance position-sensitive X- and gamma-ray cameras for high-frame-rate imaging. New CZT technology has provided prototypes offering more than 50% stopping power (and millimetric spatial resolution) for 5 MeV X-ray pulses. Several CdTe and CdZnTe sensors were tested with MeV-energy photons produced by the accelerators ELSA and ARCO (CEA Bruyeres-le-Chatel). The first experimental results obtained at CEA with 20 ps long pulses are very encouraging for high-energy, high-frame-rate imaging applications.

  4. SWIR Geiger-mode APD detectors and cameras for 3D imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Krishnamachari, Uppili; Owens, Mark; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2014-06-01

The operation of avalanche photodiodes in Geiger mode by arming these detectors above their breakdown voltage provides high-performance single photon detection in a robust solid-state device platform. Moreover, these devices are ideally suited for integration into large format focal plane arrays enabling single photon imaging. We describe the design and performance of short-wave infrared 3D imaging cameras with focal plane arrays (FPAs) based on Geiger-mode avalanche photodiodes (GmAPDs) with single photon sensitivity for laser radar imaging applications. The FPA pixels incorporate InP/InGaAs(P) GmAPDs for the detection of single photons with high efficiency and low dark count rates. We present results and attributes of fully integrated camera sub-systems with 32 × 32 and 128 × 32 formats, which have 100 µm pitch and 50 µm pitch, respectively. We also address the sensitivity of the fundamental GmAPD detectors to radiation exposure, including recent results that correlate detector active region volume to sustainable radiation tolerance levels.

  5. Light sources and cameras for standard in vitro membrane potential and high-speed ion imaging.

    PubMed

    Davies, R; Graham, J; Canepari, M

    2013-07-01

Membrane potential and fast ion imaging are now standard optical techniques routinely used to record dynamic physiological signals in several preparations in vitro. Although detailed resolution of optical signals can be improved by confocal or two-photon microscopy, high spatial and temporal resolution can be obtained using conventional microscopy and affordable light sources and cameras. Thus, standard wide-field imaging methods are still the most common in research laboratories and can often produce measurements with a signal-to-noise ratio that is superior to other optical approaches. This paper seeks to review the most important instrumentation used in these experiments, with particular reference to recent technological advances. We analyse in detail the optical constraints dictating the type of signals that are obtained with voltage and ion imaging, and we discuss how to use this information to choose the optimal apparatus. Then, we discuss the available light sources with specific attention to light emitting diodes and solid state lasers. We then address the current state-of-the-art of available charge coupled device, electron multiplying charge coupled device and complementary metal oxide semiconductor cameras, and we analyse the characteristics that need to be taken into account for the choice of an optimal detector. Finally, we conclude by discussing prospective future developments that are likely to further improve the quality of the signals, expanding the capability of the techniques and opening the door to novel applications. PMID:23692638

  6. Mars Orbiter Camera High Resolution Images: Some Results From The First 6 Weeks In Orbit

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) images acquired shortly after orbit insertion were relatively poor in both resolution and image quality. This poor performance was solely the result of low sunlight conditions and the relative distance to the planet, both of which have been progressively improving over the past six weeks. Some of the better images are used here (see PIA01021 through PIA01029) to illustrate how the MOC images provide substantially better views of the martian surface than have ever been recorded previously from orbit.

    This U.S. Geological Survey shaded relief map provides an overall context for the MGS MOC images of the Tithonium/Ius Chasma, Ganges Chasma, and Schiaparelli Crater. Closeup images of the Tithonium/Ius Chasma area are visible in PIA01021 through PIA01023. Closeups of Ganges Chasma are available as PIA01027 through PIA01029, and Schiaparelli Crater is shown in PIA01024 through PIA01026. The Mars Pathfinder landing site is shown to the north of the sites of the MGS images.

    Launched on November 7, 1996, Mars Global Surveyor entered Mars orbit on Thursday, September 11, 1997. The original mission plan called for using friction with the planet's atmosphere to reduce the orbital energy, leading to a two-year mapping mission from close, circular orbit (beginning in March 1998). Owing to difficulties with one of the two solar panels, aerobraking was suspended in mid-October and resumed on November 8. Many of the original objectives of the mission, and in particular those of the camera, are likely to be accomplished as the mission progresses.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  7. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.

  8. Using different interpolation techniques in unwrapping the distorted images from panoramic annular lens camera

    NASA Astrophysics Data System (ADS)

    Yu, Guo; Fu, Lingjin; Bai, Jian

    2010-11-01

The camera using a panoramic annular lens (PAL) can capture the surrounding scene in a 360° view without any scanning component. Due to severe distortions, the image formed by the PAL must be unwrapped into a perspective-view image to be consistent with human viewing habits. However, unfilled pixels may remain after unwrapping because of the non-uniform resolution of the PAL image, so interpolation must be employed in the forward-projection unwrapping phase. We evaluated the performance of several interpolation techniques for unwrapping the PAL image on a series of frequency-patterned images, as a simulation, using three image quality indexes: MSE, SSIM, and S-CIELAB. The experimental results revealed that all the interpolation methods performed better on low-frequency PAL images. The Bicubic, Ferguson, and Newton interpolations performed relatively better at higher frequencies, while the Bilinear and Bezier methods achieved better results at lower frequencies. The Nearest method had the poorest performance overall, while the Ferguson interpolation was excellent at both high and low frequencies.
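
The forward-projection unwrapping step resamples the annular image at fractional coordinates; a minimal bilinear sampler, one of the interpolation techniques compared above, might look like this. It is a sketch, not the authors' code:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate img at a fractional (y, x) position,
    the kind of resampling used when unwrapping an annular image
    into a perspective view."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    # Blend horizontally on the top and bottom rows, then vertically.
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```

Higher-order kernels (bicubic, Ferguson, Newton) replace the two-point blend with four-point fits per axis, which is why they preserve high-frequency content better at the cost of more arithmetic per output pixel.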

  9. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal ones. The dimension reduction of the features and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
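
The two-stage LDA-then-SVM scheme described above can be sketched with scikit-learn on synthetic stand-in features; the data, feature dimensions, and parameters here are all assumptions, not the paper's:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-in features: rows are time windows, columns are combined
# EEG frequency-domain and camera motion features; labels mark head
# movement (1) vs. rest (0). All values here are synthetic.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(3, 1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)

# LDA reduces the feature dimension, then an SVM classifies,
# mirroring the two-stage scheme described in the abstract.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
clf.fit(X, y)
accuracy = clf.score(X, y)  # near-perfect on this separable toy data
```

For two classes, LDA can project onto at most one discriminant axis, which is why `n_components=1` is used here.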

  10. Intensified array camera imaging of solid surface combustion aboard the NASA Learjet

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1992-01-01

An intensified array camera was used to image weakly luminous flames spreading over thermally thin paper samples in a low gravity environment aboard the NASA-Lewis Learjet. The aircraft offers 10 to 20 sec of reduced gravity during execution of a Keplerian trajectory and allows the use of instrumentation that is delicate or requires higher electrical power than is available in drop towers. The intensified array camera is a charge intensified device type that responds to light between 400 and 900 nm and has a minimum sensitivity of 10^-6 footcandles. The paper sample, either ashless filter paper or a lab wiper, burns inside a sealed chamber which is filled with 21, 18, or 15 percent oxygen in nitrogen at one atmosphere. The camera views the edge of the paper and its output is recorded on videotape. Flame positions are measured every 0.1 sec to calculate flame spread rates. Comparisons with drop tower data indicate that the flame shapes and spread rates are affected by the residual g level in the aircraft.
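
The flame spread rate computation described above, with positions measured every 0.1 s, reduces to a line fit of position against time. A minimal sketch with hypothetical values:

```python
import numpy as np

def spread_rate(positions, dt=0.1):
    """Estimate the flame spread rate (distance units per second) by a
    least-squares line fit to flame-front positions sampled every dt
    seconds, as in the videotape analysis described above."""
    t = np.arange(len(positions)) * dt
    slope, _intercept = np.polyfit(t, np.asarray(positions, dtype=float), 1)
    return slope

# Hypothetical flame-front positions (cm) at 0.1 s intervals.
rate = spread_rate([0.0, 0.5, 1.0, 1.5])  # 0.5 cm per 0.1 s = 5 cm/s
```

A least-squares fit over many frames is more robust to per-frame measurement jitter than differencing adjacent positions.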

  11. Intensified array camera imaging of solid surface combustion aboard the NASA Learjet

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1992-01-01

An intensified array camera has been used to image weakly luminous flames spreading over thermally thin paper samples in a low-gravity environment aboard the NASA-Lewis Learjet. The aircraft offers 10 to 20 sec of reduced gravity during execution of a Keplerian trajectory and allows the use of instrumentation that is delicate or requires higher electrical power than is available in drop towers. The intensified array camera is a charge intensified device type that responds to light between 400 and 900 nm and has a minimum sensitivity of 10^-6 footcandles. The paper sample, either ashless filter paper or a lab wiper, burns inside a sealed chamber which is filled with 21, 18, or 15 percent oxygen in nitrogen at one atmosphere. The camera views the edge of the paper and its output is recorded on videotape. Flame positions are measured every 0.1 sec to calculate flame spread rates. Comparisons with drop tower data indicate that the flame shapes and spread rates are affected by the residual g level in the aircraft.

  12. A 3D HIDAC-PET camera with sub-millimetre resolution for imaging small animals

    Microsoft Academic Search

    A. P. Jeavons; R. A. Chandler; C. A. R. Dettmar

    1999-01-01

A HIDAC-PET camera consisting essentially of 5 million 0.5 mm gas avalanching detectors has been constructed for small-animal imaging. The particular HIDAC advantage, a high 3D spatial resolution, has been improved to 0.95 mm fwhm and to 0.7 mm fwhm when reconstructing with 3D-OSEM methods incorporating resolution recovery. A depth-of-interaction resolution of 2.5 mm is implicit, due to the laminar construction. Scatter-corrected

  13. Digital imaging microscopy: the marriage of spectroscopy and the solid state CCD camera

    NASA Astrophysics Data System (ADS)

    Jovin, Thomas M.; Arndt-Jovin, Donna J.

    1991-12-01

    Biological samples have been imaged using microscopes equipped with slow-scan CCD cameras. Examples are presented of studies based on the detection of light emission signals in the form of fluorescence and phosphorescence. They include applications in the field of cell biology: (a) replication and topology of mammalian cell nuclei; (b) cytogenetic analysis of human metaphase chromosomes; and (c) time-resolved measurements of DNA-binding dyes in cells and on isolated chromosomes, as well as of mammalian cell surface antigens, using the phosphorescence of acridine orange and fluorescence resonance energy transfer of labeled lectins, respectively.

  14. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.

  15. Tomographic small-animal imaging using a high-resolution semiconductor camera

    Microsoft Academic Search

    G. A. Kastis; M. C. Wu; S. J. Balzer; D. W. Wilson; L. R. Furenlid; G. Stevenson; H. B. Barber; H. H. Barrett; J. M. Woolfenden; P. Kelly; M. Appleby

    2000-01-01

    The authors have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64×64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm×5.3 cm×3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm×2.5 cm×0.15 cm slab of

  16. Evaluation of a large format image tube camera for the shuttle sortie mission

    NASA Technical Reports Server (NTRS)

    Tifft, W. C.

    1976-01-01

A large format image tube camera of a type under consideration for use on the Space Shuttle Sortie Missions is evaluated. The evaluation covers the following subjects: (1) resolving power of the system; (2) geometrical characteristics of the system (distortion, etc.); (3) shear characteristics of the fiber optic coupling; (4) background effects in the tube; (5) uniformity of response of the tube (as a function of wavelength); (6) detective quantum efficiency of the system; and (7) astronomical applications of the system. It must be noted that many of these characteristics are quantitatively unique to the particular tube under discussion and serve primarily to suggest what is possible with this type of tube.

  17. New Mars Camera's First Image of Mars from Mapping Orbit (Full Frame)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The high resolution camera on NASA's Mars Reconnaissance Orbiter captured its first image of Mars in the mapping orbit, demonstrating the full resolution capability, on Sept. 29, 2006. The High Resolution Imaging Science Experiment (HiRISE) acquired this first image at 8:16 AM (Pacific Time). With the spacecraft at an altitude of 280 kilometers (174 miles), the image scale is 25 centimeters per pixel (10 inches per pixel). If a person were located on this part of Mars, he or she would just barely be visible in this image.

    The image covers a small portion of the floor of Ius Chasma, one branch of the giant Valles Marineris system of canyons. The image illustrates a variety of processes that have shaped the Martian surface. There are bedrock exposures of layered materials, which could be sedimentary rocks deposited in water or from the air. Some of the bedrock has been faulted and folded, perhaps the result of large-scale forces in the crust or from a giant landslide. The image resolves rocks as small as 90 centimeters (3 feet) in diameter. It includes many dunes or ridges of windblown sand.

    This image (TRA_000823_1720) was taken by the High Resolution Imaging Science Experiment camera onboard the Mars Reconnaissance Orbiter spacecraft on Sept. 29, 2006. Shown here is the full image, centered at minus 7.8 degrees latitude, 279.5 degrees east longitude. The image is oriented such that north is to the top. The range to the target site was 297 kilometers (185.6 miles). At this distance the image scale is 25 centimeters (10 inches) per pixel (with one-by-one binning) so objects about 75 centimeters (30 inches) across are resolved. The image was taken at a local Mars time of 3:30 PM and the scene is illuminated from the west with a solar incidence angle of 59.7 degrees, thus the sun was about 30.3 degrees above the horizon. The season on Mars is northern winter, southern summer.
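
The imaging geometry quoted above is internally consistent and can be checked with two lines of arithmetic (the roughly 3-pixel resolution criterion is the one stated in the caption):

```python
# Sun elevation is the complement of the solar incidence angle,
# and about 3 pixels are needed to resolve an object.
incidence_deg = 59.7
sun_elevation_deg = 90.0 - incidence_deg      # 30.3 degrees above horizon
pixel_scale_cm = 25.0
resolved_object_cm = 3 * pixel_scale_cm       # 75 cm, as quoted
```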

    [Photojournal note: Due to the large sizes of the high-resolution TIFF and JPEG files, some systems may experience extremely slow downlink time while viewing or downloading these images; some systems may be incapable of handling the download entirely.]

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The HiRISE camera was built by Ball Aerospace & Technologies Corporation, Boulder, Colo., and is operated by the University of Arizona, Tucson.

  18. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions, by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
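
The flat-field correction step described above, dividing the signal by the normalized flat-field response, can be sketched as follows. This is a schematic of the correction only, not the patented system's code; geometric distortion correction would be applied to both data sets first:

```python
import numpy as np

def flat_field_correct(signal, flat):
    """Divide the signal by the flat-field response normalized to its
    mean, so a spatially uniform input maps to a uniform output while
    the overall signal level is preserved."""
    flat = np.asarray(flat, dtype=float)
    norm = flat / flat.mean()          # dimensionless sensitivity map
    return np.asarray(signal, dtype=float) / norm
```

Applying the correction to the flat-field record itself ("self-correction") should yield a uniform frame, a useful sanity check on the pipeline.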

  19. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    Microsoft Academic Search

    Stanislaw Majewski; Marc M. Umeno

    2011-01-01

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the

  20. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  1. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  2. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera

    PubMed Central

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers the evaluation of the camera warm-up period, the distance measurement error, and the influence of the camera orientation with respect to the observed object on the distance measurements. The second is the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets. PMID:22303163

  3. Two Years of Digital Terrain Model Production Using the Lunar Reconnaissance Orbiter Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Burns, K.; Robinson, M. S.; Speyerer, E.; LROC Science Team

    2011-12-01

One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to gather stereo observations with the Narrow Angle Camera (NAC). These stereo observations are used to generate digital terrain models (DTMs). The NAC has a pixel scale of 0.5 to 2.0 meters but was not designed for stereo observations and thus requires the spacecraft to roll off-nadir to acquire these images. Slews interfere with the data collection of the other instruments, so opportunities are currently limited to four per day. Arizona State University has produced DTMs from 95 stereo pairs for 11 Constellation Project (CxP) sites (Aristarchus, Copernicus crater, Gruithuisen domes, Hortensius domes, Ina D-caldera, Lichtenberg crater, Mare Ingenii, Marius hills, Reiner Gamma, South Pole-Aitken Rim, Sulpicius Gallus) as well as 30 other regions of scientific interest (including: Bhabha crater, highest and lowest elevation points, Highland Ponds, Kugler Anuchin, Linne Crater, Planck Crater, Slipher crater, Sears Crater, Mandel'shtam Crater, Virtanen Graben, Compton/Belkovich, Rumker Domes, King Crater, Luna 16/20/23/24 landing sites, Ranger 6 landing site, Wiener F Crater, Apollo 11/14/15/17, fresh craters, impact melt flows, Larmor Q crater, Mare Tranquillitatis pit, Hansteen Alpha, Moore F Crater, and Lassell Massif). To generate DTMs, the USGS ISIS software and SOCET SET from BAE Systems are used. To increase the absolute accuracy of the DTMs, data obtained from the Lunar Orbiter Laser Altimeter (LOLA) are used to coregister the NAC images and define the geodetic reference frame. NAC DTMs have been used in the examination of several sites, e.g., Compton-Belkovich, Marius Hills, and Ina D-caldera [1-3]. LROC will continue to acquire high-resolution stereo images throughout the science phase of the mission and any extended mission opportunities, thus providing a vital dataset for scientific research as well as future human and robotic exploration. [1] B.L. Jolliff (2011) Nature Geoscience, in press. [2] Lawrence et al. (2011) LPSC XLII, Abst. 2228. [3] Garry et al. (2011) LPSC XLII, Abst. 2605.

  4. A camera for imaging hard x-rays from suprathermal electrons during lower hybrid current drive on PBX-M

    SciTech Connect

    von Goeler, S.; Kaita, R.; Bernabei, S.; Davis, W.; Fishman, H.; Gettelfinger, G.; Ignat, D.; Roney, P.; Stevens, J.; Stodiek, W. [Princeton Univ., NJ (United States). Plasma Physics Lab.; Jones, S.; Paoletti, F. [Massachusetts Inst. of Tech., Cambridge, MA (United States). Plasma Fusion Center; Petravich, G. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics; Rimini, F. [JET Joint Undertaking, Abingdon (United Kingdom)

    1993-05-01

During lower hybrid current drive (LHCD), suprathermal electrons are generated that emit hard X-ray bremsstrahlung. A pinhole camera has been installed on the PBX-M tokamak that records 128 × 128 pixel images of the bremsstrahlung with a 3 ms time resolution. This camera has identified hollow radiation profiles on PBX-M, indicating off-axis current drive. The detector is a 9 in. diameter intensifier. A detailed account of the construction of the Hard X-ray Camera, its operation, and its performance is given.

  6. Improved Digitization of Lunar Mare Ridges with LROC Derived Products

    NASA Astrophysics Data System (ADS)

    Crowell, J. M.; Robinson, M. S.; Watters, T. R.; Bowman-Cisneros, E.; Enns, A. C.; Lawrence, S.

    2011-12-01

    Lunar wrinkle ridges (mare ridges) are positive-relief structures formed from compressional stress in basin-filling flood basalt deposits [1]. Previous workers have measured wrinkle ridge orientations and lengths to investigate their spatial distribution and infer basin-localized stress fields [2,3]. Although these plots include the most prominent mare ridges and their general trends, they may not have fully captured all of the ridges, particularly the smaller-scale ridges. Using Lunar Reconnaissance Orbiter Wide Angle Camera (WAC) global mosaics and derived topography (100m pixel scale) [4], we systematically remapped wrinkle ridges in Mare Serenitatis. By comparing two WAC mosaics with different lighting geometry, and shaded relief maps made from a WAC digital elevation model (DEM) [5], we observed that some ridge segments and some smaller ridges are not visible in previous structure maps [2,3]. In the past, mapping efforts were limited by a fixed Sun direction [6,7]. For systematic mapping we created three shaded relief maps from the WAC DEM with solar azimuth angles of 0°, 45°, and 90°, and a fourth map was created by combining the three shaded reliefs into one, using a simple averaging scheme. Along with the original WAC mosaic and the WAC DEM, these four datasets were imported into ArcGIS, and the mare ridges of Imbrium, Serenitatis, and Tranquillitatis were digitized from each of the six maps. Since the mare ridges are often divided into many ridge segments [8], each major component was digitized separately, as opposed to the ridge as a whole. This strategy enhanced our ability to analyze the lengths, orientations, and abundances of these ridges. After the initial mapping was completed, the six products were viewed together to identify and resolve discrepancies in order to produce a final wrinkle ridge map. Comparing this new mare ridge map with past lunar tectonic maps, we found that many mare ridges were not recorded in the previous works. 
It was noted that in some cases the lengths and orientations of previously digitized ridges differed from those of the ridges digitized in this study. This method of multi-map digitizing allows for greater accuracy in the spatial characterization of mare ridges than previous methods. We intend to map mare ridges on a global scale, creating a more comprehensive ridge map from these higher-resolution data. References Cited: [1] Schultz P.H. (1976) Moon Morphology, 308. [2] Wilhelms D.E. (1987) USGS Prof. Paper 1348, 5A-B. [3] Carr, M.H. (1966) USGS Geologic Atlas of the Moon, I-498. [4] Robinson M.S. (2010) Space Sci. Rev., 150:82. [5] Scholten F. et al. (2011) LPSC XLII, 2046. [6] Fielder G. and Kiang T. (1962) The Observatory: No. 926, 8. [7] Watters T.R. and Konopliv A.S. (2001) Planetary and Space Sci., 49, 743-748. [8] Aubele J.C. (1988) LPSC XIX, 19.
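    The "simple averaging scheme" described above, which combines hillshades computed at 0°, 45°, and 90° solar azimuth into one map, can be sketched as follows. This is a minimal illustration assuming a standard Lambertian hillshade; the function names and parameter defaults are ours, not from the study.

```python
import numpy as np

def hillshade(dem, azimuth_deg, altitude_deg=45.0, cellsize=100.0):
    """Lambertian hillshade of a DEM; returns values clipped to [0, 1]."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass -> math convention
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cellsize)      # axis 0 = rows (y), axis 1 = cols (x)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

def combined_shade(dem, azimuths=(0.0, 45.0, 90.0)):
    """Combine hillshades from several solar azimuths by simple averaging,
    so ridges of any orientation remain visible in a single map."""
    return np.mean([hillshade(dem, a) for a in azimuths], axis=0)
```

    Digitizing from the averaged map alongside the three individual azimuth maps is what lets ridge segments of different orientations be captured consistently.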

  7. Control design for image tracking with an inertially stabilized airborne camera platform

    NASA Astrophysics Data System (ADS)

    Hurák, Zdeněk; Řezáč, Martin

    2010-04-01

    The paper reports on a few control engineering issues related to the design and implementation of an image-based pointing and tracking system for an inertially stabilized airborne camera platform. A medium-sized platform has been developed by the authors and a few more team members within a joint governmental project coordinated by the Czech Air Force Research Institute. The resulting experimental platform is based on a common double gimbal configuration with two direct drive motors and off-the-shelf MEMS gyros. An automatic vision-based tracking system is built on top of the inertial stabilization. The choice of a suitable control configuration is discussed first, because the decoupled structure for the inner inertial rate controllers does not extend easily to the outer image-based pointing and tracking loop. It appears that the pointing and tracking controller can benefit greatly from the availability of measurements of the inertial rate of the camera around its optical axis. The proposed pointing and tracking controller relies on feedback linearization, well known in image-based visual servoing. A simple compensation of the one-sample delay introduced into the (slow) visual pointing and tracking loop by the computer vision system is proposed. It relies on a simple modification of the well-known Smith predictor scheme, where the prediction takes advantage of the (fast and undelayed) inertial rate measurements.
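    The delay-compensation idea can be illustrated with a toy one-axis loop: the vision system reports the pointing error one sample late, and the predictor advances that stale error using the fast gyro rate. This is a hedged sketch, not the authors' controller; the gain, sample time, and the idealized inner rate loop are assumptions for illustration only.

```python
def pointing_error(n_steps=200, Ts=0.02, kp=1.5, target=1.0, predict=True):
    """Toy 1-axis pointing loop with a one-sample delay in the visual
    error measurement, optionally compensated Smith-predictor style."""
    theta = 0.0          # camera pointing angle
    rate = 0.0           # inertial rate (inner rate loop assumed ideal)
    delayed_err = 0.0    # vision output: error measured one sample ago
    errs = []
    for _ in range(n_steps):
        meas = delayed_err - (rate * Ts if predict else 0.0)
        rate = kp * meas               # outer loop commands the rate loop
        delayed_err = target - theta   # stored now, delivered next sample
        theta += rate * Ts             # camera angle integrates the rate
        errs.append(abs(target - theta))
    return errs
```

    With `predict=True` the stale visual error is corrected by the motion that occurred during the delay, so the outer loop effectively acts on the current error.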

  8. Ultra-fast MTF Test for High-Volume production of CMOS Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Dahl, Michael; Heinisch, Josef; Krey, Stefan; Bäumer, Stefan M.; Lurquin, Johan; Chen, Linghua

    2004-01-01

    In recent years, compact CMOS imaging cameras have grown into high-volume applications such as mobile phones, PDAs, etc. To ensure a constant quality of the camera lenses, MTF is used as a figure of merit. MTF is a polychromatic, objective test of imaging lens quality that includes diffraction effects, system aberrations and surface defects as well. The drawback of MTF testing is that a proper measurement of the lens MTF is quite cumbersome and time consuming. In the current investigation we designed, produced and tested a new semi-automated MTF setup that is able to measure the polychromatic lens system MTF at 6 or more field points at best focus in less than 6 seconds. The computed MTF is a real diffraction MTF derived from a line spread function (not merely a contrast measurement). This enables lens manufacturers to perform 100% MTF testing even in high-volume applications. Using statistical tools to analyze the data also makes it possible to find even small systematic errors in production, such as shift or tilt of lenses and lens elements. Using this as feedback, the quality of the product can be increased. The system is very compact and can be placed easily in an assembly line. Besides the design and testing of the MTF setup, correlation experiments between several testers have been carried out. A correlation of better than 6 percentage points for all tested systems at all fields has been achieved.
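    Deriving MTF from a measured line spread function, as the abstract describes, is in essence a normalized Fourier transform magnitude. A minimal sketch (the function name and the pixel pitch are illustrative assumptions, not details from the paper):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm=0.005):
    """MTF as the normalized magnitude of the Fourier transform of the
    measured line spread function (LSF). Returns (frequencies in
    cycles/mm, MTF values)."""
    lsf = np.asarray(lsf, dtype=float)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                        # enforce MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf
```

    Because the LSF is measured directly, this captures diffraction, aberrations, and defects together, unlike a bar-target contrast measurement at a few fixed frequencies.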

  9. Defect imaging in multicrystalline silicon using a lock-in infrared camera technique

    NASA Astrophysics Data System (ADS)

    Pohl, Peter; Schmidt, Jan; Schmiga, Christian; Brendel, Rolf

    2007-04-01

    We image the lifetime distribution of multicrystalline silicon wafers by means of calibrated measurements of the free-carrier emission using an infrared camera. The spatially resolved lifetime measurements are performed as a function of the light-generated excess carrier density, showing a pronounced increase in lifetime with decreasing injection density at very low injection levels. Two theoretical models are applied to describe the abnormal lifetime increase: (i) minority-carrier trapping and (ii) depletion region modulation around charged bulk defects. The trapping model is found to give better agreement with the experimental data. By fitting the trapping model to each point of the lifetime image recorded at different injection levels, we generate a trap density mapping. On multicrystalline silicon wafers we find a clear correlation between trap and dislocation density mappings.

  10. Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera

    NASA Astrophysics Data System (ADS)

    Peric, Dragana; Lukic, Vojislav; Spanovic, Milana; Sekulic, Radmila; Kocic, Jelena

    2014-10-01

    A calibration platform for the geometric calibration of a multi-sensor image fusion system is presented in this paper. Accurate geometric calibration of the extrinsic geometric parameters of the cameras using a planar calibration pattern is applied. Specific software was developed for the calibration procedure. The patterns used in geometric calibration are prepared with the aim of obtaining maximum contrast in both the visible and infrared spectral ranges - chessboards whose fields are made of materials of different emissivity. Experiments were held in both indoor and outdoor scenarios. Important results of the geometric calibration of the multi-sensor image fusion system are the extrinsic parameters, in the form of homography matrices used for the homography transformation of the object plane to the image plane. For each camera a corresponding homography matrix is calculated. These matrices can be used for registration of images from the thermal and low-light cameras. We implemented such an image registration algorithm to confirm the accuracy of the geometric calibration procedure in the multi-sensor image fusion system. Results are given for the selected patterns - chessboards with fields made of different emissivity materials. For the final image registration algorithm in a surveillance system for object tracking, we have chosen a multi-resolution image registration algorithm which naturally combines with a pyramidal fusion scheme. The image pyramids that are generated at each time step of the image registration algorithm may be reused at the fusion stage, so that the overall number of calculations that must be performed is greatly reduced.
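    At the core of this registration is applying a 3x3 homography to pixel coordinates, mapping points from one camera's image plane to the other's. A minimal sketch of that projective point mapping (an illustrative helper, not the authors' software; in practice a library routine such as OpenCV's `warpPerspective` would warp whole images):

```python
import numpy as np

def apply_homography(H, pts):
    """Map an (N, 2) array of pixel coordinates through a 3x3 homography H.
    Points are lifted to homogeneous coordinates, transformed, then
    de-homogenized by dividing by the third component."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```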

  11. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw (Morgantown, VA); Umeno, Marc M. (Woodinville, WA)

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding-pixel to increase the signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies, blood-pool studies including planar, gated and non-gated EKG studies, planar EKG perfusion studies, and planar hot spot imaging.
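    The pixel-by-corresponding-pixel normalization/fusion step might be sketched as below. This is a simplified guess at the scheme, assuming each head's view is first corrected by a measured uniformity (flood-field) map and the two co-registered views for one time bin are then summed pixel-wise; the patent may normalize differently.

```python
import numpy as np

def fuse_views(img_a, img_b, uniformity_a, uniformity_b):
    """Correct each head's image by its uniformity map, then sum the
    corresponding pixels of the two co-registered views for one time bin,
    increasing the signal for any cardiac region seen by both heads."""
    eps = 1e-9  # guard against division by zero in dead pixels
    norm_a = img_a / np.maximum(uniformity_a, eps)
    norm_b = img_b / np.maximum(uniformity_b, eps)
    return norm_a + norm_b
```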

  12. Performance of the Aspect Camera Assembly for the Advanced X-Ray Astrophysics Facility: Imaging

    NASA Technical Reports Server (NTRS)

    Michaels, Dan

    1998-01-01

    The Aspect Camera Assembly (ACA) is a "state-of-the-art" star tracker that provides real-time attitude information to the Advanced X-Ray Astrophysics Facility - Imaging (AXAF-I), and provides imaging data for "post-facto" ground processing. The ACA consists of a telescope with a CCD focal plane, associated focal plane read-out electronics, and an on-board processor that processes the focal plane data to produce star image location reports. On-board star image locations are resolved to 0.8 arcsec, and post-facto algorithms yield 0.2 arcsec star location accuracies (at end of life). The protoflight ACA has been built, along with a high accuracy vacuum test facility. Image position determination has been verified to < 0.2 arcsec accuracies. This paper is a follow-on to one presented by the author at the AeroSense '95 conference. This paper presents the "as built" configuration, the tested performance, and the test facility's design and demonstrated accuracy. The ACA has been delivered in anticipation of an August 1998 shuttle launch.

  13. Self-coherent camera: first results of a high-contrast imaging bench in visible light

    NASA Astrophysics Data System (ADS)

    Mas, Marion; Baudoz, Pierre; Rousset, Gérard; Galicher, Raphaël; Baudrand, Jacques

    2010-07-01

    Extreme adaptive optics and coronagraphy are mandatory for direct imaging of exoplanets. Quasi-static aberrations limit the instrument performance, producing speckle noise in the focal plane. We propose a Self-Coherent Camera (SCC) to both control a deformable mirror that actively compensates wavefront error and calibrate the speckle noise. We create a reference beam to spatially modulate the coronagraphic speckle pattern with Fizeau fringes. In a first step, we are able to extract wavefront aberrations from the science image and correct for them using a deformable mirror. In a second step, we apply a post-processing algorithm to discriminate the companion image from the residual speckle field. To validate the instrumental concept, we developed a high contrast imaging bench in visible light. We associated an SCC with a four-quadrant phase mask coronagraph and a deformable mirror (DM) with a high number of actuators (32x32 Boston Micromachines MEMS). We will present this bench and show first experimental results of focal plane wavefront sensing and high contrast imaging. The measurements are compared to numerical simulations.

  14. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system consisting of two unattached cameras to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, more suitable for this kind of camera movement, was applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
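    The triangulation step, recovering a 3D point from its matched projections in two views with known projection matrices, is typically done with the linear (DLT) method; the paper does not specify its variant, so the sketch below is one standard choice, not necessarily the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: matched pixel coords (x, y).
    Each view contributes two rows to a homogeneous system AX = 0,
    solved in the least-squares sense via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize
```

    In the incremental pipeline this is applied to every new match between the inserted image and the previous one, with a local bundle adjustment refining the results afterwards.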

  15. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656

  16. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data acquisition chips can be used to collect more detailed and comprehensive light field information. By using multiple single-aperture, high-resolution sensors to record light field data and processing the light field data in real time, we can obtain a wide field-of-view (FOV), high-resolution image. Wide-FOV, high-resolution imaging has promising applications in areas of navigation, surveillance and robotics. Quality-enhanced 3D rendering, very high resolution depth map estimation, high dynamic range and other applications can be obtained by post-processing these large light field data. FOV and resolution are contradictory in a traditional single-aperture optical imaging system and cannot be reconciled very well. We have designed a multi-camera light field data acquisition system and optimized each sensor's spatial location and relations. It can be used for wide-FOV, high-resolution real-time imaging. We use 5-megapixel CMOS sensors, and a Field Programmable Gate Array (FPGA) acquires the light field data, processes it in parallel and transmits it to a PC. A common clock signal is distributed to all of the cameras, and the synchronization precision achieved for each camera is 40 ns. Using 9 CMOS sensors, we built an initial system and obtained a high-resolution 360°×60° FOV image. The system is intended to be flexible, modular and scalable, with much visibility of and control over the cameras. For system data transfer we used the dedicated high-speed camera interface Camera Link. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  17. Calibration and development for increased accuracy of 3D range imaging cameras

    NASA Astrophysics Data System (ADS)

    Kahlmann, Timo; Ingensand, Hilmar

    2008-04-01

    Range imaging has become a valuable technology for many kinds of applications in recent years. Numerous systematic deviations occur during the measurement process carried out by available systems. These systematic effects arise partly from external and partly from internal influences. In this paper the following investigations are presented in closer detail. First, the statistics of the distance measurement of the analyzed range imaging cameras SwissRanger™ SR-2 and SR-3000 are shown. Besides the question of whether the measurements are Gaussian distributed, the precision of the measurements is shown. This aspect is important for answering the question of whether the mean value of a series of measurements leads to more precise data. Second, diverse influencing parameters such as the target's reflectivity and the external as well as internal temperature are examined. The dependency of the distance measurements on the amplitude is one of the main aspects of this paper. A specialized setup has been developed in order to derive the detailed correlation experimentally, which is expressed in terms of linearity deviations. Besides the results of some specific aspects, an overview of the recommended calibration procedure is given. The reader of this paper will be enabled to understand the calibration steps needed to gain highly accurate data from the investigated range imaging cameras. Because range imaging cameras are on their way to becoming state of the art in 3D capture of the environment, it is important to develop strategies for the calibration of such sensors so that users can rely on them; such strategies call for sophisticated approaches and reliable investigation results. This paper introduces such an approach to be discussed within the scientific and user communities.
One of the main achievements of this work is the introduction of a method to significantly decrease the influence of temperature on the distance measurements by means of a differential measurement setup. The verification of its functionality is presented as well.

  18. Experimental Comparison of the High-Speed Imaging Performance of an EM-CCD and sCMOS Camera in a Dynamic Live-Cell Imaging Test Case

    PubMed Central

    Beier, Hope T.; Ibey, Bennett L.

    2014-01-01

    The study of living cells may require advanced imaging techniques to track weak and rapidly changing signals. Fundamental to this need is the recent advancement in camera technology. Two camera types, specifically sCMOS and EM-CCD, promise both high signal-to-noise and high speed (>100 fps), leaving researchers with a critical decision when determining the best technology for their application. In this article, we compare two cameras using a live-cell imaging test case in which small changes in cellular fluorescence must be rapidly detected with high spatial resolution. The EM-CCD maintained an advantage of being able to acquire discernible images with a lower number of photons due to its EM-enhancement. However, if high-resolution images at speeds approaching or exceeding 1000 fps are desired, the flexibility of the full-frame imaging capabilities of sCMOS is superior. PMID:24404178
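    The trade-off the article reports follows from the standard per-pixel SNR models of the two architectures: EM gain suppresses read noise but adds an excess-noise factor, while sCMOS has low read noise and no excess noise. A hedged sketch; all sensor parameter values below are illustrative defaults, not the cameras tested in the article.

```python
import math

def snr_emccd(photons, qe=0.9, read_noise=60.0, em_gain=300.0,
              excess=math.sqrt(2)):
    """Per-pixel SNR of an EM-CCD: EM gain divides the read noise, but
    the multiplication register adds an excess-noise factor ~ sqrt(2)."""
    s = qe * photons
    return s / math.sqrt(excess**2 * s + (read_noise / em_gain) ** 2)

def snr_scmos(photons, qe=0.7, read_noise=1.5):
    """Per-pixel SNR of an sCMOS sensor: low read noise, no excess noise."""
    s = qe * photons
    return s / math.sqrt(s + read_noise**2)
```

    With these illustrative numbers the EM-CCD wins at very low photon counts (read noise dominates) and the sCMOS wins once shot noise dominates, matching the article's conclusion.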

  19. Automated co-registration of images from multiple bands of Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Nagasubramanian, V.; Varadan, Geeta

    Three multi-spectral bands of the Liss-4 camera of IRS-P6 satellite are physically separated in the focal plane in the along-track direction. The time separation of 2.1 s between the acquisition of first and last bands causes scan lines acquired by different bands to lie along different lines on the ground which are not parallel. Therefore, the raw images of multi-spectral bands need to be registered prior to any simple application like data visualization. This paper describes a method for co-registration of multiple bands of Liss-4 camera through photogrammetric means using the collinearity equations. A trajectory fit using the given ephemeris and attitude data, followed by direct georeferencing is being employed in this model. It is also augmented with a public domain DEM for the terrain dependent input to the model. Finer offsets after the application of this parametric technique are addressed by matching a small subsection of the bands (100×100 pixels) using an image-based method. Resampling is done by going back to original raw data when creating the product after refining image coordinates with the offsets. Two types of aligned products are defined in this paper and their operational flow is described. Datasets covering different types of terrain and also viewed with different geometries are studied with extensive number of points. The band-to-band registration (BBR) accuracies are reported. The algorithm described in this paper for co-registration of Liss-4 bands is an integral part of the software package Value Added Products generation System (VAPS) for operational generation of IRS-P6 data products.

  20. Imaging Early Demineralization on Tooth Occlusal Surfaces with a High Definition InGaAs Camera

    PubMed Central

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

    2013-01-01

    In vivo and in vitro studies have shown that high-contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with that in the visible, the near-IR at 1300-nm, and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1- and 2-day exposures to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at 3 wavelengths, 1310, 1460 and 1600-nm, using a high definition InGaAs camera. Visible light reflectance and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions. PMID:24357911

  1. Performance assessment of a slat gamma camera collimator for 511 keV imaging.

    PubMed

    Britten, A J; Klie, R

    1999-07-01

    The physical performance of a prototype slat collimator is described for gamma camera planar imaging at 511 keV. Measurements were made of sensitivity, spatial resolution and a septal penetration index at 511 keV. These measurements were repeated with a commercial parallel hole collimator designed for 511 keV imaging. The slat collimator sensitivity was 22.9 times that of the parallel hole collimator with 10 cm tissue equivalent scatter material, and 16.8 times the parallel hole collimator sensitivity in air. Spatial resolution was also better for the slat collimator than the parallel hole collimator (FWHM at 10 cm in air 17.9 mm and 21.2 mm respectively). Septal penetration was compared by a single value for the counts at 120 mm from the point source profile peak, expressed as a percentage of the peak counts, showing less penetration for the slat collimator than the parallel hole collimator (1.9% versus 3.6% respectively). In conclusion, these results show that the slat collimator may have advantages over the parallel hole collimator for 511 keV imaging, though the greater complexity of operation of the slat collimator and potential sources of artefact in slat collimator imaging are recognized. PMID:10442709

  2. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped on a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%.
However, as this feature is well understood it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.

  3. A Novel Method of Object Detection from a Moving Camera Based on Image Matching and Frame Coupling

    PubMed Central

    Chen, Yong; Zhang, Rong hua; Shang, Lei

    2014-01-01

    A new method based on image matching and frame coupling to handle the problems of object detection caused by a moving camera and object motion is presented in this paper. First, feature points are extracted from each frame. Then, motion parameters can be obtained. Sub-images are extracted from the corresponding frame via these motion parameters. Furthermore, a novel searching method for potential orientations improves efficiency and accuracy. Finally, a method based on frame coupling is adopted, which improves the accuracy of object detection. The results demonstrate the effectiveness and feasibility of our proposed method for a moving object with changing posture and with a moving camera. PMID:25354301

  4. ANTS — a simulation package for secondary scintillation Anger-camera type detector in thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Van Esch, P.; Zeitelhack, K.

    2012-08-01

    A custom and fully interactive simulation package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations) has been developed to optimize the design and operation conditions of secondary scintillation Anger-camera type gaseous detectors for thermal neutron imaging. The simulation code accounts for all physical processes related to the neutron capture, energy deposition pattern, drift of electrons of the primary ionization and secondary scintillation. The photons are traced considering the wavelength-resolved refraction and transmission of the output window. Photo-detection accounts for the wavelength-resolved quantum efficiency, angular response, area sensitivity, gain and single-photoelectron spectra of the photomultipliers (PMTs). The package allows for several geometrical shapes of the PMT photocathode (round, hexagonal and square) and offers a flexible PMT array configuration: up to 100 PMTs in a custom arrangement with the square or hexagonal packing. Several read-out patterns of the PMT array are implemented. Reconstruction of the neutron capture position (projection on the plane of the light emission) is performed using the center of gravity, maximum likelihood or weighted least squares algorithm. Simulation results reproduce well the preliminary results obtained with a small-scale detector prototype. ANTS executables can be downloaded from http://coimbra.lip.pt/~andrei/.
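
    As a rough sketch of the simplest of the three reconstruction options listed above, the center-of-gravity estimate weights each PMT position by its measured signal (the PMT layout and signal values below are illustrative, not from ANTS):

```python
import numpy as np

def center_of_gravity(pmt_xy, signals):
    """Estimate the light-emission position (projected onto the PMT
    plane) as the signal-weighted centroid of the PMT positions."""
    pmt_xy = np.asarray(pmt_xy, dtype=float)
    signals = np.asarray(signals, dtype=float)
    return (signals[:, None] * pmt_xy).sum(axis=0) / signals.sum()

# 2x2 PMT arrangement; a flash nearest the PMT at (1, 1) gives it the
# largest signal, pulling the centroid toward that corner.
pmts = [(0, 0), (1, 0), (0, 1), (1, 1)]
x, y = center_of_gravity(pmts, [1.0, 2.0, 2.0, 4.0])
```

    The maximum-likelihood and weighted-least-squares options mentioned in the abstract refine this first guess using the per-PMT response models.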

  5. Human detection based on the generation of a background image by using a far-infrared light camera.

    PubMed

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering; additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. 
Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective projection. Experimental results with two types of databases confirm that the proposed method outperforms other methods. PMID:25808774
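
    A minimal sketch of the first two steps, background generation and adaptive pixel-difference thresholding. The combination rule and the scale factor `k` are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def background_from_stack(frames):
    """Generate one background image by combining median and average
    filtering over a stack of frames (a simplified stand-in for the
    paper's multi-step background generation)."""
    frames = np.asarray(frames, dtype=float)
    return 0.5 * (np.median(frames, axis=0) + np.mean(frames, axis=0))

def candidate_mask(frame, background, k=0.5):
    """Pixel-difference candidates with a threshold scaled by the
    background brightness (the adaptive-threshold idea; k is an
    illustrative constant)."""
    thresh = k * background.mean()
    return np.abs(np.asarray(frame, dtype=float) - background) > thresh

# A warm 'human' blob on a cool background stands out in the mask.
stack = [np.full((8, 8), 10.0) for _ in range(5)]
bg = background_from_stack(stack)
frame = np.full((8, 8), 10.0)
frame[3:5, 3:5] = 40.0
mask = candidate_mask(frame, bg)
```

    The later steps (edge differencing, component labeling, histogram-based merging) would then operate on this candidate mask.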

  6. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G. [Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly parallel to the image plane. This effect decreases the sum of the image, thereby also affecting the mean, standard deviation, and SNR of the image. All back-projected events associated with a simulated point source intersected the voxel containing the source and the FWHM of the back-projected image was similar to that obtained from the marching method. Conclusions: The slight deficit to image quality observed with the threshold-based back-projection algorithm described here is outweighed by the 75% reduction in computation time. The implementation of this method requires the development of an optimum threshold function, which determines the overall accuracy of the method. This makes the algorithm well-suited to applications involving the reconstruction of many large images, where the time invested in threshold development is offset by the decreased image reconstruction time. Implemented in a parallel-computing environment, the threshold-based algorithm has the potential to provide real-time dose verification for radiation therapy.
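
    The per-slice cone back-projection can be sketched as follows: score each pixel of an image plane by how far its direction from the cone apex deviates from the cone half-angle, then threshold to obtain the binary intersection curve. This is a simplified stand-in for the solution matrix and optimized threshold function described above; `tol` is illustrative.

```python
import numpy as np

def backproject_slice(apex, axis, half_angle, xs, ys, z, tol=0.05):
    """Binary intersection of a Compton cone with the image plane at
    height z: pixels whose direction from the apex makes an angle
    close to the cone half-angle are marked True."""
    X, Y = np.meshgrid(xs, ys)
    P = np.stack([X, Y, np.full_like(X, z)], axis=-1) - np.asarray(apex, dtype=float)
    norm = np.linalg.norm(P, axis=-1)
    cos_angle = P @ np.asarray(axis, dtype=float) / norm
    return np.abs(cos_angle - np.cos(half_angle)) < tol

# Cone apex above the plane, axis pointing straight down: the
# intersection with the plane z = 0 is the unit circle around (0, 0).
xs = ys = np.linspace(-2, 2, 81)
mask = backproject_slice(apex=(0, 0, 1), axis=(0, 0, -1),
                         half_angle=np.pi / 4, xs=xs, ys=ys, z=0.0)
```

    Summing such masks over all detector events yields the three-dimensional back-projection image used as the starting point for reconstruction.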

  7. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), show that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting processes [11], changes of the polar caps [12] and impact cratering processes [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can have effects on the global heat balance and the circulation of winds, which can result in further surface changes [14,15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with focus on the surface morphology, geology and mineralogy, the role of liquid water on the surface and in the atmosphere, and volcanism, as well as on the proposed climate change throughout Martian history, and it has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data. 
However, while previews of the images are available, there is no way to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting the surface changes that occurred between two or more images.

  8. Strong Lensing Analysis of A1689 from Deep Advanced Camera Images

    E-print Network

    Tom Broadhurst; Narciso Benitez; Dan Coe; Keren Sharon; Kerry Zekser; Rick White; Holland Ford; Rychard Bouwens; the ACS team

    2004-09-06

    We analyse deep multi-colour Advanced Camera images of the largest known gravitational lens, A1689. Radial and tangential arcs delineate the critical curves in unprecedented detail and many small counter-images are found near the center of mass. We construct a flexible light deflection field to predict the appearance and positions of counter-images. The model is refined as new counter-images are identified and incorporated, yielding a total of 106 images of 30 multiply lensed background galaxies, spanning a wide redshift range, 1.0 < z < 5.5. The resulting mass map is more circular in projection than the clumpy distribution of cluster galaxies, and the light is more concentrated than the mass within r < 50 kpc/h. The projected mass profile flattens steadily towards the center with a shallow mean slope of d log Σ / d log r ≈ -0.55 ± 0.1 over the observed range, r < 250 kpc/h, matching well an NFW profile, but with a relatively high concentration, C_vir = 8.2 (+2.1, -1.8). A softened isothermal profile (r_core = 20 ± 2 arcsec) is not conclusively excluded, illustrating that lensing constrains only projected quantities. Regarding cosmology, we clearly detect the purely geometric increase of bend-angles with redshift. The dependence on the cosmological parameters is weak due to the proximity of A1689, z = 0.18, constraining the locus Ω_M + Ω_Λ ≤ 1.2. This consistency with standard cosmology provides independent support for our model, because the redshift information is not required to derive an accurate mass map. Similarly, the relative fluxes of the multiple images are reproduced well by our best-fitting lens model.

  9. Computer-vision-based weed identification of images acquired by 3CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; He, Yong; Fang, Hui

    2006-09-01

    Selective application of herbicide to weeds at an early stage of crop growth is an important aspect of site-specific management of field crops. To develop more adaptive on-line weed detection applications, many researchers are studying image processing techniques for the intensive computation and feature extraction tasks needed to distinguish weeds from crops and soil background. This paper investigated the potential of digital images acquired by the MegaPlus(TM) MS3100 3-CCD camera for segmenting the background soil from the plants in question and further recognizing weeds from the crops, using the Matlab script language. The near-infrared waveband image (center 800 nm; width 65 nm) was selected principally for segmenting soil, and cotton was then distinguished from thistles based on their respective relative areas (pixel counts) in the whole image. The results show adequate recognition: the pixel proportions of soil, cotton leaves and thistle leaves were 78.24% (-0.20% deviation), 16.66% (+2.71% SD) and 4.68% (-4.19% SD), respectively. However, problems remain in separating and locating single plants because of their clustering in the images. The information in the images acquired via the other two channels, i.e., the green and red bands, needs to be extracted to aid crop/weed discrimination. More optical specimens should be acquired for calibration and validation to establish a weed-detection model that can be effectively applied in the field.
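
    A minimal sketch of the segmentation-by-relative-area idea, assuming a simple fixed threshold on the NIR band (the paper's actual threshold selection is not described here, and the image and threshold below are illustrative):

```python
import numpy as np

def segment_proportions(nir, plant_threshold):
    """Segment soil from plants in a near-infrared band image by
    thresholding (vegetation reflects strongly in NIR), then report
    the pixel proportion of each class."""
    nir = np.asarray(nir, dtype=float)
    plant = nir > plant_threshold
    total = nir.size
    return {"soil": (~plant).sum() / total, "plant": plant.sum() / total}

# Toy 4x4 'image': 12 dark soil pixels, 4 bright plant pixels.
img = np.full((4, 4), 20.0)
img[0, :] = 200.0
props = segment_proportions(img, plant_threshold=100.0)
```

    Distinguishing cotton from thistles would then compare the relative areas of connected plant regions, which is where the clustering problem noted above arises.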

  10. A Powerful New Imager for HST: Performance and Early Science Results from Wide Field Camera 3

    NASA Technical Reports Server (NTRS)

    Kimble, Randy

    2009-01-01

    Wide Field Camera 3 (WFC3) was installed into the Hubble Space Telescope during the highly successful Servicing Mission 4 in May, 2009. WFC3 offers sensitive, high resolution imaging over a broad wavelength range from the near UV through the visible to the near IR (200nm - 1700nm). Its capabilities in the near UV and near IR ends of that range represent particularly large advances vs. those of previous HST instruments. In this talk, I will review the purpose and design of the instrument, describe its performance in flight, and highlight some of the initial scientific results from the instrument, including its use in deep infrared surveys in search of galaxies at very high redshift, in investigations of the global processes of star formation in nearby galaxies, and in the study of the recent impact on Jupiter.

  11. Results of shuttle EMU thermal vacuum tests incorporating an infrared imaging camera data acquisition system

    NASA Technical Reports Server (NTRS)

    Anderson, James E.; Tepper, Edward H.; Trevino, Louis A.

    1991-01-01

    Manned tests in Chamber B at NASA JSC were conducted in May and June of 1990 to better quantify the Space Shuttle Extravehicular Mobility Unit's (EMU) thermal performance in the cold environmental extremes of space. Use of an infrared imaging camera with real-time video monitoring of the output significantly added to the scope, quality and interpretation of the test conduct and data acquisition. Results of this test program have been effective in the thermal certification of a new insulation configuration and the '5000 Series' glove. In addition, the acceptable thermal performance of flight garments with visually deteriorated insulation was successfully demonstrated, thereby saving significant inspection and garment replacement cost. This test program also established a new method for collecting data vital to improving crew thermal comfort in a cold environment.

  12. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 µm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity at different locations on the LCD display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For the NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency is lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. 
Attempts were also made to analyze the total noise in terms of spatial and temporal noise by applying subtractions of images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
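
    The MTF measurement from a displayed line can be sketched as a Fourier transform of the line-spread profile normalized to its zero-frequency value. The color-matrix pre-processing and post-processing steps mentioned above are omitted, and the Gaussian profiles are synthetic:

```python
import numpy as np

def mtf_from_line_profile(profile):
    """MTF estimate from the line-spread profile of a displayed line:
    magnitude of the Fourier transform, normalized to DC."""
    spectrum = np.abs(np.fft.rfft(np.asarray(profile, dtype=float)))
    return spectrum / spectrum[0]

# A narrow line spread gives a broad MTF; a wider spread gives a
# poorer, faster-falling MTF (the 'larger negative slope' above).
x = np.arange(-32, 32)
narrow = np.exp(-(x / 2.0) ** 2)
wide = np.exp(-(x / 6.0) ** 2)
mtf_narrow = mtf_from_line_profile(narrow)
mtf_wide = mtf_from_line_profile(wide)
```

    For the NPS, the same transform is applied to a uniform-image capture and the squared magnitude is averaged over realizations.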

  13. High-Spatial-Resolution Medical-Imaging System Using a HARPICON Camera Coupled with a Fluorescent Screen.

    PubMed

    Umetani, K; Ueki, H; Ueda, K; Hirai, T; Takeda, T; Doi, T; Wu, J; Itai, Y; Akisada, M

    1996-05-01

    A high-sensitivity HARPICON camera was developed for medical X-ray imaging using a fluorescent screen. It is an avalanche-multiplication-type image pick-up tube and is 32 times more sensitive than conventional tubes. The camera also has a wider dynamic range than conventional medical-imaging cameras because a maximum output signal current of 2.3 µA is obtained and, in high-illumination-intensity regions, photocurrent is not proportional to illumination intensity. The fluorescent screen is an intensifying screen of the type used for radiographic screen-film combinations in medical examination. An X-ray image on the screen is focused on the photoconductive layer of the pick-up tube using a coupling lens with f/0.65. Experiments were performed using monochromated X-rays at the Photon Factory. An image of a spatial resolution test chart was taken in the 525 scanning-line mode of the camera. The chart pattern of 5 line-pairs/mm (spatial resolution of 100 µm) was observed at an X-ray input field of 50 x 50 mm. Real-time digital images of the heart of a 12 kg dog were obtained at a frame rate of 60 images/s after injection of a contrast medium into an artery. The images were stored in digital format at 512 x 480 pixels with 12 bits/pixel. High-spatial-resolution and high-contrast images of coronary arteries were obtained in aortography using X-rays with energy above that of the iodine K edge; the image quality was comparable with that of conventional selective coronary angiography. PMID:16702671

  14. Estimating the camera direction of a geotagged image using reference images

    E-print Network

    Motivated by recent research results showing that additional global positioning system (GPS) information helps visual localization, this work considers the fusion of user photos and satellite images obtained using GPS information

  15. Measurement of effective temperature range of fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements of TIC as used by the fire service. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods, based on the harsh operating environment and limitations of use particular to the fire service, has been proposed for inclusion in the standard. The Effective Temperature Range (ETR) measures the range of temperatures that a TIC can view while still providing useful information to the user. Specifically, extreme heat in the field of view tends to inhibit a TIC's ability to discern surfaces having intermediate temperatures, such as victims and fire fighters. The ETR measures the contrast of a target having alternating 25 °C and 30 °C bars while an increasing temperature range is imposed on other surfaces in the field of view. The ETR also indicates the thermal conditions that trigger a shift in integration time common to TIC employing microbolometer sensors. The reported values for this imaging performance metric are the hot surface temperature range within which the TIC provides adequate bar contrast, and the hot surface temperature at which the TIC shifts integration time.
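
    The bar-contrast measurement underlying the ETR can be sketched as a Michelson contrast between the mean gray levels of the hot and cold bars. The standard's exact contrast definition may differ, and the images and column indices below are illustrative:

```python
import numpy as np

def bar_contrast(image, hot_cols, cold_cols):
    """Michelson contrast of an alternating-bar target, from the mean
    intensity over the hot-bar and cold-bar pixel columns."""
    img = np.asarray(image, dtype=float)
    hot = img[:, hot_cols].mean()
    cold = img[:, cold_cols].mean()
    return (hot - cold) / (hot + cold)

# As extreme heat elsewhere in the scene compresses the displayed
# range, the bars' gray levels converge and contrast drops.
good = np.zeros((4, 4)); good[:, [1, 3]] = 100.0; good[:, [0, 2]] = 60.0
washed = np.zeros((4, 4)); washed[:, [1, 3]] = 82.0; washed[:, [0, 2]] = 78.0
c_good = bar_contrast(good, [1, 3], [0, 2])
c_washed = bar_contrast(washed, [1, 3], [0, 2])
```

    The ETR is then the range of hot-surface temperatures over which this contrast stays above an adequacy threshold.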

  16. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the crop. The height of the plant, the number of stems and the color of the leaves are well-known parameters indicating rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a labor-saving method for rice growth diagnosis was proposed, based on the vegetation cover rate of rice. The vegetation cover rate is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a vegetation cover rate calculation method that depends on the automatic binarization process, there is a possibility that the computed cover rate decreases even as the rice grows. In this paper, a calculation method for vegetation cover rate is proposed which is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. The comparison shows that the accuracy of discriminating rice plant areas was increased by the proposed method.
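
    The use of growth-hysteresis information is not specified in detail above; one plausible reading, sketched here purely as an assumption, is that cover-rate estimates are constrained not to decrease during the growing season:

```python
import numpy as np

def cover_rate(image, threshold):
    """Vegetation cover rate: fraction of pixels classified as rice
    plant by binarizing a nadir photograph (threshold illustrative)."""
    return float((np.asarray(image, dtype=float) > threshold).mean())

def cover_rate_with_hysteresis(rates):
    """Constrain a time series of cover-rate estimates to be
    non-decreasing, on the assumption that cover only grows during
    the season (an illustrative reading of 'growth hysteresis')."""
    out = []
    prev = 0.0
    for r in rates:
        prev = max(prev, r)
        out.append(prev)
    return out
```

    For example, a dip caused by a poor binarization on one survey date would be clipped to the previous date's value rather than reported as shrinking cover.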

  17. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, and give some indication of present weather. Similarly, during springtime, the camera images show the changes in the ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  18. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite covers a distance of about 14 km on the ground and the earth rotates through an angle of 30". Yaw steering is done to compensate for the earth-rotation effects, ensuring a first-level registration between the bands. But this does not achieve perfect co-registration because of attitude fluctuations, satellite movement, terrain topography, PSM steering and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of Liss-4 MX imagery of Resourcesat-2 at Level 1A. The algorithm uses the principles of photogrammetric collinearity equations. The model employs orbit trajectory and attitude fitting with polynomials, followed by direct geo-referencing with a global DEM, with which every pixel in the middle band is mapped to a particular position on the surface of the earth for the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform a relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. 
Accurate calculation of the exterior orientation parameters with GCPs is not required. Instead, the relative line of sight vector of each detector in different bands in relation to the payload is addressed. With this method a band to band registration accuracy of better than 0.3 pixels could be achieved even in high hill areas.
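
    Conjugate points between bands are commonly found by correlation methods. As an illustrative stand-in (the paper itself works through collinearity equations and attitude fitting), phase correlation recovers the integer-pixel shift between two band images:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel shift between two images via phase
    correlation: normalize the cross-power spectrum and locate the
    peak of its inverse transform."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(idx)
    # Map large positive indices back to negative shifts.
    for i, n in enumerate(corr.shape):
        if shifts[i] > n // 2:
            shifts[i] -= n
    return tuple(shifts)

# Shift a random test image by (3, 5) with wraparound and recover it.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, (3, 5), axis=(0, 1))
dy, dx = phase_correlation_shift(shifted, img)
```

    Sub-pixel refinement around the correlation peak would be needed to reach the 0.3-pixel registration accuracy quoted above.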

  19. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  20. Cluster approach based multi-camera digital image correlation: Methodology and its application in large area high temperature measurement

    NASA Astrophysics Data System (ADS)

    Chen, Xu; Yang, Lianxiang; Xu, Nan; Xie, Xin; Sia, Bernard; Xu, Roger

    2014-04-01

    A cluster-approach-based multi-camera Digital Image Correlation (DIC) system has been developed to quantify dynamic material response at temperatures up to 1200 °C. The Monochromatic Light Illuminated Stereo DIC technique was embedded to eliminate surface radiance at high temperature. The employed measurement system not only takes advantage of a conventional 3D DIC system, but also provides a feasible way to enlarge the measurement field without losing effective resolution in the area of interest. Two pairs of pre-calibrated CCD cameras are used to measure a piece of sheet nickel alloy. The view of each pair of cameras covers about half of the specimen. To guarantee the continuity of the evaluation result, an overlapped area that is covered by all four cameras is used in the setup. Unlike the conventional data-stitching technique, which stitches data from different pairs of cameras, our system with the cluster-approach technique maps all data points into a universal world coordinate system before evaluating the contour, displacement, and strain. To evaluate our system, a specimen was loaded with infrared heaters, and the dynamic contour, displacement, and strain fields were evaluated. The methodology of the employed system is introduced in this paper. The system has the potential to be expanded with more cameras to measure a very large surface in one shot.
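
    The cluster-approach mapping step, transforming each camera pair's measurements into one universal world coordinate system, can be sketched as a rigid transform per pair. The rotation and translation below are illustrative, standing in for calibrated values:

```python
import numpy as np

def to_world(points, R, t):
    """Map 3D points from one camera pair's local frame into the
    universal world coordinate system with a rigid transform."""
    return points @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)

# Two pairs observe the same point in the overlap area; after mapping
# with each pair's calibrated transform, the world coordinates agree.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 0.0])
theta = np.pi / 2
R2 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t2 = np.array([1.0, -1.0, 0.0])
p_pair1 = np.array([[1.0, 1.0, 0.0]])
p_pair2 = np.array([[2.0, 0.0, 0.0]])  # same point, pair 2's frame
w1 = to_world(p_pair1, R1, t1)
w2 = to_world(p_pair2, R2, t2)
```

    Because all points share one world frame, contour, displacement, and strain can be evaluated continuously across the overlap without data stitching.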

  1. Dynamic simultaneous anterior and posterior imaging of transplant and native renal function using dual-headed gamma camera.

    PubMed

    Mastin, S T; Drane, W E

    1992-07-01

    The authors describe the nonvisualization of a renal transplant on DTPA scan with late visualization of activity within the bladder in a 27-year-old patient with diabetes. Origin of bladder activity was later clarified by dynamic anterior and posterior imaging using MAG 3 and an extra large field-of-view, dual-headed gamma camera. PMID:1386301

  2. Calibration of the fast range imaging camera SwissRanger for use in the surveillance of the environment

    NASA Astrophysics Data System (ADS)

    Kahlmann, Timo; Ingensand, Hilmar

    2006-09-01

    Many security and defense systems need to capture their environment in one, two or even three dimensions. Therefore, adequate measurement sensors are required that provide fast, accurate and reliable 3D data. With the upcoming range imaging cameras, like the SwissRanger(TM) introduced by CSEM Switzerland, inexpensive sensors with such capability and high performance are available on the market. Because of their measurement concept, these sensors call for a special calibration approach: since several thousand distance measurement systems are implemented as pixels, a standard photogrammetric camera calibration is not sufficient. This paper presents results of investigations on the accuracy of the range imaging camera SwissRanger. A systematic calibration method is presented which takes into consideration the different influencing parameters, such as reflectivity, integration time, temperature and the distance itself. These parameters were analyzed with respect to their impact on the distance-measuring pixels and their output data. The investigations were mainly done on the high-precision calibration track line in the calibration laboratory at ETH Zurich, which provides a relative accuracy of several microns. It is shown under which circumstances the goal accuracy at the sub-centimeter level can be reached. The results of this work can be very helpful for users of range imaging systems to increase the accuracy and thus the reliability of their systems. As an example, the usefulness of a range imaging camera in security systems for room surveillance is presented.

  3. Proposal for real-time terahertz imaging system with palm-size terahertz camera and compact quantum cascade laser

    NASA Astrophysics Data System (ADS)

    Oda, Naoki; Lee, Alan W. M.; Ishi, Tsutomu; Hosako, Iwao; Hu, Qing

    2012-05-01

    This paper describes a real-time terahertz (THz) imaging system combining a palm-size THz camera with a compact quantum cascade laser (QCL). The THz camera contains a 320×240 microbolometer focal plane array, which has a nearly flat spectral response over the frequency range of ca. 1.5 to 100 THz, and operates at a 30 Hz frame rate. The QCL is installed in a compact cryogen-free cooler. A variety of QCLs were prepared, covering the frequency range from ca. 1.5 to 5 THz. THz images of biochemical samples will be presented using the combined imaging system. The performance of the imaging system, such as the signal-to-noise ratio of a transmission-type THz microscope, is predicted.

  4. High-spatial-resolution and real-time medical imaging using a high-sensitivity HARPICON camera.

    PubMed

    Umetani, K; Ueki, H; Takeda, T; Itai, Y; Mori, H; Tanaka, E; Uddin-Mohammed, M; Shinozaki, Y; Akisada, M; Sasaki, Y

    1998-05-01

    A HARPICON(TM) camera has been applied to a digital angiography system with fluorescent-screen optical-lens coupling. It uses avalanche multiplication in the photoconductive layer for high-sensitivity imaging. The limiting spatial resolutions in the 1050 scanning-line mode of the camera are about 30 and 50 μm at input field sizes of 20 x 20 and 50 x 50 mm on the screen, respectively. For high-speed imaging, the 525 scanning-line mode at a rate of 60 images s⁻¹ can be selected. High-quality images of coronary arteries in dogs were obtained by intra-aortic coronary angiography and superselective coronary angiography using a single-energy X-ray above the iodine K-edge energy. PMID:15263768

  5. Educational Applications for Digital Cameras.

    ERIC Educational Resources Information Center

    Cavanaugh, Terence; Cavanaugh, Catherine

    1997-01-01

    Discusses uses of digital cameras in education. Highlights include advantages and disadvantages, digital photography assignments and activities, camera features and operation, applications for digital images, accessory equipment, and comparisons between digital cameras and other digitizers. (AEF)

  6. Combined dynamic and steady-state infrared camera based carrier lifetime imaging of silicon wafers

    NASA Astrophysics Data System (ADS)

    Ramspeck, Klaus; Bothe, Karsten; Schmidt, Jan; Brendel, Rolf

    2009-12-01

    We report on a calibration-free dynamic carrier lifetime imaging technique yielding spatially resolved carrier lifetime maps of silicon wafers within data acquisition times of seconds. Our approach is based on infrared lifetime mapping (ILM), which exploits the proportionality between the measured infrared emission and the free carrier density. Dynamic ILM determines the lifetime analytically from the signal ratio of infrared camera images recorded directly after turning on an excitation source and after steady-state conditions are established within the sample. We investigate the applicability of dynamic infrared lifetime mapping on silicon wafers with rough surfaces, study the impact of injection dependencies, and examine the technical requirements for measuring low lifetime values in the range of microseconds. While the dynamic ILM approach is suitable for lifetimes exceeding 10 μs, a combination with steady-state ILM is required to measure lifetime values in the range of 1 μs. The injection dependence does not hamper a correct determination of the carrier lifetime by the dynamic evaluation procedure.
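
    The dynamic evaluation principle, determining the lifetime from the ratio of a frame integrated directly after turn-on to a steady-state frame, can be sketched numerically. Assuming an idealized mono-exponential rise of the excess carrier density (a simplification; the paper's analytical treatment is not reproduced here), the ratio R = 1 - (tau/T)(1 - exp(-T/tau)) is monotonic in tau and can be inverted per pixel:

```python
import math
from scipy.optimize import brentq

T = 100e-6  # camera frame integration time (s); illustrative value

def ratio(tau):
    """Modelled ratio of a frame integrated from the moment the excitation
    turns on to a steady-state frame, for carrier lifetime tau (s)."""
    return 1.0 - (tau / T) * (1.0 - math.exp(-T / tau))

def lifetime_from_ratio(r):
    """Invert the monotonic ratio model numerically (one pixel)."""
    return brentq(lambda tau: ratio(tau) - r, 1e-9, 1.0)

# Round trip for a 10 us lifetime:
tau_true = 10e-6
tau_est = lifetime_from_ratio(ratio(tau_true))
```

For lifetimes far below the frame time the ratio saturates near 1, which mirrors the abstract's point that very short lifetimes need the steady-state variant.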

  7. First results from the Faint Object Camera - Imaging the core of R Aquarii

    NASA Technical Reports Server (NTRS)

    Paresce, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.

    1991-01-01

    The Faint Object Camera on the HST was pointed toward the symbiotic long-period M7e Mira variable R Aquarii, and very high resolution images of the inner core, mainly in the ionized oxygen emission lines in the optical, are reported. Both images show bright arcs, knots, and filaments superposed on a fainter, diffuse nebulosity extending in a general SW-NE direction from the variable to the edge of the field at 10 arcsec distance. The core is resolved in forbidden O III 5007 A and forbidden O II 3727 A into at least two bright knots of emission whose positions and structures are aligned with PA = 50 deg. The central knots appear to be the source of a continuous, well-collimated, stream of material extending out to 3-4 arcsec in the northern sector corresponding to a linear distance of about 1000 AU. The northern stream seems to bend around an opaque obstacle and form a spiral before breaking up into wisps and knots. The southern stream is composed of smaller, discrete parcels of emitting gas curving to the SE.

  8. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; Li, J.-Y.; Pieters, C. M.; Gaffey, M.; Mittlefehldt, D.; Buratti, B.; Hicks, M.; McCord, T.; Combe, J.-P.; DeSantis, M. C.; Russell, C. T.; Raymond, C. A.; Marques, P. Gutierrez; Maue, T.; Hall, I.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels) allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  9. Fire service and first responder thermal imaging camera (TIC) advances and standards

    NASA Astrophysics Data System (ADS)

    Konsin, Lawrence S.; Nixdorff, Stuart

    2007-04-01

    Fire Service and First Responder Thermal Imaging Camera (TIC) applications are growing, saving lives and preventing injury and property damage. Firefighters face a wide range of serious hazards. TICs help mitigate the risks by protecting Firefighters and preventing injury, while reducing time spent fighting the fire and resources needed to do so. Most fire safety equipment is covered by performance standards. Fire TICs, however, are not covered by such standards and also suffer from inadequate operational performance and insufficient user training. Meanwhile, advancements in Fire TICs and lower costs are driving product demand. The need for a Fire TIC standard was spurred in late 2004 through a Government-sponsored workshop where experts from the First Responder community, component manufacturers, firefighter training, and those doing research on TICs discussed strategies, technologies, procedures, best practices and R&D that could improve Fire TICs. The workshop identified pressing image quality, performance metrics, and standards issues. Durability and ruggedness metrics and standard testing methods were also seen as important, as was TIC training and certification of end-users. A progress report on several efforts in these areas and their impact on the IR sensor industry will be given. This paper is a follow-up to the SPIE Orlando 2004 paper on Fire TIC usage (entitled Emergency Responders' Critical Infrared), which explored the technological development of this IR industry segment from the viewpoint of the end user, in light of the studies and reports that had established TICs as a mission critical tool for firefighters.

  10. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

    Reddy, V.; Le Corre, L.; Nathues, A.; Sierks, H.; Christensen, U. R.; Hoffmann, M.; Schroeder, S.; Vincent, J.; McSween, H. Y.; Denevi, B. W.; Li, J.; Pieters, C. M.; Gaffey, M.; Mittlefehldt, D. W.; Buratti, B. J.; Hicks, M.; McCord, T. B.; Combe, J.; DeSanctis, C.; Russell, C. T.; Raymond, C. A.; Gutierrez-Marques, P.; Maue, T.; Hall, I.

    2011-12-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a yearlong global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 μm. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (~3000 km) and High-Altitude Mapping Orbit (HAMO) (~950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels) allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  11. The JANUS camera onboard JUICE mission for Jupiter system optical imaging

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Schmitz, Nicole; Zusi, Michele; Castro, José Maria; Leese, Mark; Debei, Stefano; Magrin, Demetrio; Michalik, Harald; Palumbo, Pasquale; Jaumann, Ralf; Cremonese, Gabriele; Hoffmann, Harald; Holland, Andrew; Lara, Luisa Maria; Fiethe, Björn; Friso, Enrico; Greggio, Davide; Herranz, Miguel; Koncz, Alexander; Lichopoj, Alexander; Martinez-Navajas, Ignacio; Mazzotta Epifani, Elena; Michaelis, Harald; Ragazzoni, Roberto; Roatsch, Thomas; Rodrigo, Julio; Rodriguez, Emilio; Schipani, Pietro; Soman, Matthew; Zaccariotto, Mirco

    2014-08-01

    JANUS (Jovis, Amorum ac Natorum Undique Scrutator) is the visible camera selected for the ESA JUICE mission to the Jupiter system. Resource constraints, spacecraft characteristics, mission design, the environment, and the great variability of observing conditions across several targets put stringent constraints on the instrument architecture. In addition to the usual requirements for a planetary mission, the mass and power constraints are particularly stringent due to the long cruise and to operations at a large distance from the Sun. The JANUS design must cope with a wide range of targets, from Jupiter's atmosphere to solid satellite surfaces, exospheres, rings, and lightning, all to be observed in several color and narrow-band filters. All targets will be tracked during the mission, and in some specific cases digital terrain models (DTMs) will be derived from stereo imaging. The mission design allows a long time range for observations in the Jupiter system, with orbits around Jupiter and multiple fly-bys of the satellites for 2.5 years, followed by about 6 months in orbit around Ganymede, at distances from the surface varying from 10^4 km down to a few hundred km. Our concept is based on a single optical channel, fine-tuned to cover all scientific objectives from low- to high-resolution imaging. A catoptric telescope with excellent optical quality is coupled with a rectangular detector, avoiding any scanning mechanism. In this paper the present JANUS design and its foreseen scientific capabilities are discussed.

  12. Development of Electron Tracking Compton Camera using micro pixel gas chamber for medical imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Hattori, Kaori; Kohara, Ryota; Kunieda, Etsuo; Kubo, Atsushi; Kubo, Hidetoshi; Miuchi, Kentaro; Nakahara, Tadaki; Nagayoshi, Tsutomu; Nishimura, Hironobu; Okada, Yoko; Orito, Reiko; Sekiya, Hiroyuki; Shirahata, Takashi; Takada, Atsushi; Tanimori, Toru; Ueno, Kazuki

    2007-10-01

    We have developed the Electron Tracking Compton Camera (ETCC), which reconstructs the 3-D tracks of the electron scattered in the Compton process, for both sub-MeV and MeV gamma rays. By measuring the directions and energies of both the scattered gamma ray and the recoil electron, the direction of the incident gamma ray is determined for each individual photon. Furthermore, the measured residual angle between the recoil electron and the scattered gamma ray provides powerful kinematical background rejection. For the 3-D tracking of the electrons, a micro time projection chamber (μ-TPC) was developed using a new type of micro-pattern gas detector. The ETCC consists of this μ-TPC (10×10×8 cm³) and 6×6×13 mm³ GSO crystal pixel arrays with a flat-panel photomultiplier surrounding the μ-TPC for detecting the scattered gamma rays. The ETCC achieved an angular resolution of 6.6° (FWHM) at the 364 keV line of ¹³¹I. A mobile ETCC for medical imaging, housed in a 1 m cubic box, has been in operation since October 2005. Here, we present imaging results for line sources and a phantom of the human thyroid gland using 364 keV gamma rays from ¹³¹I.
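
    The per-photon direction reconstruction described above follows from momentum conservation. Below is a minimal sketch, assuming the scattered gamma ray's energy and direction and the recoil electron's kinetic energy and track direction are measured; the function name is illustrative, not from the paper.

```python
import numpy as np

M_E = 511.0  # electron rest energy (keV)

def incident_direction(e_gamma, g_hat, k_e, e_hat):
    """Reconstruct the incident gamma-ray direction for one photon by
    momentum conservation, from the scattered gamma ray (energy e_gamma in
    keV, unit vector g_hat) and the recoil electron (kinetic energy k_e in
    keV, unit vector e_hat). Momenta are in keV/c."""
    p_e = np.sqrt(k_e * (k_e + 2.0 * M_E))          # relativistic electron momentum
    p_in = e_gamma * np.asarray(g_hat) + p_e * np.asarray(e_hat)
    return p_in / np.linalg.norm(p_in)

# Consistency check: simulate a 364 keV photon (the 131I line) incident along +z.
E0, theta = 364.0, np.deg2rad(40.0)
E_g = E0 / (1.0 + (E0 / M_E) * (1.0 - np.cos(theta)))   # Compton formula
g = np.array([np.sin(theta), 0.0, np.cos(theta)])       # scattered gamma direction
p_e_vec = E0 * np.array([0.0, 0.0, 1.0]) - E_g * g      # electron momentum vector
e = p_e_vec / np.linalg.norm(p_e_vec)                   # recoil electron direction
s_hat = incident_direction(E_g, g, E0 - E_g, e)         # recovers +z
```

The residual-angle background cut mentioned in the abstract compares the measured electron direction with the one predicted by this same kinematics.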

  13. Factors affecting the repeatability of gamma camera calibration for quantitative imaging applications using a sealed source

    NASA Astrophysics Data System (ADS)

    Anizan, N.; Wang, H.; Zhou, X. C.; Wahl, R. L.; Frey, E. C.

    2015-02-01

    Several applications in nuclear medicine require absolute activity quantification of single photon emission computed tomography images. Obtaining a repeatable calibration factor that converts voxel values to activity units is essential for these applications. Because source preparation and measurement of the source activity using a radionuclide activity meter are potential sources of variability, this work investigated instrumentation and acquisition factors affecting repeatability using planar acquisition of sealed sources. The calibration factor was calculated for different acquisition and geometry conditions to evaluate the effect of the source size, lateral position of the source in the camera field-of-view (FOV), source-to-camera distance (SCD), and variability over time using sealed Ba-133 sources. A small region of interest (ROI) based on the source dimensions and collimator resolution was investigated to decrease the background effect. A statistical analysis with a mixed-effects model was used to evaluate quantitatively the effect of each variable on the global calibration factor variability. A variation of 1 cm in the measurement of the SCD from the assumed distance of 17 cm led to a variation of 1–2% in the calibration factor measurement using a small disc source (0.4 cm diameter) and less than 1% with a larger rod source (2.9 cm diameter). The lateral position of the source in the FOV and the variability over time had small impacts on calibration factor variability. The residual error component was well estimated by Poisson noise. Repeatability of better than 1% in a calibration factor measurement using a planar acquisition of a sealed source can be reasonably achieved. The best reproducibility was obtained with the largest source with a count rate much higher than the average background in the ROI, and when the SCD was positioned within 5 mm of the desired position. In this case, calibration source variability was limited by the quantum noise.
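
    A minimal sketch of the calibration-factor measurement described above: background-subtract a small ROI, then divide the net count rate by the decay-corrected source activity. The function name, the nominal Ba-133 half-life constant, and the numbers in the example are illustrative, not values from the study.

```python
HALF_LIFE_BA133_D = 3854.0  # nominal Ba-133 half-life in days (~10.55 years)

def calibration_factor(roi_counts, bkg_per_pixel, roi_pixels, acq_time_s,
                       a0_mbq, days_since_assay):
    """Planar calibration factor in (counts/s)/MBq: net ROI count rate
    divided by the decay-corrected activity of the sealed source."""
    net_rate = (roi_counts - bkg_per_pixel * roi_pixels) / acq_time_s
    activity = a0_mbq * 0.5 ** (days_since_assay / HALF_LIFE_BA133_D)
    return net_rate / activity

# Illustrative numbers: 120k counts in a 200-pixel ROI over 300 s,
# 5 counts/pixel background, a 5 MBq source assayed one year earlier.
cf = calibration_factor(120000, 5.0, 200, 300.0, 5.0, 365.0)
```

The study's finding that the best repeatability comes from a high source-to-background ratio corresponds here to the background term being a small fraction of `roi_counts`.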

  14. A novel Compton camera design featuring a rear-panel shield for substantial noise reduction in gamma-ray images

    NASA Astrophysics Data System (ADS)

    Nishiyama, T.; Kataoka, J.; Kishimoto, A.; Fujita, T.; Iwamoto, Y.; Taya, T.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Sakurai, N.; Adachi, S.; Uchiyama, T.

    2014-12-01

    After the Japanese nuclear disaster in 2011, large amounts of radioactive isotopes were released and still remain a serious problem in Japan. Consequently, various gamma cameras are being developed to help identify radiation hotspots and ensure effective decontamination operations. The Compton camera utilizes the kinematics of Compton scattering to construct images without using a mechanical collimator, and features a wide field of view. For instance, we have developed a novel Compton camera that features a small size (13 × 14 × 15 cm³) and light weight (1.9 kg), but which also achieves high sensitivity thanks to Ce:GAGG scintillators optically coupled with MPPC arrays. By definition, in such a Compton camera, gamma rays are expected to scatter in the ``scatterer'' and then be fully absorbed in the ``absorber'' (in what is called a forward-scattered event). However, high-energy gamma rays often interact with the detector in the opposite order, initially scattering in the absorber and then being absorbed in the scatterer, in what is called a ``back-scattered'' event. Any contamination by such back-scattered events is known to substantially degrade the quality of gamma-ray images, but determining the order of gamma-ray interaction based solely on the energy deposits in the scatterer and absorber is quite difficult. For this reason, we propose a novel yet simple Compton camera design that includes a rear-panel shield (a few mm thick) consisting of W or Pb located just behind the scatterer. Since the energy of the scattered gamma rays in back-scattered events is much lower than that in forward-scattered events, we can effectively discriminate and reduce back-scattered events to improve the signal-to-noise ratio in the images. This paper presents our detailed optimization of the rear-panel shield using Geant4 simulation, and describes a demonstration test using our Compton camera.

  15. Development of a pixelated GSO gamma camera system with tungsten parallel hole collimator for single photon imaging

    SciTech Connect

    Yamamoto, S.; Watabe, H.; Kanai, Y.; Shimosegawa, E.; Hatazawa, J. [Kobe City College of Technology, 8-3 Gakuen-Higashi-machi, Nishi-ku, Kobe 651-2194 (Japan); Department of Molecular Imaging in Medicine, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan); Department of Nuclear Medicine and Tracer Kinetics, Osaka University Graduate School of Medicine, Osaka 565-0871 (Japan)]

    2012-02-15

    Purpose: In small animal imaging using a single photon emitting radionuclide, a high resolution gamma camera is required. Recently, position sensitive photomultiplier tubes (PSPMTs) with high quantum efficiency have been developed. By combining these with nonhygroscopic scintillators of relatively low light output, a high resolution gamma camera becomes practical for low energy gamma photons. Therefore, the authors developed a gamma camera by combining a pixelated Ce-doped Gd₂SiO₅ (GSO) block with a high quantum efficiency PSPMT. Methods: GSO was selected as the scintillator because it is not hygroscopic and does not contain any natural radioactivity. An array of 1.9 mm x 1.9 mm x 7 mm individual GSO crystal elements was constructed. These GSOs were combined with a 0.1-mm thick reflector to form a 22 x 22 matrix and optically coupled to a high quantum efficiency PSPMT (H8500C-100 MOD8). The GSO gamma camera was encased in a tungsten gamma-ray shield with a tungsten pixelated parallel hole collimator, and the basic performance was measured for Co-57 gamma photons (122 keV). Results: In a two-dimensional position histogram, all pixels were clearly resolved. The energy resolution was ~15% FWHM. With the 20-mm thick tungsten pixelated collimator, the spatial resolution was 4.4-mm FWHM at 40 mm from the collimator surface, and the sensitivity was ~0.05%. Phantom and small animal images were successfully obtained with the developed gamma camera. Conclusions: These results confirmed that the developed pixelated GSO gamma camera has potential as an effective instrument for low energy gamma photon imaging.

  16. On-Orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R.; Robinson, M. S.

    2013-12-01

    Lunar Reconnaissance Orbiter (LRO) is equipped with a single Wide Angle Camera (WAC) [1] designed to collect monochromatic and multispectral observations of the lunar surface. Cartographically accurate image mosaics and stereo-image-based terrain models require that the position of each pixel in a given image be known, with a high degree of accuracy and precision, relative to a corresponding point on the lunar surface. The Lunar Reconnaissance Orbiter Camera (LROC) team initially characterized the WAC geometry prior to launch at the Malin Space Science Systems calibration facility. After lunar orbit insertion, the LROC team recognized spatially varying geometric offsets between color bands. These misregistrations made analysis of the color data problematic and showed that refinements to the pre-launch geometric analysis were necessary. The geometric parameters that define the WAC optical system were characterized from statistics gathered from co-registering over 84,000 image pairs. For each pair, we registered all five visible WAC bands to a precisely rectified Narrow Angle Camera (NAC) image (accuracy <15 m) [2] to compute key geometric parameters. In total, we registered 2,896 monochrome and 1,079 color WAC observations to nearly 34,000 NAC observations and collected over 13.7 million data points across the visible portion of the WAC CCD. Using the collected statistics, we refined the relative pointing (yaw, pitch and roll), effective focal length, principal point coordinates, and radial distortion coefficients. This large dataset also revealed spatial offsets between bands after orthorectification due to chromatic aberrations in the optical system. As white light enters the optical system, the light bends at different magnitudes as a function of wavelength, causing a single incident ray to disperse into a spectral spread of color [3,4].
This lateral chromatic aberration effect, also known as 'chromatic difference in magnification' [5], introduces variation in the effective focal length for each WAC band. Secondly, tangential distortions caused by minor decentering in the optical system altered the derived exterior orientation parameters for each 14-line WAC band. We computed the geometric parameter sets separately for each band to characterize the lateral chromatic aberrations and the decentering components in the WAC optical system. With this approach, we negated the need for additional tangential terms in the distortion model, thus reducing the number of computations during image orthorectification and expediting that process. We undertook a similar process to refine the geometry of the UV bands (321 and 360 nm), except that we registered each UV band to the orthorectified visible bands of the same WAC observation (the visible bands have resolution four times greater than the UV). The resulting 7-band camera model with refined geometric parameters enables map projection with sub-pixel accuracy. References: [1] Robinson et al. (2010) Space Sci. Rev. 150, 81-124 [2] Wagner et al. (2013) Lunar Sci Forum [3] Mahajan, V.N. (1998) Optical Imaging and Aberrations [4] Fiete, R.D. (2013), Manual of Photogrammetry, pp. 359-450 [5] Brown, D.C. (1966) Photogrammetric Eng. 32, 444-462.
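
    As an illustration of how a refined principal point and radial distortion coefficients are used downstream, here is a minimal sketch of inverting a two-coefficient Brown radial distortion model by fixed-point iteration. The coefficients and camera constants are hypothetical, not LROC WAC values.

```python
import numpy as np

def undistort(xd, yd, f, cx, cy, k1, k2, n_iter=10):
    """Invert a two-coefficient radial (Brown) distortion model,
    x_d = x_u * (1 + k1*r^2 + k2*r^4), by fixed-point iteration. r is
    measured from the principal point (cx, cy) in units of the focal
    length f. Returns undistorted pixel coordinates."""
    xn = (np.asarray(xd, dtype=float) - cx) / f   # normalised distorted coords
    yn = (np.asarray(yd, dtype=float) - cy) / f
    xu, yu = xn.copy(), yn.copy()
    for _ in range(n_iter):                       # fixed-point refinement
        r2 = xu**2 + yu**2
        scale = 1.0 + k1 * r2 + k2 * r2**2
        xu, yu = xn / scale, yn / scale
    return xu * f + cx, yu * f + cy
```

Per the abstract, each WAC band would carry its own (f, cx, cy, k1, k2) set, which is what absorbs the lateral chromatic aberration without tangential terms.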

  17. CCD Camera

    DOEpatents

    Roth, Roger R. (Minnetonka, MN)

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  18. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  19. Photoreceptor images of normal eyes and of eyes with macular dystrophy obtained in vivo with an adaptive optics fundus camera

    Microsoft Academic Search

    Kenichiro Bessho; Takashi Fujikado; Toshifumi Mihashi; Tatsuya Yamaguchi; Naoki Nakazawa; Yasuo Tano

    2008-01-01

    Purpose: To report on images of the human photoreceptor mosaic acquired in vivo with a newly developed, compact adaptive optics (AO) fundus camera. Methods: The photoreceptors of two normal subjects and a patient with macular dystrophy were examined by using an AO fundus camera equipped with a liquid crystal phase modulator. In the eye with macular dystrophy, the fixation point in the

  20. Multi-temporal database of High Resolution Stereo Camera (HRSC) images

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.

    2013-09-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), have shown that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting processes [11], changes of the polar caps [12] and impact cratering processes [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can have effects on the global heat balance and the circulation of winds, which can result in further surface changes [14-15]. In particular, the High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore well suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with a focus on surface morphology, geology and mineralogy, the role of liquid water on the surface and in the atmosphere, volcanism, and the proposed climate change throughout Martian history, and it has significantly improved our understanding of the evolution of Mars [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data.
However, while previews of the images are available, there is no way to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting the surface changes that occurred between two or more images.

  1. Retrieval of sulfur dioxide from a ground-based thermal infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.; Bernardo, C.

    2014-09-01

    Recent advances in uncooled detector technology now offer the possibility of using relatively inexpensive thermal (7 to 14 μm) imaging devices as tools for studying and quantifying the behaviour of hazardous gases and particulates in atmospheric plumes. An experimental fast-sampling (60 Hz) ground-based uncooled thermal imager (Cyclops), operating with four spectral channels at central wavelengths of 8.6, 10, 11 and 12 μm and one broadband channel (7-14 μm), has been tested at several volcanoes and at an industrial site, where SO2 was a major constituent of the plumes. This paper presents new algorithms, which include atmospheric corrections to the data and better calibrations, to show that SO2 slant column density can be reliably detected and quantified. Our results indicate that it is relatively easy to identify and discriminate SO2 in plumes, but more challenging to quantify the column densities. A full description of the retrieval algorithms, illustrative results and a detailed error analysis are provided. The noise-equivalent temperature difference (NEΔT) of the spectral channels, a fundamental measure of the quality of the measurements, lies between 0.4 and 0.8 K, resulting in slant column density errors of 20%. Frame averaging and improved NEΔTs can reduce this error to less than 10%, making stand-off, day or night operation of an instrument of this type very practical both for monitoring industrial SO2 emissions and for SO2 column density and emission measurements at active volcanoes. The imaging camera system may also be used to study thermal radiation from meteorological clouds and the atmosphere.

  2. Biophysical control of intertidal benthic macroalgae revealed by high-frequency multispectral camera images

    NASA Astrophysics Data System (ADS)

    van der Wal, Daphne; van Dalen, Jeroen; Wielemaker-van den Dool, Annette; Dijkstra, Jasper T.; Ysebaert, Tom

    2014-07-01

    Intertidal benthic macroalgae are a biological quality indicator in estuaries and coasts. While remote sensing has been applied to quantify the spatial distribution of such macroalgae, it is generally not used for their monitoring. We examined the day-to-day and seasonal dynamics of macroalgal cover on a sandy intertidal flat using visible and near-infrared images from a time-lapse camera mounted on a tower. Benthic algae were identified using supervised, semi-supervised and unsupervised classification techniques, validated with monthly ground-truthing over one year. A supervised classification (based on maximum likelihood, using training areas identified in the field) performed best in discriminating between sediment, benthic diatom films and macroalgae, with highest spectral separability between macroalgae and diatoms in spring/summer. An automated unsupervised classification (based on the Normalised Difference Vegetation Index, NDVI) allowed detection of daily changes in macroalgal coverage without the need for calibration. This method showed a bloom of macroalgae (filamentous green algae, Ulva sp.) in summer with > 60% cover, but with pronounced superimposed day-to-day variation in cover. Waves were a major factor in regulating macroalgal cover, but regrowth of the thalli after a summer storm was fast (2 weeks). Images and in situ data demonstrated that the protruding tubes of the polychaete Lanice conchilega facilitated both settlement (anchorage) and survival (resistance to waves) of the macroalgae. Thus, high-frequency, high resolution images revealed the mechanisms regulating the dynamics in cover of the macroalgae and their spatial structuring. Ramifications for the mode, timing, frequency and evaluation of monitoring macroalgae by field and remote sensing surveys are discussed.
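
    The automated unsupervised step, NDVI computation followed by thresholding, can be sketched as follows; the threshold and the toy pixel values are illustrative, not those used in the study.

```python
import numpy as np

def macroalgal_cover(red, nir, ndvi_threshold=0.3):
    """Unsupervised vegetation mapping from co-registered visible (red) and
    near-infrared camera bands: compute NDVI per pixel and threshold it.
    Returns the NDVI map and the percentage of pixels classed as vegetated."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids 0/0
    veg = ndvi > ndvi_threshold
    return ndvi, 100.0 * veg.mean()

# Toy 2x2 scene: two vegetated pixels, two bare-sediment pixels.
red = np.array([[10.0, 80.0], [12.0, 90.0]])
nir = np.array([[60.0, 85.0], [70.0, 95.0]])
ndvi, cover = macroalgal_cover(red, nir)
```

Because the index is a ratio, it needs no radiometric calibration, which is the property the abstract highlights for detecting daily changes.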

  3. Controlling Small Fixed Wing UAVs to Optimize Image Quality from On-Board Cameras

    NASA Astrophysics Data System (ADS)

    Jackson, Stephen Phillip

    Small UAVs have shown great promise as tools for collecting aerial imagery both quickly and cheaply. Furthermore, using a team of small UAVs, as opposed to one large UAV, has shown promise as a cheaper, faster and more robust method for collecting image data over a large area. Unfortunately, the autonomy of small UAVs has not yet reached the point where they can be relied upon to collect good aerial imagery without human intervention or supervision. The work presented here aims to increase the level of autonomy of small UAVs so that they can independently and reliably collect quality aerial imagery. The main contribution of this paper is a novel approach to controlling small fixed wing UAVs that optimizes the quality of the images captured by cameras on board the aircraft. This main contribution is built on three minor contributions: a kinodynamic motion model for small fixed wing UAVs, an iterative Gaussian sampling strategy for rapidly exploring random trees, and a receding horizon, nonlinear model predictive controller for controlling a UAV's sensor footprint. The kinodynamic motion model is built on the traditional unicycle model of an aircraft. In order to create dynamically feasible paths, the kinodynamic motion model augments the kinematic unicycle model by adding a first-order estimate of the aircraft's roll dynamics. Experimental data is presented that not only validates this novel kinodynamic motion model, but also shows a 25% improvement over the traditional unicycle model. A novel Gaussian biased sampling strategy is presented for building a rapidly exploring random tree that quickly iterates to a near-optimal path. This novel sampling strategy does not require a method for calculating the nearest node to a point, which means that it runs much faster than the traditional RRT algorithm, but it still results in a Gaussian distribution of nodes. Furthermore, because it uses the kinodynamic motion model, the near-optimal path it generates is, by definition, dynamically feasible. A nonlinear model predictive controller is presented to address the non-minimum-phase problem of tracking a target on the ground from a UAV with a fixed camera. It is shown that this novel controller is probabilistically guaranteed to asymptotically converge to the path that minimizes the cross-track error of the UAV's sensor footprint. In addition, for a minimum-phase problem, it is shown that its tracking performance is on par with a sliding mode controller, which, at least theoretically, is capable of achieving perfect tracking. Finally, all three of these contributions are experimentally validated by performing a variety of tracking tasks using the Berkeley Sig Rascal UAV.
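
    The augmented unicycle model can be sketched as follows (an assumed Euler-integration form; the roll time constant tau and the speed v are illustrative parameters, not values taken from the thesis):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def step(state, roll_cmd, v, tau, dt):
    """Advance the augmented unicycle one Euler step.

    state = (x, y, heading, roll).  The heading changes at the
    coordinated-turn rate g*tan(roll)/v, and the roll angle follows a
    first-order lag toward the commanded roll (time constant tau)."""
    x, y, psi, phi = state
    return (x + v * math.cos(psi) * dt,
            y + v * math.sin(psi) * dt,
            psi + (G * math.tan(phi) / v) * dt,
            phi + ((roll_cmd - phi) / tau) * dt)

# Zero roll command: the aircraft holds heading and flies straight.
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(10):
    state = step(state, roll_cmd=0.0, v=10.0, tau=0.5, dt=0.1)
print(state)  # after 1 s at 10 m/s, roughly (10, 0, 0, 0)
```

    A sampling-based planner can then call such a step function repeatedly to check that candidate paths are dynamically feasible.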

  4. Active hyperspectral imaging using a quantum cascade laser (QCL) array and digital-pixel focal plane array (DFPA) camera.

    PubMed

    Goyal, Anish; Myers, Travis; Wang, Christine A; Kelly, Michael; Tyrrell, Brian; Gokden, B; Sanchez, Antonio; Turner, George; Capasso, Federico

    2014-06-16

    We demonstrate active hyperspectral imaging using a quantum-cascade laser (QCL) array as the illumination source and a digital-pixel focal-plane-array (DFPA) camera as the receiver. The multi-wavelength QCL array used in this work comprises 15 individually addressable QCLs in which the beams from all lasers are spatially overlapped using wavelength beam combining (WBC). The DFPA camera was configured to integrate the laser light reflected from the sample and to perform on-chip subtraction of the passive thermal background. A 27-frame hyperspectral image was acquired of a liquid contaminant on a diffuse gold surface at a range of 5 meters. The measured spectral reflectance closely matches the calculated reflectance. Furthermore, the high-speed capabilities of the system were demonstrated by capturing differential reflectance images of sand and KClO3 particles that were moving at speeds of up to 10 m/s. PMID:24977536

  5. Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera

    NASA Technical Reports Server (NTRS)

    Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.

    1988-01-01

    The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 A wavelength range.

  6. A double photomultiplier Compton camera and its readout system for mice imaging

    SciTech Connect

    Fontana, Cristiano Lino [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Padova, Via Marzolo 8, Padova 35131 (Italy); Atroshchenko, Kostiantyn [Physics Department Galileo Galilei, University of Padua, Via Marzolo 8, Padova 35131 (Italy) and INFN Legnaro, Viale dell'Universita 2, Legnaro PD 35020 (Italy); Baldazzi, Giuseppe [Physics Department, University of Bologna, Viale Berti Pichat 6/2, Bologna 40127, Italy and INFN Bologna, Viale Berti Pichat 6/2, Bologna 40127 (Italy); Bello, Michele [INFN Legnaro, Viale dell'Universita 2, Legnaro PD 35020 (Italy); Uzunov, Nikolay [Department of Natural Sciences, Shumen University, 115 Universitetska str., Shumen 9712, Bulgaria and INFN Legnaro, Viale dell'Universita 2, Legnaro PD 35020 (Italy); Di Domenico, Giovanni [Physics Department, University of Ferrara, Via Saragat 1, Ferrara 44122 (Italy) and INFN Ferrara, Via Saragat 1, Ferrara 44122 (Italy)

    2013-04-19

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs 'electronic collimation', i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second one (the calorimeter), in which the scattered gamma is completely absorbed. The polar angle, and hence a 'cone' of possible incident directions, is then obtained (an event with 'incomplete geometry'). Different solutions for the two detectors are proposed in the literature: our design foresees two similar Position Sensitive Photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge-multiplexing net of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Assets are the low cost and the simplicity of design and operation, having just one type of device; among the drawbacks is a lower resolution with respect to more sophisticated trackers with a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that the spatial and energy accuracy is suitable for the inspection of radioactivity in mice.
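
    The kinematics behind the electronic collimation, recovering the polar cone angle from the two energy deposits, can be sketched as follows (the standard textbook Compton relation, not the authors' reconstruction code; energies in keV):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_incident, e_scattered):
    """Polar half-angle of the cone of possible incident directions,
    from the incident photon energy and the scattered photon energy
    absorbed in the calorimeter (both in keV)."""
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_theta)

# A 511 keV photon that leaves the scatterer with half its energy
# defines a cone with a 90 degree half-angle:
angle = compton_cone_angle(511.0, 255.5)
print(round(math.degrees(angle), 3))  # -> 90.0
```

    The incident energy is known for a chosen radiopharmaceutical, so in practice the tracker's energy deposit alone fixes the cone for each coincidence event.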

  7. A compact, discrete CsI(Tl) scintillator/Si photodiode gamma camera for breast cancer imaging

    SciTech Connect

    Gruber, Gregory J.

    2000-12-01

    Recent clinical evaluations of scintimammography (radionuclide breast imaging) are promising and suggest that this modality may prove a valuable complement to X-ray mammography and traditional breast cancer detection and diagnosis techniques. Scintimammography, however, typically has difficulty revealing tumors that are less than 1 cm in diameter, are located in the medial part of the breast, or are located in the axillary nodes. These shortcomings may in part be due to the use of large, conventional Anger cameras not optimized for breast imaging. In this thesis I present compact single photon camera technology designed specifically for scintimammography which strives to alleviate some of these limitations by allowing better and closer access to sites of possible breast tumors. Specific applications are outlined. The design is modular, thus a camera of the desired size and geometry can be constructed from an array (or arrays) of individual modules and a parallel hole lead collimator for directional information. Each module consists of: (1) an array of 64 discrete, optically-isolated CsI(Tl) scintillator crystals 3 × 3 × 5 mm³ in size, (2) an array of 64 low-noise Si PIN photodiodes matched 1-to-1 to the scintillator crystals, (3) an application-specific integrated circuit (ASIC) that amplifies the 64 photodiode signals and selects the signal with the largest amplitude, and (4) connectors and hardware for interfacing the module with a motherboard, thereby allowing straightforward computer control of all individual modules within a camera.

  8. Infrared Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  9. Modelling image profiles produced with a small field of view gamma camera with a single pinhole collimator

    NASA Astrophysics Data System (ADS)

    Bugby, S. L.; Lees, J. E.; Perkins, A. C.

    2012-11-01

    Gamma cameras making use of parallel-hole collimators have a long history in medical imaging. Pinhole collimators were used in the original gamma camera instruments and have been used more recently in dedicated organ specific systems, intraoperative instruments and for small animal imaging, providing higher resolution over a smaller field of view than the traditional large field of view systems. With the resurgence of interest in the use of pinhole collimators for small field of view (SFOV) medical gamma cameras, it is important to be able to accurately determine their response under various conditions. Several analytical approaches to pinhole response have been reported in the literature including models of 3D pinhole imaging systems. Success has also been reported in the use of Monte Carlo simulations; however this approach can require significant time and computing power. This report describes a 2D model that was used to investigate some common problems in pinhole imaging: the variation in resolution over the field of view and the use of 'point' sources for quantifying pinhole response.

  10. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E. M.; Gaddis, L. R.; Johnson, J. R.; Soderblom, L. A.; Ward, A. W.; Smith, P. H.; Britt, D. T.

    1999-04-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3×10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used.
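
    Step (2) above, turning matched image features into 3-D coordinates once camera positions and pointing are known, can be illustrated with a generic two-ray midpoint triangulation (a sketch under simplified assumptions, not the algorithm used by the commercial system or ISIS; the coordinates are made up):

```python
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between rays c1 + t*d1 and c2 + s*d2.

    c1, c2: camera centers; d1, d2: viewing directions of the matched feature.
    """
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(c1, scale(d1, t))  # closest point on ray 1
    p2 = add(c2, scale(d2, s))  # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# Two cameras observing the same surface point at (2, 0, 0):
point = triangulate([0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [2.0, 5.0, 0.0], [0.0, -1.0, 0.0])
print(point)  # -> [2.0, 0.0, 0.0]
```

    With noisy measurements the two rays do not intersect exactly; the midpoint is the least-squares compromise, and a bundle adjustment such as step (1) refines the camera parameters so that these residuals shrink across all features simultaneously.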

  11. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J.R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.

    1999-01-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10^3 features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10^5 closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.

  12. Gamma-ray imaging with a large micro-TPC and a scintillation camera

    Microsoft Academic Search

    K. Hattori; S. Kabuki; H. Kubo; S. Kurosawa; K. Miuchi; T. Nagayoshi; H. Nishimura; Y. Okada; R. Orito; H. Sekiya; A. Takada; A. Takeda; T. Tanimori; K. Ueno

    2007-01-01

    We report on the development of a large Compton camera with the full reconstruction of the Compton process based on a prototype. This camera consists of two kinds of detectors. One is a gaseous time projection chamber (micro-TPC) for measuring the energy and the track of a Compton recoil electron. The micro-TPC is based on a µ-PIC and a GEM,

  13. Observation of diffuse gamma-ray with Electron-Tracking Compton imaging camera loaded on balloon

    Microsoft Academic Search

    A. Takada; T. Tanimori; H. Kubo; K. Miuchi; K. Tsuchiya; S. Kabuki; H. Nishimura; K. Hattori; K. Ueno; S. Kurosawa; N. Nonaka; E. Mizuta; R. Orito; T. Nagayoshi

    2007-01-01

    We have developed an electron tracking Compton camera (ETCC) as an MeV gamma-ray telescope in the next generation. Our detector consists of a gaseous time projection chamber and a position sensitive scintillation camera. In order to evaluate the performance of this detector, we constructed a flight model detector as a balloon experiment for the observation of diffuse cosmic gamma rays

  14. Novel architecture for surveillance cameras with complementary metal oxide semiconductor image sensors

    Microsoft Academic Search

    Igor Kharitonenko; Wanqing Li; Chaminda Weerasinghe

    2005-01-01

    This work presents a novel architecture of an intelligent video surveillance camera. It is embedded with automated scene analysis and object behavior detection, so that operators can monitor more venues relying on the system that provides immediate response to suspicious events. The developed camera turns passive video data recording systems into active collaborators with security operators leaving to them only

  15. Automated Registration of High Resolution Images from Slide Presentation and Whiteboard Handwriting via a Video Camera

    E-print Network

    Zhu, Zhigang

    Handwriting via a Video Camera Weihong Li+ , Hao Tang§ and Zhigang Zhu§+ * § Department of Computer SciencePoint© (PPT) slide presentation and a whiteboard handwriting capture system, when used together, could provide with printing notes and the other with handwriting notes, we use a low-cost digital camera as a bridge to align

  16. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
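
    The trade-offs students explore here follow from two simple relations, sketched below (the 1.9 constant is Rayleigh's choice for the diffraction-limited pinhole diameter; other constants appear in the literature, and the example numbers are illustrative):

```python
import math

def optimal_diameter(wavelength, focal_length):
    """Pinhole diameter balancing geometric blur against diffraction,
    using Rayleigh's d = 1.9 * sqrt(lambda * f) (all lengths in metres)."""
    return 1.9 * math.sqrt(wavelength * focal_length)

def relative_exposure(focal_length, diameter):
    """Required exposure scales as the square of the f-number f/d."""
    return (focal_length / diameter) ** 2

# Green light, 50 mm pinhole-to-sensor distance:
d = optimal_diameter(550e-9, 0.05)
print(round(d * 1000, 2))  # optimal diameter in mm, about 0.32
# Halving the pinhole quadruples the required exposure time:
print(relative_exposure(0.05, d / 2) / relative_exposure(0.05, d))  # -> 4.0
```

    Magnification follows directly from similar triangles: image size / object size = pinhole-to-sensor distance / pinhole-to-object distance.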

  17. Variations of zonal wind speed at Venus cloud tops from Venus Monitoring Camera UV images

    NASA Astrophysics Data System (ADS)

    Khatuntsev, Igor; Patsaeva, Marina; Ignatiev, Nikolai; Titov, Dmitri; Markiewicz, Wojciech J.

    2013-04-01

    Seven years of continuous monitoring of Venus by ESA's Venus Express have provided an opportunity to study the dynamics of the atmosphere of Venus. The Venus Monitoring Camera (VMC) [1] delivered the longest and most complete set of UV images so far for studying the cloud-level circulation by tracking the motion of cloud features. We analyzed 130 orbits with manual cloud tracking and 600 orbits with a digital correlation method. Here we present the latest update of our results. The total number of wind vectors derived in this work is approximately half a million. During the Venus Express observations the mean zonal speed was in the range of 85-110 m/s. VMC observations indicated a long-term trend for the zonal wind speed at low latitudes to increase. The origin of this low-frequency trend, with a period of about 3000 days, is unclear. Fourier analysis [2-3] revealed quasi-periodicities in the zonal circulation at low latitudes. Two groups of periods were found. The first group is close to the period of superrotation at low latitudes (4.83±0.1 days), with periods of 4.1-5.1 days and amplitudes ranging from ±4.2 to ±17.4 m/s. The amplitude and phase of the oscillations depend on latitude and vary in time, while remaining stable over intervals of at least 70 days. These short-term oscillations may be caused by wave processes in the mesosphere of Venus at the cloud-top level; the wave number of the observed oscillations is 1. The second group comprises long-term periods (116 days, 224 days) caused by the orbital motion of Venus and related to the periodicity of the VMC observations. VMC UV observations also showed a clear diurnal pattern of the mean circulation. The zonal wind demonstrated semi-diurnal variations with a minimum speed close to noon (11-14 h) and maxima in the morning (8-9 h) and in the evening (16-17 h). The meridional component clearly peaks in the early afternoon (13-15 h) at latitudes near 50S.
The minimum of the meridional wind is located at low latitudes in the morning (8-11h). References [1] Markiewicz W. J. et al.: Venus Monitoring Camera for Venus Express // Planet. Space Sci.. V.55(12). pp1701-1711. doi:10.1016/j.pss.2007.01.004, 2007. [2] Deeming T.J.: Fourier analysis with unequally-spaced data. Astroph. and Sp. Sci. V.36, pp137-158, 1975. [3] Terebizh, V.Yu. Time series analysis in astrophysics. Moscow: "Nauka," Glav. red. fiziko-matematicheskoi lit-ry, 1992. In Russian
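
    The Fourier analysis for unequally spaced data cited as reference [2] amounts to evaluating the classical periodogram at trial frequencies. A minimal sketch (an illustration of the Deeming approach, not the VMC pipeline itself):

```python
import math

def deeming_power(times, values, freq):
    """Deeming (1975) periodogram power of unequally spaced samples
    at one trial frequency (cycles per time unit)."""
    n = len(times)
    mean = sum(values) / n
    re = sum((v - mean) * math.cos(2 * math.pi * freq * t)
             for t, v in zip(times, values))
    im = sum((v - mean) * math.sin(2 * math.pi * freq * t)
             for t, v in zip(times, values))
    return (re * re + im * im) / n ** 2

# A sinusoid sampled at irregular times still peaks at its true frequency.
times = [0.37 * k + 0.05 * math.sin(k) for k in range(60)]
values = [math.cos(2 * math.pi * 0.3 * t) for t in times]
peak = deeming_power(times, values, 0.3)
off = deeming_power(times, values, 0.8)
print(peak > 10 * off)  # -> True
```

    Scanning trial frequencies around 1/4.83 per day is how a period close to the superrotation period would be picked out of the orbit-by-orbit wind measurements.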

  18. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement through combined use of real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  19. An electronically collimated gamma camera for single photon emission computed tomography. Part II: Image reconstruction and preliminary experimental measurements.

    PubMed

    Singh, M; Doria, D

    1983-01-01

    Iterative algorithms have been investigated for reconstructing images from data acquired with a new type of gamma camera based upon an electronic method of collimating gamma radiation. The camera is composed of two detection systems which record a sequential interaction of the emitted gamma radiation. Coincident counting in accordance with Compton scattering kinematics leads to a localization of activity upon a multitude of conical surfaces throughout the object. A two-stage reconstruction procedure in which conical line projection images as seen by each position sensing element of the first detector are reconstructed in the first stage, and tomographic images are reconstructed in the second stage, has been developed. Computer simulation studies of both stages and first-stage reconstruction studies with preliminary experimental data are reported. Experimental data were obtained with one detection element of a prototype germanium detector. A microcomputer based circuit was developed to record coincident counts between the germanium detector and an uncollimated conventional scintillation camera. Point sources of Tc-99m and Cs-137 were used to perform preliminary measurements of sensitivity and point spread function characteristics of electronic collimation. PMID:6604217

  20. Miniaturized thermal snapshot camera

    NASA Astrophysics Data System (ADS)

    Hornback, William B.; Payson, Ellwood; Linsacum, Deron L.; Ward, Kenneth; Kennedy, John; Myers, Leo; Cuadra, Dean; Li, Mark

    2003-01-01

    This paper reports on the development of a new class of thermal cameras. Known as the FLAsh STabilized (FLAST) thermal imaging camera system, these cameras are the first able to capture snapshot thermal images. Results from testing of the prototype unit are presented, along with the status of the design of a more efficient, miniaturized version for production. The camera is highly programmable in image capture method, shot sequence, and shot quantity. To achieve the ability to operate in a snapshot mode, the FLAST camera is designed to function without the need for cooling or other thermal regulation. In addition, the camera can operate over extended periods without the need for re-calibration. Thus, the camera does not require a shutter, chopper or other user-inserted imager-blocking system. The camera is capable of operating for weeks on standard AA batteries. The initial camera configuration provides an image resolution of 320 x 240 and is able to turn on and capture an image within approximately 1/4 s. The FLAST camera operates autonomously to collect, catalog and store over 500 images. Any interface and relay system capable of accepting video-formatted input can serve as the image download transmission system.

  1. Compact, rugged, and intuitive thermal imaging cameras for homeland security and law enforcement applications

    NASA Astrophysics Data System (ADS)

    Hanson, Charles M.

    2005-05-01

    Low cost, small size, low power uncooled thermal imaging sensors have completely changed the way the world views commercial law enforcement and military applications. Key applications include security, medical, automotive, power generation monitoring, manufacturing and process control, aerospace applications, defense, environmental and resource monitoring, maintenance monitoring and night vision. Commercial applications also include law enforcement and military special operations. Each application drives a unique set of requirements that include similar fundamental infrared technologies. Recently, in the uncooled infrared camera and microbolometer detector areas, major strides have been made in the design and manufacture of personal military and law enforcement sensors. L-3 Communications Infrared Products (L-3 IP) is producing a family of new products based on the amorphous silicon microbolometer with low cost, low power, high volume, wafer-level vacuum packaged silicon focal plane array technologies. These bolometer systems contain no choppers or thermoelectric coolers, require no manual calibration, and use readily available commercial off-the-shelf components. One such successful product is the Thermal-Eye X100xp. Extensive market needs analysis for these small hand held sensors has been validated by their quick acceptance into the law enforcement and military segments. Given how well this product has been received, L-3 IP has developed a strategic roadmap to improve and enhance its features and function, including upgrades such as the new 30-Hz, 30-µm pitch detector. This paper describes advances in bolometric focal plane arrays, optical and circuit card technologies while providing a glimpse into the future of micro hand held sensor growth. Also, technical barriers are addressed in light of constraints, lessons learned and boundary conditions.
One conclusion is that the Thermal Eye Silicon Bolometer technology simultaneously drives weight, cost, size, power, performance, producibility and design flexibility, each individually and all together - a must for the portable commercial law enforcement and military markets.

  2. Geologic map of the northern hemisphere of Vesta based on Dawn Framing Camera (FC) images

    NASA Astrophysics Data System (ADS)

    Ruesch, Ottaviano; Hiesinger, Harald; Blewett, David T.; Williams, David A.; Buczkowski, Debra; Scully, Jennifer; Yingst, R. Aileen; Roatsch, Thomas; Preusker, Frank; Jaumann, Ralf; Russell, Christopher T.; Raymond, Carol A.

    2014-12-01

    The Dawn Framing Camera (FC) has imaged the northern hemisphere of the Asteroid (4) Vesta at high spatial resolution and coverage. This study represents the first investigation of the overall geology of the northern hemisphere (22-90°N, quadrangles Av-1, 2, 3, 4 and 5) using these unique Dawn mission observations. We have compiled a morphologic map and performed crater size-frequency distribution (CSFD) measurements to date the geologic units. The hemisphere is characterized by a heavily cratered surface with a few highly subdued basins up to ~200 km in diameter. The most widespread unit is a plateau (cratered highland unit), similar to, although of lower elevation than the equatorial Vestalia Terra plateau. Large-scale troughs and ridges have regionally affected the surface. Between ~180°E and ~270°E, these tectonic features are well developed and related to the south pole Veneneia impact (Saturnalia Fossae trough unit), elsewhere on the hemisphere they are rare and subdued (Saturnalia Fossae cratered unit). In these pre-Rheasilvia units we observed an unexpectedly high frequency of impact craters up to ~10 km in diameter, whose formation could in part be related to the Rheasilvia basin-forming event. The Rheasilvia impact has potentially affected the northern hemisphere also with S-N small-scale lineations, but without covering it with an ejecta blanket. Post-Rheasilvia impact craters are small (<60 km in diameter) and show a wide range of degradation states due to impact gardening and mass wasting processes. Where fresh, they display an ejecta blanket, bright rays and slope movements on walls. In places, crater rims have dark material ejecta and some crater floors are covered by ponded material interpreted as impact melt.

  3. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is provided. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; however, the technique needs adjustment for use with room-temperature cameras. This article describes the adjustment made to the equation and a test of this method.
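
    A mean-variance (photon-transfer) gain estimate with a dark-frame correction can be sketched as follows (a generic form of such an adjustment, not necessarily the article's exact equation; the four-pixel frames are purely illustrative):

```python
def camera_gain(flat1, flat2, dark1, dark2):
    """Photon-transfer estimate of gain K in electrons per digital number.

    flat1, flat2: two identically exposed flat-field frames (pixel lists);
    dark1, dark2: two dark frames taken with the same exposure time.
    Differencing each frame pair cancels fixed-pattern noise; the
    shot-noise relation variance = mean_signal / K then gives K once the
    dark current's contribution to both mean and variance is removed.
    """
    n = len(flat1)

    def mean(xs):
        return sum(xs) / len(xs)

    def pair_variance(a, b):
        # Per-frame temporal variance, from the difference of a frame pair.
        diff = [x - y for x, y in zip(a, b)]
        m = mean(diff)
        return sum((v - m) ** 2 for v in diff) / (2 * (n - 1))

    signal = mean(flat1 + flat2) - mean(dark1 + dark2)
    variance = pair_variance(flat1, flat2) - pair_variance(dark1, dark2)
    return signal / variance

# Tiny deterministic example (real measurements use thousands of pixels):
k = camera_gain([10, 12, 10, 12], [12, 10, 12, 10],
                [1, 1, 1, 1], [1, 1, 1, 1])
print(round(k, 6))  # -> 3.75
```

    For a cooled sensor the dark terms are negligible and this reduces to the plain mean/variance estimate; at room temperature the dark subtraction is what keeps the estimate unbiased.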

  4. A JPEG-Like Algorithm for Compression of Single-Sensor Camera Image

    E-print Network

    Paris-Sud XI, Université de

    of expression through social networks, blogs, emails and so on. It has allowed users to avoid thinking before solution for this problem and is widely used for digital cameras, smartphones, webcams, etc. The used

  5. Dual charge-coupled device /CCD/, astronomical spectrometer and direct imaging camera. II - Data handling and control systems

    NASA Astrophysics Data System (ADS)

    Dewey, D.; Ricker, G. R.

    The data collection system for the MASCOT (MIT Astronomical Spectrometer/Camera for Optical Telescopes) is described. The system relies on an RCA 1802 microprocessor-based controller, which serves to collect and format data, to present data to a scan converter, and to operate a device communication bus. A NOVA minicomputer is used to record and recall frame images and to perform refined image processing. The RCA 1802 also provides instrument mode control for the MASCOT. Commands are issued using STOIC, a FORTH-like language. Sufficient flexibility has been provided so that a variety of CCDs can be accommodated.

  6. A Simple Nonmydriatic Self-Retinal Imaging Procedure Using a Kowa Genesis-D Hand-Held Digital Fundus Camera

    PubMed Central

    Ozerdem, Ugur

    2009-01-01

    Research on vascular adaptation to microgravity in the central nervous system requires a simple, noninvasive, direct imaging technique that can be performed with compact equipment. In this report we describe a practical, nonmydriatic, retinal self-imaging technique using a Kowa Genesis-D hand-held digital camera and a Black and Decker laser level. This simple technique will be useful to clinical physiologists conducting microgravity research, as well as for the studies of high-altitude medicine and aviation physiology. PMID:19571602

  7. Camera Projector

    NSDL National Science Digital Library

    Oakland Discovery Center

    2011-01-01

    In this activity (posted on March 14, 2011), learners follow the steps to construct a camera projector to explore lenses and refraction. First, learners use relatively simple materials to construct the projector. Then, learners discover that lenses project images upside down and backwards. They explore this phenomenon by creating their own slides (which must be drawn upside down and backwards to appear normal). Use this activity to also introduce learners to spherical aberration and chromatic aberration.

  8. UVUDF: Ultraviolet Imaging of the Hubble Ultra Deep Field with Wide-Field Camera 3

    NASA Astrophysics Data System (ADS)

    Teplitz, Harry I.; Rafelski, Marc; Kurczynski, Peter; Bond, Nicholas A.; Grogin, Norman; Koekemoer, Anton M.; Atek, Hakim; Brown, Thomas M.; Coe, Dan; Colbert, James W.; Ferguson, Henry C.; Finkelstein, Steven L.; Gardner, Jonathan P.; Gawiser, Eric; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel J.; Lee, Kyoung-Soo; de Mello, Duilia F.; Ravindranath, Swara; Ryan, Russell E.; Siana, Brian D.; Scarlata, Claudia; Soto, Emmaris; Voyer, Elysse N.; Wolfe, Arthur M.

    2013-12-01

    We present an overview of a 90-orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5; (2) probe the evolution of massive galaxies by resolving sub-galactic units (clumps); (3) examine the escape fraction of ionizing radiation from galaxies at z ~ 2-3; (4) greatly improve the reliability of photometric redshift estimates; and (5) measure the star formation rate efficiency of neutral atomic-dominated hydrogen gas at z ~ 1-3. In this overview paper, we describe the survey details and data reduction challenges, including both the necessity of specialized calibrations and the effects of charge transfer inefficiency. We provide a stark demonstration of the effects of charge transfer inefficiency on resultant data products, which, when uncorrected, result in uncertain photometry, elongation of morphology in the readout direction, and loss of faint sources far from the readout. We agree with the STScI recommendation that future UVIS observations that require very sensitive measurements use the instrument's capability to add background light through a "post-flash." Preliminary results on number counts of UV-selected galaxies and morphology of galaxies at z ~ 1 are presented. We find that the number density of UV dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching a depth of 28-29 mag at 5σ in a 0.″2 radius aperture, depending on filter and observing epoch. 
Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #12534.

  9. 4 Vesta in Color: Lithologic heterogeneity from Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

    Nathues, A.; Christensen, U. R.; Reddy, V.; LeCorre, L.; Sierks, H.; Pieters, C. M.; Gaffey, M.; Denevi, B. W.; DeSanctis, C.; Hoffmann, M.; Schroeder, S.; Vincent, J.; Russell, C. T.; Raymond, C. A.; Jaumann, R.; Keller, H.; Mottola, S.; Neukum, G.; McCord, T. B.; Hiesinger, H.; Sunshine, J. M.; Gutierrez-Marques, P.; Maue, T.; Hall, I.; the Dawn Science Team

    2011-12-01

    4 Vesta is the largest differentiated asteroid that is still mostly intact today and is considered to be a model for the initial stages of planetary differentiation. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a yearlong global characterization. The Framing Cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear and seven narrow band filters covering the wavelength range between 0.4-1.0 μm. We present results from the Dawn FC color observations of 4 Vesta obtained during Survey and HAMO orbit. Our aim is to describe and explain the surface compositional diversity of 4 Vesta and the processes acting on it, in the context of its geology and at high spatial resolution. Mineral characterization will come from an integration of FC data with VIR spectroscopy. Issues to be addressed include: a) what are the distinct color units on Vesta and how do these color variations relate to large-scale geologic features of the surface? b) can known HED meteorites serve as a ground truth for some identified color units? c) what new or different lithologies not present in the terrestrial meteorite collections might occur on Vesta and what is their geologic context? d) do color differences exist on a local scale, and are they correlated with geologic features? In order to extract the maximum information we will apply a variety of analysis tools to the initial Dawn multicolor data. These include standard imaging processing procedures (PCA, mixing analyses, etc.) as well as evaluation and manipulation of color ratios and spectral parameters that are linked to mineralogy (HED components, etc.) and/or surface processes (space weathering, texture, etc.). 
We also intend to evaluate software tools such as "DawnKey", an automated lithology identification software that allows us to identify spectrally distinct units by using a HED and non-HED spectral library, and an Automated Spectral System Analyzer, a terrain mapping system based on pyroxene mineral chemistry, developed at MPS, to refine mapping approaches and integrate them with mineralogical analyses from Dawn VIR spectroscopic data.
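
The kind of spectral-parameter mapping described above (color ratios and band parameters linked to mineralogy) can be illustrated with a simple band-depth calculation: fit a linear continuum across two shoulder bands and divide out the reflectance at the absorption center. This is a generic sketch, not the Dawn team's pipeline; the function name and wavelength choices below are hypothetical:

```python
import numpy as np

def band_depth_map(cube, wavelengths, shoulder_a, center, shoulder_b):
    """Band-depth image from a multispectral cube.

    cube: (bands, rows, cols) reflectance; wavelengths: per-band centers (um).
    shoulder_a/center/shoulder_b: wavelengths of the continuum shoulders and
    the absorption center (nearest bands are used).
    """
    w = np.asarray(wavelengths, dtype=float)
    ia, ic, ib = (np.argmin(np.abs(w - x)) for x in (shoulder_a, center, shoulder_b))

    # Linear continuum between the two shoulders, evaluated at the band center.
    t = (w[ic] - w[ia]) / (w[ib] - w[ia])
    continuum = (1 - t) * cube[ia] + t * cube[ib]

    # Deeper absorption -> larger band depth (0 means no absorption).
    return 1.0 - cube[ic] / continuum
```

Pixels with large values in the returned map would flag candidate pyroxene-rich (HED-like) color units for comparison with VIR spectra.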

  10. Multiformat video and laser cameras: history, design considerations, acceptance testing, and quality control. Report of AAPM Diagnostic X-Ray Imaging Committee Task Group No. 1.

    PubMed

    Gray, J E; Anderson, W F; Shaw, C C; Shepard, S J; Zeremba, L A; Lin, P J

    1993-01-01

    Acceptance testing and quality control of video and laser cameras is relatively simple, especially with the use of the SMPTE test pattern. Photographic quality control is essential if one wishes to be able to maintain the quality of video and laser cameras. In addition, photographic quality control must be carried out with the film used clinically in the video and laser cameras, and with a sensitometer producing a light spectrum similar to that of the video or laser camera. Before the end of the warranty period a second acceptance test should be carried out. At this time the camera should produce the same results as noted during the initial acceptance test. With the appropriate acceptance and quality control the video and laser cameras should produce quality images throughout the life of the equipment. PMID:8497235

  11. Large-format imaging plate and weissenberg camera for accurate protein crystallographic data collection using synchrotron radiation.

    PubMed

    Sakabe, K; Sasaki, K; Watanabe, N; Suzuki, M; Wang, Z G; Miyahara, J; Sakabe, N

    1997-05-01

    Off-line and on-line protein data-collection systems using an imaging plate as a detector are described and their components reported. The off-line scanner IPR4080 was developed for a large-format imaging plate 'BASIII' of dimensions 400 x 400 mm and 400 x 800 mm. The characteristics of this scanner are a dynamic range of 10^5 photons pixel^-1, low background noise and high sensitivity. A means of reducing electronic noise and a method for finding the origin of the noise are discussed in detail. A dedicated screenless Weissenberg camera matching IPR4080 with synchrotron radiation was developed and installed on beamline BL6B at the Photon Factory. This camera can attach one or two sheets of 400 x 800 mm large-format imaging plate inside the film cassette by evacuation. The positional reproducibility of the imaging plate on the cassette is good enough that the data can be processed by batch job. Data 93% complete to 1.6 Å resolution were collected from a tetragonal lysozyme crystal in a single rotation-axis pass, with an R(merge) of 4%, using a set of two imaging-plate sheets. Comparing two types of imaging plates, the signal-to-noise ratio of the ST-VIP-type imaging plate is 25% better than that of the BASIII-type imaging plate for protein data collection using 1.0 and 0.7 Å X-rays. A new on-line protein data-collection system with imaging plates is specially designed to use synchrotron radiation X-rays at maximum efficiency. PMID:16699220

  12. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades

    NASA Astrophysics Data System (ADS)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  13. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades.

    PubMed

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed. PMID:25554314

  14. A sniffer-camera for imaging of ethanol vaporization from wine: the effect of wine glass shape.

    PubMed

    Arakawa, Takahiro; Iitani, Kenta; Wang, Xin; Kajiro, Takumi; Toma, Koji; Yano, Kazuyoshi; Mitsubayashi, Kohji

    2015-04-21

    A two-dimensional imaging system (Sniffer-camera) for visualizing the concentration distribution of ethanol vapor emitting from wine in a wine glass has been developed. This system provides image information of ethanol vapor concentration using chemiluminescence (CL) from an enzyme-immobilized mesh. This system measures ethanol vapor concentration as CL intensities from luminol reactions induced by alcohol oxidase and a horseradish peroxidase (HRP)-luminol-hydrogen peroxide system. Conversion of ethanol distribution and concentration to two-dimensional CL was conducted using an enzyme-immobilized mesh containing an alcohol oxidase, horseradish peroxidase, and luminol solution. The temporal changes in CL were detected using an electron multiplier (EM)-CCD camera and analyzed. We selected three types of glasses (a wine glass, a cocktail glass, and a straight glass) to determine the differences in ethanol emission caused by the shape effects of the glass. The emission measurements of ethanol vapor from wine in each glass were successfully visualized, with pixel intensity reflecting ethanol concentration. Of note, a characteristic ring shape attributed to high alcohol concentration appeared near the rim of the wine glass containing 13 °C wine. Thus, the alcohol concentration in the center of the wine glass was comparatively lower. The Sniffer-camera was demonstrated to be sufficiently useful for non-destructive ethanol measurement for the assessment of food characteristics. PMID:25756409

  15. Omnifocus video camera

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2011-04-01

    The omnifocus video camera takes videos in which objects at different distances are all in focus in a single video display. The omnifocus video camera consists of an array of color video cameras combined with a unique distance mapping camera called the Divcam. The color video cameras are all aimed at the same scene, but each is focused at a different distance. The Divcam provides real-time distance information for every pixel in the scene. A pixel selection utility uses the distance information to select individual pixels from the multiple video outputs focused at different distances, in order to generate the final single video display that is everywhere in focus. This paper presents the principle of operation, design considerations, detailed construction, and overall performance of the omnifocus video camera. The major emphasis of the paper is the proof of concept, but the prototype has been developed enough to demonstrate the superiority of this video camera over a conventional video camera. The resolution of the prototype is high, capturing even fine details such as fingerprints in the image. Just as the movie camera was a significant advance over the still camera, the omnifocus video camera represents a significant advance over all-focus cameras for still images.
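
    The per-pixel selection step described above can be sketched as follows: each camera in the array is focused at a known distance, and the depth map picks, for every pixel, the stream whose focus distance is closest to the measured distance. This is a schematic reconstruction, not the authors' implementation; the function name and array layout are assumptions:

    ```python
    import numpy as np

    def omnifocus_composite(frames, focus_distances, depth_map):
        """Compose an all-in-focus frame from differently focused streams.

        frames: (n, H, W, 3) simultaneous frames, frames[i] focused at
                focus_distances[i]; depth_map: (H, W) per-pixel distance
                (the Divcam-style measurement).
        """
        dists = np.asarray(focus_distances, dtype=float)[:, None, None]  # (n,1,1)

        # For each pixel, index of the camera whose focus plane is nearest.
        best = np.argmin(np.abs(depth_map[None] - dists), axis=0)        # (H,W)

        # Gather the chosen pixel from the corresponding stream.
        h, w = depth_map.shape
        rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        return frames[best, rows, cols]                                   # (H,W,3)
    ```

    In a real system the selection would run per video frame at full rate; the sketch only shows the gather for a single frame.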

  16. Security camera video authentication

    Microsoft Academic Search

    D. K. Roberts

    2002-01-01

    The ability to authenticate images captured by a security camera, and localise any tampered areas, will increase the value of these images as evidence in a court of law. This paper outlines the challenges in security camera video authentication, and discusses the reasons why fingerprinting, a robust type of digital signature, provides a solution preferable to semi-fragile watermarking. A fingerprint

  17. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  18. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
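
    The user-defined linear sequential-loop filtering described above amounts to composing filters in a fixed order and applying them one frame at a time, with no intermediate buffers retained. A minimal sketch of that pattern (the filter names are illustrative, not the actual μAVS2 filter set):

    ```python
    import numpy as np

    def make_pipeline(filters):
        """Return a function applying the filters in the given order, per frame."""
        def process(frame):
            # Linear sequential loop: each filter's output feeds the next;
            # only the current frame is held in memory.
            for f in filters:
                frame = f(frame)
            return frame
        return process

    # Example filters operating on a grayscale frame (2-D uint8 array).
    def invert(frame):
        return 255 - frame

    def threshold(frame, t=128):
        return np.where(frame >= t, 255, 0).astype(frame.dtype)

    # Users could reorder or repeat entries in this list to change perception.
    pipeline = make_pipeline([invert, threshold])
    ```

    The design choice matches the abstract's point: a sequential loop needs only one frame buffer, keeping memory and CPU requirements low on a small embedded platform.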

  19. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future. PMID:20210459

  20. Camera based texture mapping: 3D applications for 2D images

    E-print Network

    Bowden, Nathan Charles

    2005-08-29

    Digital Workspace While Norman Dawn is credited as the father of matte painting, it was Walt Disney's studio that pioneered multiplane animation. The state-of-the-art in 1937 was a 14 foot tall framing and camera system that was tremendously expensive... "viewing a strip of live-action test film in the camera aperture" (Vaz and Barron 2002). Another variation of this technique was featured in the 1924 film, Master of Women, while Dawn was working at the Louis B. Mayer film company. The final composite...

  1. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    Microsoft Academic Search

    Clément Viard; Kiyoko Nakashima; Barbara Lamory; Michel Pâques; Xavier Levecq; Nicolas Château

    2011-01-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first

  2. A prototype small CdTe gamma camera for radioguided surgery and other imaging applications

    Microsoft Academic Search

    Makoto Tsuchimochi; Harumi Sakahara; Kazuhide Hayama; Minoru Funaki; Ryoichi Ohno; Takashi Shirahata; Terje Orskaug; Gunnar Maehlum; Koki Yoshioka; Einar Nygard

    2003-01-01

    Gamma probes have been used for sentinel lymph node biopsy in melanoma and breast cancer. However, these probes can provide only radioactivity counts and variable pitch audio output based on the intensity of the detected radioactivity. We have developed a small semiconductor gamma camera (SSGC) that allows visualisation of the size, shape and location of the target tissues. This study

  3. A mobile gamma camera system for 3D acute myocardial perfusion imaging

    Microsoft Academic Search

    M. Persson; D. Bone; L.-A. Brodin; S. Dale; C. Lindstrom; T. Ribbe; H. Elmqvist

    1997-01-01

    A mobile tomographic gamma camera has been developed to enable three-dimensional perfusion studies in the emergency room and in the intensive care environment. The system, Cardioatom, is based on the limited view angle method Ectomography and comprises a modern detector head equipped with a rotating slant hole collimator and specially developed hardware and software for data acquisition, processing and display.

  4. “SIMPLE”—a novel concept of an imaging camera for space applications

    Microsoft Academic Search

    Daniel Ferenc

    2003-01-01

    A novel concept of a large camera for space applications is proposed. It comprises extremely light and inexpensive construction, and very high photon detection efficiency. The essential point of the concept is the evasion of any vacuum-sealing constructional elements, and the exploitation of vacuum in space. This concept was developed particularly for the next-generation experiments aimed to detect extensive air

  5. Low-cost camera modifications and methodologies for very-high-resolution digital images

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...

  6. Face Shape Reconstruction from Image Sequence Taken with Monocular Camera using Shape Database

    Microsoft Academic Search

    Hideo Saito; Yosuke Ito; Masaaki Mochimaru

    2007-01-01

    We propose a method for reconstructing 3D face shape from a camera, which captures the object face from various viewing angles. In this method, we do not directly reconstruct the shape, but estimate a small number of parameters which represent the face shape. The parameter space is constructed with principal component analysis of a database of a large number of face

  7. Experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype for partial breast irradiation

    SciTech Connect

    Ravi, Ananth; Caldwell, Curtis B.; Pignol, Jean-Philippe [Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M4N 3M5 (Canada); Department of Medical Biophysics and Department of Medical Imaging, University of Toronto, Toronto, Ontario, M4N 3M5 (Canada) and Department of Medical Physics, Sunnybrook Health Sciences Centre, Toronto, Ontario, M4N 3M5 (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M4N 3M5 (Canada) and Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario, M4N 3M5 (Canada)

    2008-06-15

    Previously, our team used Monte Carlo simulation to demonstrate that a gamma camera could potentially be used as an online image guidance device to visualize seeds during permanent breast seed implant procedures. This could allow for intraoperative correction if seeds have been misplaced. The objective of this study is to describe an experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype. The OGIPSI device is intended to be able to detect a seed misplacement of 5 mm or more within an imaging time of 2 min or less. The device was constructed by fitting a custom built brass collimator (16 mm height, 0.65 mm hole pitch, 0.15 mm septal thickness) on a 64 pixel linear array CZT detector (eValuator-2000, eV Products, Saxonburg, PA). Two-dimensional projection images of seed distributions were acquired by the use of a digitally controlled translation stage. Spatial resolution and noise characteristics of the detector were measured. The ability and time needed for the OGIPSI device to image the seeds and to detect cold spots was tested using an anthropomorphic breast phantom. Mimicking a real treatment plan, a total of 52 ¹⁰³Pd seeds of 65.8 MBq each were placed on three different layers at appropriate depths within the phantom. The seeds were reliably detected within 30 s with a median error in localization of 1 mm. In conclusion, an OGIPSI device can potentially be used for image guidance of permanent brachytherapy applications in the breast and, possibly, other sites.

  8. Experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype for partial breast irradiation.

    PubMed

    Ravi, Ananth; Caldwell, Curtis B; Pignol, Jean-Philippe

    2008-06-01

    Previously, our team used Monte Carlo simulation to demonstrate that a gamma camera could potentially be used as an online image guidance device to visualize seeds during permanent breast seed implant procedures. This could allow for intraoperative correction if seeds have been misplaced. The objective of this study is to describe an experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype. The OGIPSI device is intended to be able to detect a seed misplacement of 5 mm or more within an imaging time of 2 min or less. The device was constructed by fitting a custom built brass collimator (16 mm height, 0.65 mm hole pitch, 0.15 mm septal thickness) on a 64 pixel linear array CZT detector (eValuator-2000, eV Products, Saxonburg, PA). Two-dimensional projection images of seed distributions were acquired by the use of a digitally controlled translation stage. Spatial resolution and noise characteristics of the detector were measured. The ability and time needed for the OGIPSI device to image the seeds and to detect cold spots was tested using an anthropomorphic breast phantom. Mimicking a real treatment plan, a total of 52 103Pd seeds of 65.8 MBq each were placed on three different layers at appropriate depths within the phantom. The seeds were reliably detected within 30 s with a median error in localization of 1 mm. In conclusion, an OGIPSI device can potentially be used for image guidance of permanent brachytherapy applications in the breast and, possibly, other sites. PMID:18649481
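
    Localizing a seed on a 2-D projection image to millimeter accuracy, as evaluated above, is commonly done by thresholding and taking an intensity-weighted centroid, which yields sub-pixel positions. The sketch below illustrates that generic technique; it is not the OGIPSI code, and the function name and threshold scheme are assumptions:

    ```python
    import numpy as np

    def localize_seed(image, threshold):
        """Return the (row, col) intensity-weighted centroid of a single seed.

        image: 2-D projection image; threshold: counts level separating the
        seed signal from background.
        """
        img = image.astype(float)

        # Keep only pixels clearly above background.
        weights = np.where(img > threshold, img, 0.0)
        total = weights.sum()

        # Intensity-weighted centroid gives a sub-pixel position estimate.
        rows, cols = np.indices(img.shape)
        return (rows * weights).sum() / total, (cols * weights).sum() / total
    ```

    With multiple seeds, the same centroid step would be applied per connected component after labeling; comparing recovered positions against the treatment plan flags misplacements.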

  9. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and the number of cases tends to increase as the life expectancy of the population rises. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain; when it is damaged, the patient's visual quality is compromised. In most cases the damage to the optic nerve is irreversible and results from increased intraocular pressure. A major diagnostic challenge is detecting the disease early, because no symptoms are present in the initial stage; by the time it is detected, it is often already advanced. Currently, evaluation of the optic disc is performed with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or red-free systems, to capture 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurement of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are being developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in both mydriatic and nonmydriatic modes.

  10. Depth and all-in-focus imaging by a multi-line-scan light-field camera

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Soukup, Daniel; Holländer, Branislav; Huber-Mörk, Reinhold

    2014-09-01

    We present a multi-line-scan light-field image acquisition and processing system designed for 2.5/3-D inspection of fine surface structures in industrial environments. The acquired three-dimensional light field is composed of multiple observations of an object viewed from different angles. The acquisition system consists of an area-scan camera that allows for a small number of sensor lines to be extracted at high frame rates, and a mechanism for transporting an inspected object at a constant speed and direction. During acquisition, an object is moved orthogonally to the camera's optical axis as well as the orientation of the sensor lines and a predefined subset of lines is read out from the sensor at each time step. This allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based depth estimation. We compare several approaches based on testing a set of slope hypotheses in the EPI domain. Hypotheses are derived from block matching, namely the sum of absolute differences, modified sum of absolute differences, normalized cross correlation, census transform, and modified census transform. Results for depth estimation and all-in-focus image generation are presented for synthetic and real data.
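
    The slope-hypothesis testing described above can be sketched with the simplest of the listed costs, the sum of absolute differences (SAD): a scene point traces a line in the epipolar-plane image whose slope encodes its depth, so for each candidate slope we sample a small window along the hypothesized line in every view and accumulate its difference against the reference view. This is a schematic sketch, not the authors' implementation; the function name and windowing are assumptions:

    ```python
    import numpy as np

    def epi_slope_sad(epi, x, slopes, half_win=2):
        """Return the slope hypothesis minimizing SAD at column x of an EPI.

        epi: (n_views, width) grayscale epipolar-plane image;
        slopes: candidate slopes (columns shifted per view step).
        """
        n_views, width = epi.shape
        v0 = n_views // 2  # reference (central) view
        ref = epi[v0, x - half_win : x + half_win + 1].astype(float)

        best_slope, best_cost = None, np.inf
        for s in slopes:
            cost = 0.0
            for v in range(n_views):
                # Column where the hypothesized EPI line crosses view v.
                xc = int(round(x + s * (v - v0)))
                if xc - half_win < 0 or xc + half_win >= width:
                    cost = np.inf  # hypothesis leaves the image; reject it
                    break
                cost += np.abs(epi[v, xc - half_win : xc + half_win + 1] - ref).sum()
            if cost < best_cost:
                best_cost, best_slope = cost, s
        return best_slope
    ```

    The other listed costs (NCC, census, and their variants) drop into the same loop by replacing the SAD accumulation; the winning slope per pixel is then converted to depth, and the samples along that line can be averaged for the all-in-focus image.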

  11. Usage of cornea and sclera back reflected images captured in security cameras for forensic and card games applications

    NASA Astrophysics Data System (ADS)

    Zalevsky, Zeev; Ilovitsh, Asaf; Beiderman, Yevgeny

    2013-10-01

    We present an approach that allows seeing objects that are hidden and not positioned in the direct line of sight of security inspection cameras. The approach is based on inspecting the back reflections obtained from the cornea and sclera of the eyes of people present in the inspected scene who are positioned in front of the hidden objects we aim to image, after performing proper calibration with a point light source (e.g., an LED). The scene can be a forensic scene or, for instance, a casino in which the application is to see the cards of poker players seated in front of you.

  12. Camera Obscura

    NSDL National Science Digital Library

    Mr. Engelman

    2008-10-28

    Before photography was invented there was the camera obscura, useful for studying the sun, as an aid to artists, and for general entertainment. What is a camera obscura and how does it work? Camera = Latin for room; Obscura = Latin for dark. But what is a Camera Obscura? The Magic Mirror of Life. What is a camera obscura? A French drawing camera with supplies; drawing camera obscuras with lens at the top. Read the first three paragraphs of this article. Under the portion Early Observations and Use in Astronomy you will find the answers to the ...

  13. Latest developments in the iLids performance standard: from multiple standard camera views to new imaging modalities

    NASA Astrophysics Data System (ADS)

    Sage, K. H.; Nilski, A. J.; Sillett, I. M.

    2009-09-01

    The Imagery Library for Intelligent Detection Systems (iLids) is the UK Government's standard for Video Based Detection Systems (VBDS). The first four iLids scenarios were released in November 2006, and annual evaluations for these four scenarios began in 2007. The Home Office Scientific Development Branch (HOSDB), in partnership with the Centre for the Protection of National Infrastructure (CPNI), has also developed a fifth iLids scenario: Multiple Camera Tracking (MCT). The fifth scenario data sets were made available in November 2008 to industry, academic and commercial research organizations. The imagery contains various staged events of people walking through the camera views. Multiple Camera Tracking Systems (MCTS) are expected to initialise on a specific target and be able to track the target over some or all of the camera views. HOSDB and CPNI are now working on a sixth iLids dataset series. These datasets will cover several technology areas: • Thermal imaging systems • Systems that rely on active IR illumination. The aim is to develop libraries that promote the development of systems able to demonstrate effective performance in the key application area of people and vehicle detection at a distance. This paper will: • Describe the evaluation process, infrastructure and tools that HOSDB will use to evaluate MCT systems. Building on the success of our previous automated evaluation tools, HOSDB has developed the MCT evaluation tool CLAYMORE, a tool for the real-time evaluation of MCT systems. • Provide an overview of the new sixth scenario aims and objectives, library specifications and timescales for release.

  14. Adaptive Optics Imaging at 1-5 Microns on Large Telescopes:The COMIC Camera for ADONIS

    NASA Astrophysics Data System (ADS)

    Lacombe, F.; Marco, O.; Geoffray, H.; Beuzit, J. L.; Monin, J. L.; Gigan, P.; Talureau, B.; Feautrier, P.; Petmezakis, P.; Bonaccini, D.

    1998-09-01

    A new 1-5 μm high-resolution camera dedicated to the ESO adaptive optics system ADONIS has been developed as a collaborative project of Observatoire de Paris-Meudon and Observatoire de Grenoble, under ESO contract. Since this camera has been designed to correctly sample the diffraction limit, two focal plane scales are available: 36 mas pixel^-1 for the 1-2.5 μm range and 100 mas pixel^-1 for the 3-5 μm range, yielding fields of view of 4.5"x4.5" and 12.8"x12.8", respectively. Several broadband and narrowband filters are available, as well as two circular variable filters, allowing low-spectral-resolution (R~60-120) imagery between 1.2 and 4.8 μm. This camera is equipped with a 128×128 HgCdTe/CCD array detector built by CEA-LETI-LIR (Grenoble, France). Among its main characteristics, this detector offers a remarkably high storage capacity (more than 10^6 electrons) with a total system readout noise of ~1000 electrons rms, making it particularly well suited for long-integration-time imagery in the 3-5 μm range of the near-infrared domain. The measured dark current is 2000 electrons s^-1 pixel^-1 at the regular operating temperature of 77 K, allowing long exposure times at short wavelengths (λ < 3 μm), where the performance is readout-noise limited. At longer wavelengths (λ > 3 μm), the performance is background-noise limited. We have estimated the ADONIS + COMIC imaging performance using a method specially dedicated to high-angular-resolution cameras.

  15. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard briefcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP; telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.

  16. Development of Electron Tracking Compton Camera using micro pixel gas chamber for medical imaging

    Microsoft Academic Search

    Shigeto Kabuki; Kaori Hattori; Ryota Kohara; Etsuo Kunieda; Atsushi Kubo; Hidetoshi Kubo; Kentaro Miuchi; Tadaki Nakahara; Tsutomu Nagayoshi; Hironobu Nishimura; Yoko Okada; Reiko Orito; Hiroyuki Sekiya; Takashi Shirahata; Atsushi Takada; Toru Tanimori; Kazuki Ueno

    2007-01-01

    We have developed the Electron Tracking Compton Camera (ETCC), which reconstructs the 3-D tracks of the electron scattered in the Compton process, for both sub-MeV and MeV gamma rays. By measuring the directions and energies of not only the scattered gamma ray but also the recoil electron, the direction of the incident gamma ray is determined for each individual photon.

  17. Measurement system based on a high-speed camera and image processing techniques for testing of spacecraft subsystems

    NASA Astrophysics Data System (ADS)

    Casarosa, Gianluca; Sarti, Bruno

    2005-03-01

    In the framework of the development of new Electrical Ground Support Equipment (EGSE) for the testing phase of a spacecraft and its subsystems, the Engineering Services Section (Testing Division, Mechanical Systems Department) at the European Space Research and Technology Centre (ESTEC) has started an investigation aiming to verify the performance of a contact-less measurement system based on a high-speed camera and image processing techniques. This system shall be used as an additional tool during future test campaigns held at ESTEC whenever non-intrusive GSE is required. The system is based on a PhotronTM high-speed system, composed of a high-speed camera connected to its frame grabber via a Panel LinkTM bus and a software interface for camera control. Derivative filters and edge-detection techniques, such as the Sobel, Prewitt and Laplace operators, have been used for image enhancement and processing during the several test campaigns held to evaluate the measurement system. Detection of the movement of the specimen has been improved by sticking, where possible, one or more optical targets on the surface of the test article. The targets are of two types: one for ambient conditions and one qualified for vacuum. The performance of the measuring system has been evaluated and is summarized in this paper. The limitations of the proposed tool have been assessed, together with the identification of possible scenarios where this system would be useful and could be applied to increase the effectiveness of the verification phase of a spacecraft subsystem.
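    As an illustration of the edge-detection step mentioned above, here is a minimal Sobel gradient-magnitude filter on a plain 2-D list of grey values. This is a generic sketch of the operator, not the commercial image-processing software used in the test campaigns; border pixels are simply left at zero.

    ```python
    def sobel_magnitude(img):
        """Approximate gradient magnitude with the 3x3 Sobel kernels.
        `img` is a 2-D list of grey values; border pixels stay 0."""
        kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal derivative
        ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical derivative
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                out[y][x] = (gx * gx + gy * gy) ** 0.5
        return out

    # A vertical step edge between columns 2 and 3:
    img = [[0, 0, 0, 10, 10, 10] for _ in range(5)]
    mag = sobel_magnitude(img)
    print(mag[2])  # large responses only at the step, columns 2-3
    ```

    A bright target stuck on the specimen produces exactly such step edges, which is why the trackers lock onto the target outline.
    
    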

  18. Long range 3D imaging with a 32×32 Geiger mode InGaAs/InP camera

    NASA Astrophysics Data System (ADS)

    Hiskett, Philip A.; Gordon, Karen J.; Copley, Jeremy W.; Lamb, Robert A.

    2014-05-01

    This paper reports the performance of a long-range 3D imaging system operating at a wavelength of 1550 nm and incorporating a Geiger-mode 32×32 InGaAs/InP array camera. A cross-correlation technique was used to mitigate range aliasing and thereby enable measurement of the absolute range to single or multiple surfaces within the instantaneous field of view of each pixel in the 2D array. The system uses a fibre-amplified laser source operating at an average pulse repetition rate of 125 kHz with pulse energies of 2.4 μJ per pulse. Measurements of the absolute range to remote manmade Lambertian surfaces and foliage at ranges up to 10 km, with range accuracy better than 4 cm, are reported. The simultaneous imaging and absolute range measurement of two remote manmade Lambertian surfaces separated by >1 km is also presented.
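    One way a cross-correlation scheme can resolve range aliasing is to jitter the pulse emission times: with a strictly regular 125 kHz train, candidate delays offset by any multiple of the pulse period match the detections equally well, but with jitter only the true time of flight aligns every detection with an emitted pulse. The sketch below is a simplified illustration under that assumption (pseudo-random jittered train, noiseless detections), not the authors' actual processing chain.

    ```python
    import random

    C = 3e8  # speed of light, m/s

    def absolute_range(emit, detect, candidates, tol=2e-9):
        """Cross-correlate detection times against emission times for a set
        of candidate time-of-flight delays; return the delay that explains
        the most detections."""
        best, best_hits = None, -1
        for tof in candidates:
            hits = sum(1 for t in detect
                       if any(abs(t - e - tof) < tol for e in emit))
            if hits > best_hits:
                best, best_hits = tof, hits
        return best

    random.seed(0)
    period = 8e-6               # 125 kHz mean repetition rate
    emit = [i * period + random.uniform(0, 1e-6) for i in range(50)]
    true_tof = 2 * 10_000 / C   # two-way time of flight for a 10 km target
    detect = [e + true_tof for e in emit]
    # For a regular train these candidates would all be aliases of each
    # other (they differ by whole pulse periods); the jitter breaks the tie.
    cands = [true_tof + k * period for k in range(-2, 3)]
    print(absolute_range(emit, detect, cands) == true_tof)  # -> True
    ```

    Note that `true_tof` (about 67 μs) is far longer than the 8 μs pulse period, which is exactly the aliased regime the paper addresses.
    
    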

  19. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food...Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the...

  20. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images at typically up to about 1 million frames s-1 at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and contains no moving parts, unlike, for instance, rotating-mirror UHS cameras. One of the disadvantages of this design is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space on the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the operating conditions which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique. From these tests, it has been possible to characterize the camera behaviour and use this information to improve actual measurements.

  1. Characterization of the luminance and shape of ash particles at Sakurajima volcano, Japan, using CCD camera images

    NASA Astrophysics Data System (ADS)

    Miwa, Takahiro; Shimano, Taketo; Nishimura, Takeshi

    2015-01-01

    We develop a new method for characterizing the properties of volcanic ash at the Sakurajima volcano, Japan, based on automatic processing of CCD camera images. Volcanic ash is studied in terms of both luminance and particle shape. A monochromatic CCD camera coupled with a stereomicroscope is used to acquire digital images through three filters that pass red, green, or blue light. On single ash particles, we measure the apparent luminance, corresponding to 256 tones for each color (red, green, and blue) for each pixel occupied by ash particles in the image, and the average and standard deviation of the luminance. The outline of each ash particle is captured from a digital image taken under transmitted light through a polarizing plate. Also, we define a new quasi-fractal dimension (D_qf) to quantify the complexity of the ash particle outlines. We examine two ash samples, each including about 1000 particles, which were erupted from the Showa crater of the Sakurajima volcano, Japan, on February 09, 2009 and January 13, 2010. The apparent luminance of each ash particle shows a lognormal distribution. The average luminance of the ash particles erupted in 2009 is higher than that of those erupted in 2010, which is in good agreement with the results obtained from component analysis under a binocular microscope (i.e., the number fraction of dark juvenile particles is lower for the 2009 sample). The standard deviations of apparent luminance have two peaks in the histogram, and the quasi-fractal dimensions show different frequency distributions between the two samples. These features are not recognized in the results of conventional qualitative classification criteria or the sphericity of the particle outlines. Our method can characterize and distinguish ash samples, even for ash particles that have gradual property changes, and is complementary to component analysis.
This method also enables the relatively fast and systematic analysis of ash samples that is required for petrologic monitoring of ongoing activity, such as at the Sakurajima volcano.
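    The paper's quasi-fractal dimension D_qf is its own definition, not spelled out in the abstract. A standard box-counting estimate illustrates the general idea of quantifying outline complexity as a dimension; the code below is a stand-in sketch under that assumption, not the authors' formula.

    ```python
    import math

    def box_counting_dimension(points, sizes):
        """Estimate a fractal (box-counting) dimension of a particle
        outline given as (x, y) points: count occupied grid boxes at
        several box sizes and fit log N(s) against log(1/s)."""
        logs, logn = [], []
        for s in sizes:
            boxes = {(int(x // s), int(y // s)) for x, y in points}
            logs.append(math.log(1.0 / s))
            logn.append(math.log(len(boxes)))
        # least-squares slope of log N versus log(1/s)
        n = len(sizes)
        mx, my = sum(logs) / n, sum(logn) / n
        num = sum((a - mx) * (b - my) for a, b in zip(logs, logn))
        den = sum((a - mx) ** 2 for a in logs)
        return num / den

    # A smooth outline (a circle) should come out near dimension 1;
    # a highly convoluted ash outline would score noticeably higher.
    circle = [(math.cos(t / 500 * 2 * math.pi) + 2,
               math.sin(t / 500 * 2 * math.pi) + 2) for t in range(500)]
    d = box_counting_dimension(circle, [0.05, 0.1, 0.2, 0.4])
    ```

    Running this on digitized particle outlines at several scales is what makes a per-sample frequency distribution of dimensions possible, as the abstract describes.
    
    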

  2. Reconstructing the landing trajectory of the CE-3 lunar probe by using images from the landing camera

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Jun; Yan, Wei; Li, Chun-Lai; Tan, Xu; Ren, Xin; Mu, Ling-Li

    2014-12-01

    An accurate determination of the landing trajectory of Chang'e-3 (CE-3) is significant for verifying orbital control strategy, optimizing orbital planning, accurately determining the landing site of CE-3 and analyzing the geological background of the landing site. Due to complexities involved in the landing process, there are some differences between the planned trajectory and the actual trajectory of CE-3. The landing camera on CE-3 recorded a sequence of the landing process with a frequency of 10 frames per second. These images recorded by the landing camera and high-resolution images of the lunar surface are utilized to calculate the position of the probe, so as to reconstruct its precise trajectory. This paper proposes using the method of trajectory reconstruction by Single Image Space Resection to make a detailed study of the hovering stage at a height of 100 m above the lunar surface. Analysis of the data shows that the closer CE-3 came to the lunar surface, the higher the spatial resolution of images that were acquired became, and the more accurately the horizontal and vertical position of CE-3 could be determined. The horizontal and vertical accuracies were 7.09 m and 4.27 m respectively during the hovering stage at a height of 100.02 m. The reconstructed trajectory can reflect the change in CE-3's position during the powered descent process. A slight movement in CE-3 during the hovering stage is also clearly demonstrated. These results will provide a basis for analysis of orbit control strategy, and it will be conducive to adjustment and optimization of orbit control strategy in follow-up missions.

  3. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec. to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully, to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.

  4. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  5. A Novel 24 Ghz One-Shot Rapid and Portable Microwave Imaging System (Camera)

    NASA Technical Reports Server (NTRS)

    Ghasr, M.T.; Abou-Khousa, M.A.; Kharkovsky, S.; Zoughi, R.; Pommerenke, D.

    2008-01-01

    A novel 2D microwave imaging system at 24 GHz based on MST techniques. Enhanced sensitivity and SNR achieved by utilizing PIN-diode-loaded resonant slots. Specific slot and array design to increase transmission and reduce cross-coupling. Real-time imaging at a rate in excess of 30 images per second. Reflection- as well as transmission-mode capabilities. Utility and application for electric field distribution mapping related to: nondestructive testing (NDT), imaging applications (SAR, holography), and antenna pattern measurements.

  6. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  7. Sliding-aperture multiview 3D camera-projector system and its application for 3D image transmission and IR to visible conversion

    NASA Astrophysics Data System (ADS)

    Shestak, Serguei A.; Son, Jung-Young; Jeon, Hyung-Wook; Komar, Victor G.

    1997-05-01

    A new architecture for a 3-D multiview camera and projector is presented. The camera's optical system consists of a single wide-aperture objective, a secondary (small) objective, a field lens and a scanner; the projector additionally includes a rear-projection pupil-forming screen. The system is intended for sequential acquisition and projection of 2-D perspective images while the small working aperture slides across the opening of the large objective lens or spherical mirror. Both horizontal-only and full-parallax imaging are possible. The system can transmit 3-D images in real time through fiber bundles, free space, and video image transmission lines, and can also be used for real-time conversion of infrared 3-D images. With this system, clear multiview stereoscopic images of a real scene can be displayed with a 30-degree viewing-zone angle.

  8. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C. (Albuquerque, NM)

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  9. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras.

    PubMed

    Heo, Yong Seok; Lee, Kyoung Mu; Lee, Sang Uk

    2013-05-01

    In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and enforcing color consistency between stereo images form a chicken-and-egg problem, since it is not trivial to achieve both goals simultaneously. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to a log-chromaticity color space, in which a linear relationship can be established while constructing a joint pdf of the transformed left and right color images. From this joint pdf, we estimate a linear function that relates the corresponding pixels in the stereo images. Based on this linear property, we present a new stereo matching cost combining Mutual Information (MI), the SIFT descriptor, and segment-based plane fitting to robustly find correspondences for stereo image pairs that undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which in turn boosts the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences. PMID:22868654
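    The key property exploited above, that a per-channel radiometric gain between the two views reduces to a simple constant shift, can be illustrated with a minimal log-chromaticity transform. This uses one common formulation (log R/G, log B/G); the paper's exact coordinates and gain values here are assumptions for the sketch.

    ```python
    import math

    def log_chromaticity(pixel, eps=1e-6):
        """Map an (R, G, B) pixel to 2-D log-chromaticity coordinates
        (log R/G, log B/G); a per-channel gain then becomes a constant
        additive offset, i.e. a linear relation between the two views."""
        r, g, b = (max(v, eps) for v in pixel)
        return (math.log(r / g), math.log(b / g))

    # Hypothetical per-channel gains modelling a radiometric difference
    # (exposure / white-balance change) between left and right cameras.
    left = [(120, 80, 60), (30, 200, 90), (10, 10, 240)]
    gain = (2.0, 0.5, 1.5)
    right = [tuple(v * k for v, k in zip(p, gain)) for p in left]

    # Every correspondence is shifted by the SAME offset in
    # log-chromaticity space, regardless of the pixel's color:
    offsets = []
    for p, q in zip(left, right):
        lp, lq = log_chromaticity(p), log_chromaticity(q)
        offsets.append((lq[0] - lp[0], lq[1] - lp[1]))
    ```

    That constant offset is what lets a single linear function, estimated from the joint pdf, relate corresponding pixels across the whole image pair.
    
    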

  10. Dynamic imaging with a triggered and intensified CCD camera system in a high-intensity neutron beam

    NASA Astrophysics Data System (ADS)

    Vontobel, P.; Frei, G.; Brunner, J.; Gildemeister, A. E.; Engelhardt, M.

    2005-04-01

    When time-dependent processes within metallic structures are to be inspected and visualized, neutrons are well suited due to their high penetration through Al, Ag, Ti or even steel. It then becomes possible to inspect the propagation, distribution and evaporation of organic liquids such as lubricants, fuel or water. The principal set-up of a suitable real-time system was implemented and tested at the radiography facility NEUTRA of PSI. The highest beam intensity there is 2×10^7 cm^-2 s^-1, which enables the observation of sequences in reasonable time and quality. The heart of the detection system is the MCP-intensified CCD camera PI-Max with a Peltier-cooled chip (1300×1340 pixels). The intensifier was used for both gating and image enhancement, whereas the information was accumulated over many single frames on the chip before readout. Although a 16-bit dynamic range is advertised by the camera manufacturer, the effective range must be less due to the inherent noise level of the intensifier. The obtained results should be seen as a starting point for addressing the different requirements of car manufacturers with respect to fuel injection, lubricant distribution, mechanical stability and operation control. Similar inspections will be possible for all devices with a repetitive operating principle. Here, we report on two measurements dealing with the lubricant distribution in a running motorcycle engine turning at 1200 rpm. We monitored the periodic stationary movements of the piston, valves and camshaft with a micro-channel-plate-intensified CCD camera system (PI-Max 1300RB, Princeton Instruments) triggered at exactly chosen time points.

  11. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

    In recent years, Unmanned Aerial Systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a platform with higher mobility and lower risk for human operators, but its low payload and short operation time reduce image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is mounted on a large-payload UAS designed to collect large-ground-coverage images efficiently. The field of view (FOV) is increased to 127 degrees, making the system suitable for collecting disaster images in mountainous areas. The five simultaneously acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to simplify stereo plotting. Instead of traditional image matching and bundle adjustment to estimate the transformation parameters, the interior orientation parameters (IOPs) and relative orientation parameters (ROPs) of the cameras are calibrated, and the coefficients of a modified projective transformation (MPT) model are derived for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs, owing to different environmental conditions and the vibration of the UAS, which causes misregistration in the initial MPT results. The remaining residuals are analysed through tie-point matching in the overlapping areas of the initial MPT results, from which displacements and scale differences are estimated and corrected to refine the ROPs and IOPs for finer registration. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. A comparison between the separate cameras and the mosaicked images through rigorous aerial triangulation shows that the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. 
    This proves that the designed imaging system and the proposed scheme have the potential to produce large-scale topographic maps.
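    At the core of any projective mosaicking step of this kind is mapping each camera's pixels through a 3×3 projective transform into the shared virtual image frame. A minimal sketch follows, with a hypothetical translation-only transform; in the actual MPT model the coefficients would come from the calibrated IOPs/ROPs, not be chosen by hand.

    ```python
    def apply_homography(H, x, y):
        """Map pixel (x, y) through a 3x3 projective transform H
        (row-major nested lists) into mosaic coordinates, dividing
        by the homogeneous coordinate w."""
        u = H[0][0] * x + H[0][1] * y + H[0][2]
        v = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        return u / w, v / w

    # Hypothetical transform: a pure translation by (1000, 0) placing a
    # side camera's image next to the nadir image in the virtual frame.
    H = [[1, 0, 1000],
         [0, 1, 0],
         [0, 0, 1]]
    print(apply_homography(H, 10, 20))  # -> (1010.0, 20.0)
    ```

    Tie-point residuals in the overlap areas measure how far such mapped pixels land from their matches, which is exactly the signal used to refine the transform.
    
    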

  12. Methane Band and Continuum Band Imaging of Titan's Atmosphere Using Cassini ISS Narrow Angle Camera Pictures from the CURE/Cassini Imaging Project

    NASA Astrophysics Data System (ADS)

    Shitanishi, Jennifer; Gillam, S. D.

    2009-05-01

    The study of Titan's atmosphere, which bears resemblance to early Earth's, may help us understand more of our own. Constructing a Monte Carlo model of Titan's atmosphere is helpful in achieving this goal. Methane (MT) and continuum band (CB) images of Titan taken by the CURE/Cassini Imaging Project, using the Cassini Narrow Angle Camera (NAC), were analyzed. They were scheduled by Cassini Optical Navigation. Images were obtained at phases 53°, 112°, 161°, and 165°. They include 22 total MT1 (center wavelength 619 nm), MT2 (727 nm), MT3 (889 nm), CB1 (635 nm), CB2 (751 nm), and CB3 (938 nm) images. They were reduced with previously written scripts using the National Optical Astronomy Observatory Image Reduction and Analysis Facility scientific analysis suite. Corrections for horizontal and vertical banding and cosmic-ray hits were made. The MT images were registered with the corresponding CB images to ensure that subsequently measured flux ratios came from the same parts of the atmosphere. Preliminary DN limb-to-limb scans and loci of the haze layers will be presented. Accurate estimates of the sub-spacecraft points on each picture will be presented. Flux ratios (F_MT/F_CB = Q0) along the scans and total absorption coefficients along the lines of sight from the spacecraft through the pixels (and into Titan) will also be presented.

  13. Electro-optical testing of fully depleted CCD image sensors for the Large Synoptic Survey Telescope camera

    NASA Astrophysics Data System (ADS)

    Doherty, Peter E.; Antilogus, Pierre; Astier, Pierre; Chiang, James; Gilmore, D. Kirk; Guyonnet, Augustin; Huang, Dajun; Kelly, Heather; Kotov, Ivan; Kubanek, Petr; Nomerotski, Andrei; O'Connor, Paul; Rasmussen, Andrew; Riot, Vincent J.; Stubbs, Christopher W.; Takacs, Peter; Tyson, J. Anthony; Vetter, Kurt

    2014-07-01

    The LSST Camera science sensor array will incorporate 189 large format Charge Coupled Device (CCD) image sensors. Each CCD will include over 16 million pixels and will be divided into 16 equally sized segments and each segment will be read through a separate output amplifier. The science goals of the project require CCD sensors with state of the art performance in many aspects. The broad survey wavelength coverage requires fully depleted, 100 micrometer thick, high resistivity, bulk silicon as the imager substrate. Image quality requirements place strict limits on the image degradation that may be caused by sensor effects: optical, electronic, and mechanical. In this paper we discuss the design of the prototype sensors, the hardware and software that has been used to perform electro-optic testing of the sensors, and a selection of the results of the testing to date. The architectural features that lead to internal electrostatic fields, the various effects on charge collection and transport that are caused by them, including charge diffusion and redistribution, effects on delivered PSF, and potential impacts on delivered science data quality are addressed.

  14. The Araucaria Project: The effect of blending on the Cepheid distance to NGC 300 from Advanced Camera for Surveys images

    E-print Network

    Bresolin, Fabio; Pietrzynski, Grzegorz; Gieren, Wolfgang; Kudritzki, Rolf-Peter

    2005-01-01

    We have used the Advanced Camera for Surveys aboard the Hubble Space Telescope to obtain F435W, F555W and F814W single-epoch images of six fields in the spiral galaxy NGC 300. Taking advantage of the superb spatial resolution of these images, we have tested the effect that blending of the Cepheid variables studied from the ground with close stellar neighbors, unresolved on the ground-based images, has on the distance determination to NGC 300. Out of the 16 Cepheids included in this study, only three are significantly affected by nearby stellar objects. After correcting the ground-based magnitudes for the contribution by these projected companions to the observed flux, we find that the corresponding Period-Luminosity relations in V, I and the Wesenheit magnitude W_I are not significantly different from the relations obtained without corrections. We fix an upper limit of 0.04 magnitudes to the systematic effect of blending on the distance modulus to NGC 300. As part of our HST imaging program, we present improv...

  15. The Araucaria Project: The Effect of Blending on the Cepheid Distance to NGC 300 from Advanced Camera for Surveys Images

    NASA Astrophysics Data System (ADS)

    Bresolin, Fabio; Pietrzyński, Grzegorz; Gieren, Wolfgang; Kudritzki, Rolf-Peter

    2005-12-01

    We have used the Advanced Camera for Surveys aboard the Hubble Space Telescope (HST) to obtain F435W, F555W, and F814W single-epoch images of six fields in the spiral galaxy NGC 300. Taking advantage of the superb spatial resolution of these images, we have tested the effect that blending of the Cepheid variables studied from the ground with close stellar neighbors, unresolved on the ground-based images, has on the distance determination to NGC 300. Out of the 16 Cepheids included in this study, only three are significantly affected by nearby stellar objects. After correcting the ground-based magnitudes for the contribution by these projected companions to the observed flux, we find that the corresponding period-luminosity relations in V, I, and the Wesenheit magnitude WI are not significantly different from the relations obtained without corrections. We fix an upper limit of 0.04 mag to the systematic effect of blending on the distance modulus to NGC 300. As part of our HST imaging program, we present improved photometry for 40 blue supergiants in NGC 300. Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. These observations are associated with program 9492.
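    The blending correction described above amounts to a standard flux-to-magnitude relation: if a companion contributes flux F_comp on top of the Cepheid's flux F_cep, the blended magnitude is too bright by 2.5 log10(1 + F_comp/F_cep). A minimal sketch of that relation follows; the flux ratio used in the example is hypothetical, not a value from the paper, and this is not the authors' actual photometry pipeline.

    ```python
    import math

    def blending_correction(flux_ratio):
        """Magnitude correction for a blend, where flux_ratio = F_companion / F_cepheid.

        The blended magnitude is too bright; adding this (positive) correction
        recovers the Cepheid's true, fainter magnitude.
        """
        return 2.5 * math.log10(1.0 + flux_ratio)

    # Hypothetical example: a companion contributing 10% of the Cepheid's flux
    dm = blending_correction(0.10)
    print(f"{dm:.3f} mag")  # ~0.104 mag correction for that single star
    ```

    Because only three of the sixteen Cepheids needed such corrections, the net effect on the fitted period-luminosity relations, and hence on the distance modulus, stays below the 0.04 mag upper limit quoted above.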

  16. The Araucaria Project: The effect of blending on the Cepheid distance to NGC 300 from Advanced Camera for Surveys images

    E-print Network

    Fabio Bresolin; Grzegorz Pietrzynski; Wolfgang Gieren; Rolf-Peter Kudritzki

    2005-08-18

    We have used the Advanced Camera for Surveys aboard the Hubble Space Telescope to obtain F435W, F555W and F814W single-epoch images of six fields in the spiral galaxy NGC 300. Taking advantage of the superb spatial resolution of these images, we have tested the effect that blending of the Cepheid variables studied from the ground with close stellar neighbors, unresolved on the ground-based images, has on the distance determination to NGC 300. Out of the 16 Cepheids included in this study, only three are significantly affected by nearby stellar objects. After correcting the ground-based magnitudes for the contribution by these projected companions to the observed flux, we find that the corresponding Period-Luminosity relations in V, I and the Wesenheit magnitude W_I are not significantly different from the relations obtained without corrections. We fix an upper limit of 0.04 magnitudes to the systematic effect of blending on the distance modulus to NGC 300. As part of our HST imaging program, we present improved photometry for 40 blue supergiants in NGC 300.

  17. Caliste 64, a new CdTe micro-camera for hard X-ray spectro-imaging

    NASA Astrophysics Data System (ADS)

    Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Pinsard, F.; Vassal, M. C.; Soufflet, F.; Le Mer, I.

    2009-10-01

    Within the framework of the Simbol-X hard X-ray astrophysics mission, a prototype 64-pixel micro-camera called Caliste 64 has been designed and several samples have been tested. The device integrates ultra-low-noise IDeF-X V1.1 ASICs from CEA and a 1 cm² Al Schottky CdTe detector from Acrorad, chosen for its high uniformity and spectroscopic performance. The hybridization process, mastered by the 3D Plus company, meets space application standards. The camera is a spectro-imager with time-tagging capability: each photon interacting in the semiconductor is tagged with a time, a position and an energy. Time resolution is better than 100 ns rms for energy deposits greater than 20 keV, taking into account electronic noise and the technological dispersion of the front-end electronics. The spectrum summed across the 64 pixels yields an energy resolution of 664 eV FWHM at 13.94 keV and 842 eV FWHM at 59.54 keV, with the detector cooled to -10 °C and biased at -500 V.
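    The per-photon event record described above (time, position, energy) and the quoted line widths can be sketched as follows. The data structure and the helper are illustrative only, and are not the instrument's actual telemetry format.

    ```python
    from dataclasses import dataclass

    @dataclass
    class PhotonEvent:
        """One photon interaction in the CdTe detector, as tagged by the camera."""
        time_s: float      # arrival time (tagging better than 100 ns rms above 20 keV)
        pixel: int         # which of the 64 pixels (0..63)
        energy_keV: float  # deposited energy

    def resolving_power(fwhm_eV, line_keV):
        """Fractional energy resolution FWHM/E for a spectral line."""
        return fwhm_eV / (line_keV * 1e3)

    # Resolutions quoted for the summed 64-pixel spectrum at -10 °C, -500 V bias
    print(f"{resolving_power(664, 13.94):.1%}")  # ~4.8% at 13.94 keV
    print(f"{resolving_power(842, 59.54):.1%}")  # ~1.4% at 59.54 keV
    ```

    Note that the absolute FWHM grows only modestly with energy, so the fractional resolution improves markedly at the higher line.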

  18. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital single-lens reflex (DSLR) cameras have appeared. As a consequence we can buy cameras with ever higher pixel counts, and mass production has greatly reduced prices. The CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allow images to be taken with much less noise. The software background is developing in a similar way: intelligent programs are created for post-processing and other supplementary tasks. Nowadays we can find a digital camera in almost every household, and many of them are DSLRs. These can be used very well for astronomical imaging, as is nicely demonstrated by the quantity and quality of the spectacular astrophotos appearing in various publications. These examples also show how much post-processing software contributes to the rising standard of the pictures. In short, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I introduce how the main parameters of double stars (position angle and separation) can be measured, based on the methods, software and equipment I use. Others can easily adapt these to their own circumstances.
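    The two double-star parameters mentioned, separation and position angle, follow from measured pixel offsets once the image scale is calibrated. The following is a generic sketch (not the author's own software), assuming the offsets have already been rotated so that dy points north and dx points east; the measurement values in the example are made up.

    ```python
    import math

    def double_star_measure(dx_px, dy_px, plate_scale):
        """Separation (arcsec) and position angle (degrees, north through east)
        of a companion relative to the primary.

        dx_px: pixel offset toward east, dy_px: pixel offset toward north
        plate_scale: arcseconds per pixel (from calibration frames)
        """
        sep = plate_scale * math.hypot(dx_px, dy_px)
        pa = math.degrees(math.atan2(dx_px, dy_px)) % 360.0  # 0° = north, 90° = east
        return sep, pa

    # Hypothetical measurement: companion 30 px east, 40 px north, 1.5"/px scale
    sep, pa = double_star_measure(30.0, 40.0, 1.5)
    print(f"{sep:.1f} arcsec, PA {pa:.1f} deg")  # 75.0 arcsec, PA 36.9 deg
    ```

    In practice the plate scale and the camera's rotation relative to celestial north are themselves calibrated, e.g. from star pairs of known separation or from a drift trail.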

  19. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, Alicia (Inventor); Gursel, Yekta (Inventor)

    2011-01-01

    An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.

  20. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, legal representative, Alicia (Inventor); Gursel, Yekta (Inventor)

    2012-01-01

    An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.