Science.gov

Sample records for camera lroc images

  1. LROC - Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  2. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future human lunar exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and permanently sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  3. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
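    The correction sequence described above (background subtraction, a signal-dependent additive term below DN = 600, then flatfield division) can be sketched as follows. This is a toy illustration with made-up coefficients, not the actual LROC pipeline:

```python
import numpy as np

def calibrate_nac(dn, dark, flat, nonlin_gain=0.05):
    """Toy sketch of the NAC correction sequence described above:
    background (dark) subtraction, a signal-dependent additive
    nonlinearity correction below DN = 600, then flatfield division.
    `nonlin_gain` and the form of the correction are illustrative."""
    img = dn.astype(float) - dark                   # background subtraction
    low = img < 600.0                               # low-signal (nonlinear) regime
    img[low] += nonlin_gain * (600.0 - img[low])    # hypothetical additive term
    return img / flat                               # flatfield-normalized image
```

In the linear regime (DN 600-2000 after dark removal) the sketch reduces to plain dark subtraction and flatfielding; only low-signal pixels receive the additive term.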

  4. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  5. Photometric normalization of LROC WAC images

    NASA Astrophysics Data System (ADS)

    Sato, H.; Denevi, B.; Robinson, M. S.; Hapke, B. W.; McEwen, A. S.; LROC Science Team

    2010-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) acquires near global coverage on a monthly basis. The WAC is a push-frame sensor with a 90° field of view (FOV) in BW mode and 60° FOV in 7-color mode (320 nm to 689 nm). WAC images are acquired during each orbit in 10° latitude segments with cross track coverage of ~50 km. Before mosaicking, WAC images are radiometrically calibrated to remove instrumental artifacts and to convert at-sensor radiance to I/F. Images are also photometrically normalized to common viewing and illumination angles (30° phase), a challenge due to the wide-angle nature of the WAC, where large differences in phase angle are observed in a single image line (±30°). During a single month the equatorial incidence angle drifts about 28°, and over the course of ~1 year the lighting completes a 360° cycle. The light scattering properties of the lunar surface depend on incidence (i), emission (e), and phase (p) angles as well as soil properties such as single-scattering albedo and roughness that vary with terrain type and state of maturity [1]. We first tested a Lommel-Seeliger Correction (LSC) [cos(i)/(cos(i) + cos(e))] [2] with a phase function defined by an exponential decay plus a 4th order polynomial term [3], which did not provide an adequate solution. Next we employed a LSC with an exponential 2nd order decay phase correction that was an improvement, but still exhibited unacceptable frame-to-frame residuals. In both cases we fitted the LSC I/F vs. phase angle to derive the phase corrections. To date, the best results are with a lunar-Lambert function [4] with exponential 2nd order decay phase correction (LLEXP2) [(A1exp(B1p)+A2exp(B2p)+A3) * cos(i)/(cos(e) + cos(i)) + B3cos(i)]. We derived the parameters for the LLEXP2 from repeat imaging of a small region and then corrected that region with excellent results. When this correction was applied to the whole Moon the results were less than optimal - no surprise given the
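    As a concrete illustration, the LLEXP2 normalization quoted above can be applied per pixel by evaluating the phase function at the observed geometry and at the standard (i = 30°, e = 0°, p = 30°) geometry. The A and B parameters below are placeholders, not the fitted values:

```python
import numpy as np

def llexp2(i_deg, e_deg, p_deg,
           A=(0.08, 0.03, 0.01), B=(-0.05, -0.002, 0.02)):
    """Lunar-Lambert function with 2nd-order exponential phase decay
    (the LLEXP2 form quoted above).  A1-A3 and B1-B3 are placeholder
    values here; the real parameters are fit from repeat imaging."""
    A1, A2, A3 = A
    B1, B2, B3 = B
    ci = np.cos(np.radians(i_deg))
    ce = np.cos(np.radians(e_deg))
    phase = A1 * np.exp(B1 * p_deg) + A2 * np.exp(B2 * p_deg) + A3
    return phase * ci / (ce + ci) + B3 * ci

def normalize_iof(iof, i_deg, e_deg, p_deg):
    """Normalize observed I/F to the standard geometry (i=30, e=0, p=30)."""
    return iof * llexp2(30.0, 0.0, 30.0) / llexp2(i_deg, e_deg, p_deg)
```

By construction, an observation already at the standard geometry is returned unchanged; observations at other geometries are scaled by the ratio of the model at the two geometries.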

  6. Extracting Accurate and Precise Topography from Lroc Narrow Angle Camera Stereo Observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Speyerer, E. J.; Robinson, M. S.; LROC Team

    2016-06-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that acquire meter-scale images. Stereo observations are acquired by imaging from two or more orbits, including at least one off-nadir slew. Digital terrain models (DTMs) generated from the stereo observations are controlled to Lunar Orbiter Laser Altimeter (LOLA) elevation profiles. With current processing methods, DTMs have absolute accuracies commensurate with the uncertainties of the LOLA profiles (~10 m horizontally and ~1 m vertically) and relative horizontal and vertical precisions better than the pixel scale of the DTMs (2 to 5 m). The NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics, enabling detailed characterization of large geomorphic features and providing a key resource for future exploration planning. Currently, two percent of the lunar surface is imaged in NAC stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and the geologic processes that occur on all the terrestrial planets.

  7. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our
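    The baseline-dependent slope statistics reported above can be reproduced in outline from any gridded DTM by differencing elevations at the chosen baseline. A minimal sketch using simple axis-aligned differencing; the published statistics may use a different slope estimator:

```python
import numpy as np

def slope_stats(dtm, pixel_scale, baseline):
    """Slope statistics (median, mean, std, in degrees) from a gridded
    DTM (elevations in meters) at a given baseline, by differencing
    elevations n pixels apart along both grid axes."""
    n = max(1, int(round(baseline / pixel_scale)))   # baseline in whole pixels
    dz = np.concatenate([(dtm[:, n:] - dtm[:, :-n]).ravel(),
                         (dtm[n:, :] - dtm[:-n, :]).ravel()])
    slopes = np.degrees(np.arctan(np.abs(dz) / (n * pixel_scale)))
    return np.median(slopes), slopes.mean(), slopes.std()
```

Run over many DTMs at a fixed baseline (15 m in the abstract), this yields the median/mean/standard-deviation slope parameters quoted for the highland and mare terrains.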

  8. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features in order to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquire multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60°-80°) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document

  9. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Luna-Glob Russian mission. The western ellipse for the spacecraft landing in the crater Boguslawsky, in the southern polar region of the Moon, was investigated. The percentage of shaded area was measured in images acquired with the LROC NAC camera at a resolution of ~0.5 m. Because of the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images, which motivated the method described here. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and these values were converted to the occurrence of slopes (in this case, at the 3.5-m baseline) with a calibration based on the surface characteristics of the Lunokhod-1 study area, for which a digital terrain model of ~0.5-m resolution and 13 LROC NAC images obtained at different Sun elevations are available. From the measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at the 3.5-m baseline is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required granularity exists for a region of interest but high-resolution images taken at different Sun elevations are available.
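    The core per-image measurement of the method, the shaded-area percentage, is straightforward to sketch; the DN threshold separating shadow from lit terrain is scene-dependent and assumed to be chosen by the analyst:

```python
import numpy as np

def shaded_fraction(image, shadow_dn):
    """Fraction of pixels darker than a shadow threshold -- the basic
    per-image measurement of the shadow-based slope method above.
    `shadow_dn` is an analyst-chosen, scene-dependent threshold."""
    return np.count_nonzero(image < shadow_dn) / image.size
```

Repeating this over images of the same site at different Sun elevations, and calibrating the fractions against a reference area with a known DTM (the Lunokhod-1 study area in the paper), yields the slope-occurrence estimates.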

  10. Photometric parameter maps of the Moon derived from LROC WAC images

    NASA Astrophysics Data System (ADS)

    Sato, H.; Robinson, M. S.; Hapke, B. W.; Denevi, B. W.; Boyd, A. K.

    2013-12-01

    Spatially resolved photometric parameter maps were computed from 21 months of Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) images. Due to a 60° field-of-view (FOV), the WAC achieves nearly global coverage of the Moon each month with more than 50% overlap from orbit-to-orbit. From the repeat observations at various viewing and illumination geometries, we calculated Hapke bidirectional reflectance model parameters [1] for 1° × 1° "tiles" from 70°N to 70°S and 0°E to 360°E. About 66,000 WAC images acquired from February 2010 to October 2011 were converted from DN to radiance factor (I/F) through radiometric calibration, partitioned into gridded tiles, and stacked in a time series (tile-by-tile method [2]). Lighting geometries (phase, incidence, emission) were computed using the WAC digital terrain model (100 m/pixel) [3]. The Hapke parameters were obtained by model fitting against I/F within each tile. Among the 9 parameters of the Hapke model, we calculated 3 free parameters (w, b, and hs) by setting constant values for 4 parameters (Bco=0, hc=1, θ, φ=0) and interpolating 2 parameters (c, Bso). In this simplification, we ignored the Coherent Backscatter Opposition Effect (CBOE) to avoid competing CBOE and Shadow Hiding Opposition Effect (SHOE). We also assumed that surface regolith porosity is uniform across the Moon. The roughness parameter (θ) was set to an averaged value from the equator (±3°N). The Henyey-Greenstein double lobe function (H-G2) parameter (c) was given by the 'hockey stick' relation [4] (negative correlation) between b and c based on laboratory measurements. The amplitude of SHOE (Bso) was given by the correlation between w and Bso at the equator (±3°N). Single scattering albedo (w) is strongly correlated to the photometrically normalized I/F, as expected. The c shows an inverse trend relative to b due to the 'hockey stick' relation. The parameter c is typically low for the maria (0.08 ± 0.06) relative to the

  11. Secondary Craters and the Size-Velocity Distribution of Ejected Fragments around Lunar Craters Measured Using LROC Images

    NASA Astrophysics Data System (ADS)

    Singer, K. N.; Jolliff, B. L.; McKinnon, W. B.

    2013-12-01

    We report results from analyzing the size-velocity distribution (SVD) of secondary crater forming fragments from the 93 km diameter Copernicus impact. We measured the diameters of secondary craters and their distances from Copernicus using LROC Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) image data. We then estimated the velocity and size of the ejecta fragment that formed each secondary crater from the range equation for a ballistic trajectory on a sphere and Schmidt-Holsapple scaling relations. Size scaling was carried out in the gravity regime for both non-porous and porous target material properties. We focus on the largest ejecta fragments (dfmax) at a given ejection velocity (υej) and fit the upper envelope of the SVD using quantile regression to an equation of the form dfmax = A*υej^(-β). The velocity exponent, β, describes how quickly fragment sizes fall off with increasing ejection velocity during crater excavation. For Copernicus, we measured 5800 secondary craters, at distances of up to 700 km (15 crater radii), corresponding to an ejecta fragment velocity of approximately 950 m/s. This mapping only includes secondary craters that are part of a radial chain or cluster. The two largest craters in chains near Copernicus that are likely to be secondaries are 6.4 and 5.2 km in diameter. We obtained a velocity exponent, β, of 2.2 ± 0.1 for a non-porous surface. This result is similar to Vickery's [1987, GRL 14] determination of β = 1.9 ± 0.2 for Copernicus using Lunar Orbiter IV data. The availability of WAC 100 m/pix global mosaics with illumination geometry optimized for morphology allows us to update and extend the work of Vickery
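    For the velocity estimate above, the range equation for a ballistic trajectory on a sphere can be inverted for ejection velocity given a secondary's distance from the primary. A sketch assuming a 45° launch angle and nominal lunar constants (the authors' exact formulation may differ):

```python
import math

G_MOON = 1.62      # lunar surface gravity, m/s^2
R_MOON = 1.737e6   # lunar radius, m

def ejection_velocity(range_m, launch_angle_deg=45.0):
    """Invert the spherical-body range equation,
    tan(R / 2Rm) = v^2 sin(th) cos(th) / (g Rm - v^2 cos^2(th)),
    for ejection velocity v.  The 45-degree launch angle and the
    constants above are nominal assumptions for illustration."""
    th = math.radians(launch_angle_deg)
    t = math.tan(range_m / (2.0 * R_MOON))
    v2 = t * G_MOON * R_MOON / (math.sin(th) * math.cos(th)
                                + t * math.cos(th) ** 2)
    return math.sqrt(v2)
```

For a 700 km range this gives a velocity on the order of the ~950 m/s quoted above; the exact value depends on the assumed launch angle.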

  12. LROC Advances in Lunar Science

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.

    2012-12-01

    Since entering orbit in 2009 the Lunar Reconnaissance Orbiter Camera (LROC) has acquired over 700,000 Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) images of the Moon. This new image collection is fueling research into the origin and evolution of the Moon. NAC images revealed a volcanic complex 35 × 25 km (60°N, 100°E), between Compton and Belkovich craters (CB). The CB terrain sports volcanic domes and irregular depressed areas (caldera-like collapses). The volcanic complex corresponds to an area of high silica content (Diviner) and high Th (Lunar Prospector). A low density of impact craters on the CB complex indicates a relatively young age. The LROC team mapped over 150 volcanic domes and 90 volcanic cones in the Marius Hills (MH), many of which were not previously identified. Morphology and compositional estimates (Diviner) indicate that MH domes are silica poor and are products of low-effusion mare lavas. Impact melt deposits are observed with Copernican impact craters (>10 km) on exterior ejecta, the rim, inner wall, and crater floors. Preserved impact melt flow deposits are observed around small craters (<25 km diam.), and estimated melt volumes exceed predictions. At these diameters the amount of melt predicted is small, and melt that is produced is expected to be ejected from the crater. However, we observe well-defined impact melt deposits on the floor of highland craters down to 200 m diameter. A globally distributed population of previously undetected contractional structures was discovered. Their crisp appearance and associated impact crater populations show that they are young landforms (<1 Ga). NAC images also revealed small extensional troughs. Crosscutting relations with small-diameter craters and depths as shallow as 1 m indicate ages <50 Ma. These features place bounds on the amount of global radial contraction and the level of compressional stress in the crust. WAC temporal coverage of the poles allowed quantification of highly

  13. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse fields such as computing, mechanics, thermodynamics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based and non-laser-based technologies. Compared with the working knowledge and technical awareness of other modalities in radiology, understanding of the dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  14. Defining Long-Duration Traverses of Lunar Volcanic Complexes with LROC NAC Images

    NASA Technical Reports Server (NTRS)

    Stopar, J. D.; Lawrence, S. J.; Jolliff, B. L.; Speyerer, E. J.; Robinson, M. S.

    2016-01-01

    A long-duration lunar rover [e.g., 1] would be ideal for investigating large volcanic complexes like the Marius Hills (MH) (approximately 300 x 330 km), where widely spaced sampling points are needed to explore the full geologic and compositional variability of the region. Over these distances, a rover would encounter varied surface morphologies (ranging from impact craters to rugged lava shields), each of which need to be considered during the rover design phase. Previous rovers including Apollo, Lunokhod, and most recently Yutu, successfully employed pre-mission orbital data for planning (at scales significantly coarser than that of the surface assets). LROC was specifically designed to provide mission-planning observations at scales useful for accurate rover traverse planning (crewed and robotic) [2]. After-the-fact analyses of the planning data can help improve predictions of future rover performance [e.g., 3-5].

  15. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  16. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  17. Photometric characterization of the Chang'e-3 landing site using LROC NAC images

    NASA Astrophysics Data System (ADS)

    Clegg-Watkins, R. N.; Jolliff, B. L.; Boyd, A.; Robinson, M. S.; Wagner, R.; Stopar, J. D.; Plescia, J. B.; Speyerer, E. J.

    2016-07-01

    China's robotic Chang'e-3 spacecraft, carrying the Yutu rover, touched down in Mare Imbrium on the lunar surface on 14 December 2013. The Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) imaged the site both before and after landing. Multi-temporal NAC images taken before and after the landing, phase-ratio images made from NAC images taken after the landing, and Hapke photometric techniques were used to evaluate surface changes caused by the disturbance of regolith at the landing site (blast zone) by the descent engines of the Chang'e-3 spacecraft. The reflectance of the landing site increased by 10 ± 1% (from I/F = 0.040 to 0.044 at 30° phase angle) as a result of the landing, a value similar to reflectance increases estimated for the Apollo, Luna, and Surveyor landing sites. The spatial extent of the disturbed area at the Chang'e-3 landing site, 2530 m2, also falls close to what is predicted on the basis of correlations between lander mass, thrust, and blast zone areas for the historic landed missions. A multi-temporal ratio image of the Chang'e-3 landing site reveals a main blast zone (slightly elongate in the N-S direction; ∼75 m across N-S and ∼43 m across in the E-W direction) and an extended diffuse, irregular halo that is less reflective than the main blast zone (extending ∼40-50 m in the N-S direction and ∼10-15 m in the E-W direction beyond the main blast zone). The N-S elongation of the blast zone likely resulted from maneuvering during hazard avoidance just prior to landing. The phase-ratio image reveals that the blast zone is less backscattering than surrounding undisturbed areas. The similarities in magnitude of increased reflectance between the Chang'e-3 landing site and the Surveyor, Apollo, and Luna landing sites suggest that lunar soil reflectance changes caused by interaction with rocket exhaust are not significantly altered over a period of 40-50 years. The reflectance changes are independent of regolith composition
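    The before/after reflectance comparison described above reduces, in outline, to a ratio of co-registered I/F images; a minimal sketch, with registration and normalization to a common phase angle assumed already done:

```python
import numpy as np

def reflectance_change_pct(before, after):
    """Median percent reflectance change between co-registered
    before/after I/F images of the same site, as in the blast-zone
    measurement above.  Inputs are assumed registered and normalized
    to a common phase angle."""
    return 100.0 * (np.median(after / before) - 1.0)
```

With the quoted values (I/F rising from 0.040 to 0.044 at 30° phase) this yields the ~10% increase reported for the landing site; the median makes the estimate robust to a few misregistered pixels.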

  18. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  19. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.

  20. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  1. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.
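    The reconstruction described here follows from momentum and energy conservation in the ³He(n,p)³H reaction. A sketch (the ~0.764 MeV Q-value is taken from standard nuclear data tables, not from this abstract, and the function is illustrative):

```python
import numpy as np

Q_MEV = 0.764  # exothermic Q-value of 3He(n,p)3H, from standard tables

def neutron_direction_and_energy(p_proton, p_triton, e_proton, e_triton):
    """Unit direction vector and kinetic energy (MeV) of the incident neutron.

    Momentum conservation: the neutron momentum is the vector sum of the
    fragment momenta. Energy conservation: the fragments carry the neutron
    kinetic energy plus the reaction Q-value.
    """
    p_n = np.asarray(p_proton, dtype=float) + np.asarray(p_triton, dtype=float)
    direction = p_n / np.linalg.norm(p_n)
    return direction, e_proton + e_triton - Q_MEV
```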

  2. Full Stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2011-10-01

    Objective and background: We present a new version of Bossa Nova Technologies' passive polarization imaging camera. The previous version performed live measurement of the linear Stokes parameters (S0, S1, S2) and their derivatives. The new version presented in this paper performs live measurement of the full Stokes parameters, i.e. including the fourth parameter S3 related to the amount of circular polarization. Dedicated software was developed to provide live images of any Stokes-related parameter, such as the Degree Of Linear Polarization (DOLP), the Degree Of Circular Polarization (DOCP), and the Angle Of Polarization (AOP). Results: We first give a brief description of the camera and its technology. It is a Division Of Time polarimeter using a custom ferroelectric liquid crystal cell. A description of the method used to calculate the Data Reduction Matrix (DRM) [5,9] linking intensity measurements to the Stokes parameters is given. The calibration was developed to maximize the condition number of the DRM, and it also allows very efficient post-processing of the acquired images. A complete evaluation of the precision of standard polarization parameters is described. We then present the standard features of the dedicated software developed to operate the camera, which provides live images of the Stokes vector components and the usual associated parameters. Finally, some tests already conducted, including indoor laboratory and outdoor measurements, are presented. This new camera will be a useful tool for many applications such as biomedical imaging, remote sensing, metrology, material studies, and others.

  3. Height-to-diameter ratios of moon rocks from analysis of Lunokhod-1 and -2 and Apollo 11-17 panoramas and LROC NAC images

    NASA Astrophysics Data System (ADS)

    Demidov, N. E.; Basilevsky, A. T.

    2014-09-01

    An analysis is performed of 91 panoramic photographs taken by Lunokhod-1 and -2, 17 panoramic images composed of photographs taken by Apollo 11-15 astronauts, and six LROC NAC photographs. The results are used to measure the height-to-visible-diameter (h/d) and height-to-maximum-diameter (h/D) ratios for lunar rocks at three highland and three mare sites on the Moon. The average h/d and h/D for the six sites are found to be indistinguishable at a significance level of 95%. Therefore, our estimates for the average h/d = 0.6 ± 0.03 and h/D = 0.54 ± 0.03 on the basis of 445 rocks are applicable for the entire Moon's surface. Rounding off, an h/D ratio of ≈0.5 is suggested for engineering models of the lunar surface. The ratios between the long, medium, and short axes of the lunar rocks are found to be similar to those obtained in high-velocity impact experiments for different materials. It is concluded, therefore, that the degree of penetration of the studied lunar rocks into the regolith is negligible, and micrometeorite abrasion and other factors do not dominate in the evolution of the shape of lunar rocks.
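    The site-averaged statistic quoted above (mean ± standard error, compared across sites at the 95% level) can be sketched as follows; the sample values are synthetic, for illustration only:

```python
import statistics

def mean_with_se(ratios):
    """Sample mean and standard error of the mean."""
    m = statistics.mean(ratios)
    se = statistics.stdev(ratios) / len(ratios) ** 0.5
    return m, se

hd = [0.55, 0.62, 0.58, 0.65, 0.60, 0.59]  # synthetic h/d measurements
m, se = mean_with_se(hd)
# Two site means are "indistinguishable at the 95% level" when their
# difference is smaller than roughly 1.96x the combined standard error.
```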

  4. Blind camera fingerprinting and image clustering.

    PubMed

    Bloy, Greg J

    2008-03-01

    Previous studies have shown how to "fingerprint" a digital camera given a set of images known to come from the camera. A clustering technique is proposed to construct such fingerprints from a mixed set of images, enabling identification of each image's source camera without any prior knowledge of source.
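    The core operations behind such fingerprinting are extracting a noise residual from each image and correlating residuals to group images by source. A minimal sketch (the crude mean-filter denoiser and the function names are our simplifications, not Bloy's actual method):

```python
import numpy as np

def residual(img):
    """Noise residual: image minus a crude 3x3 mean-filter denoised version."""
    pad = np.pad(img, 1, mode="edge")
    denoised = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    return img - denoised

def correlation(a, b):
    """Normalized cross-correlation between two residuals."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

    A clustering step then merges images whose residuals correlate strongly, and each cluster's averaged residual serves as that camera's fingerprint.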

  5. LROC Targeted Observations for the Next Generation of Scientific Exploration

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.

    2015-12-01

    Imaging of the Moon at high spatial resolution (0.5 to 2 mpp) by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC) plus topographic data derived from LROC NAC and WAC (Wide Angle Camera) and LOLA (Lunar Orbiting Laser Altimeter), coupled with recently obtained hyperspectral NIR and thermal data, permit studies of composition, mineralogy, and geologic context at essentially an outcrop scale. Such studies pave the way for future landed and sample return missions for high science priority targets. Among such targets are (1) the youngest volcanic rocks on the Moon, including mare basalts formed as recently as ~1 Ga, and irregular mare patches (IMPs) that appear to be even younger [1]; (2) volcanic rocks and complexes with compositions more silica-rich than mare basalts [2-4]; (3) differentiated impact-melt deposits [5,6], ancient volcanics, and compositional anomalies within the South Pole-Aitken basin; (4) exposures of recently discovered key crustal rock types in uplifted structures such as essentially pure anorthosite [7] and spinel-rich rocks [8]; and (5) frozen volatile-element-rich deposits in polar areas [9]. Important data sets include feature sequences of paired NAC images obtained under similar illumination conditions, NAC geometric stereo, from which high-resolution DTMs can be made, and photometric sequences useful for assessing composition in areas of mature cover soils. Examples of each of these target types will be discussed in context of potential future missions. References: [1] Braden et al. (2014) Nat. Geo. 7, 787-791. [2] Glotch et al. (2010) Science, 329, 1510-1513. [3] Greenhagen et al. (2010) Science, 329, 1507-1509. [4] Jolliff et al. (2011) Nat. Geo. 4, 566-571. [5] Vaughan et al (2013) PSS 91, 101-106. [6] Hurwitz and Kring (2014) J. Geophys. Res. 119, 1110-1133 [7] Ohtake et al. (2009) Nature, 461, 236-241 [8] Pieters et al. (2014) Am. Min. 99, 1893-1910. [9] Colaprete et al. (2010) Science 330, 463-468.

  6. Marius Hills: Surface Roughness from LROC and Mini-RF

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Hawke, B. R.; Bussey, B.; Stopar, J. D.; Denevi, B.; Robinson, M.; Tran, T.

    2010-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Team is collecting hundreds of high-resolution (0.5 m/pixel) Narrow Angle Camera (NAC) images of lunar volcanic constructs (domes, “cones”, and associated features) [1,2]. Marius Hills represents the largest concentration of volcanic features on the Moon and is a high-priority target for future exploration [3,4]. NAC images of this region provide new insights into the morphology and geology of specific features at the meter scale, including lava flow fronts, tectonic features, layers, and topography (using LROC stereo imagery) [2]. Here, we report initial results from Mini-RF and LROC collaborative studies of the Marius Hills. Mini-RF uses a hybrid polarimetric architecture to measure surface backscatter characteristics and can acquire data in one of two radar bands, S (12 cm) or X (4 cm) [5]. The spatial resolution of Mini-RF (15 m/pixel) enables correlation of features observed in NAC images to Mini-RF data. Mini-RF S-Band zoom-mode data and daughter products, such as circular polarization ratio (CPR), were directly compared to NAC images. Mini-RF S-Band radar images reveal enhanced radar backscatter associated with volcanic constructs in the Marius Hills region. Mini-RF data show that Marius Hills volcanic constructs have enhanced average CPR values (0.5-0.7) compared to the CPR values of the surrounding mare (~0.4). This result is consistent with the conclusions of [6], and implies that the lava flows comprising the domes in this region are blocky. To quantify the surface roughness [e.g., 6,7] block populations associated with specific geologic features in the Marius Hills region are being digitized from NAC images. Only blocks that can be unambiguously identified (>1 m diameter) are included in the digitization process, producing counts and size estimates of the block population. High block abundances occur mainly at the distal ends of lava flows. The average size of these blocks is 9 m, and 50% of observed

  7. Digital Elevation Models and Derived Products from Lroc Nac Stereo Observations

    NASA Astrophysics Data System (ADS)

    Burns, K. N.; Speyerer, E. J.; Robinson, M. S.; Tran, T.; Rosiek, M. R.; Archinal, B. A.; Howington-Kraus, E.; the LROC Science Team

    2012-08-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to acquire stereo observations with the Narrow Angle Camera (NAC) to enable production of high resolution digital elevation models (DEMs). This work describes the processes and techniques used in reducing the NAC stereo observations to DEMs through a combination of USGS Integrated Software for Imagers and Spectrometers (ISIS) and SOCET SET® from BAE Systems by a team at Arizona State University (ASU). LROC Science Operations Center personnel have thus far reduced more than 130 stereo observations to DEMs for 11 Constellation Program (CxP) sites and 53 other regions of scientific interest. The NAC DEM spatial sampling is typically 2 meters, and the vertical precision is 1-2 meters. Such high resolution provides the three-dimensional view of the lunar surface required for site selection, hazard avoidance, and planning traverses that minimize resource consumption. In addition to exploration analysis, geologists can measure parameters such as elevation, slope, and volume to place constraints on composition and geologic history. The NAC DEMs are released and archived through NASA's Planetary Data System.
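    One of the derived measurements mentioned above, slope, follows directly from the gridded DEM. A minimal sketch at the typical 2 m NAC DEM sampling (the tiny array is synthetic):

```python
import numpy as np

def slope_degrees(dem, spacing=2.0):
    """Per-pixel slope in degrees from elevation gradients on a regular grid."""
    dzdy, dzdx = np.gradient(dem, spacing)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

dem = np.array([[0.0, 1.0, 2.0],
                [0.0, 1.0, 2.0],
                [0.0, 1.0, 2.0]])  # uniform 0.5 m/m ramp in x
# slope_degrees(dem) is ~26.57 degrees everywhere
```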

  8. Image formation in fundus cameras.

    PubMed

    Pomerantzeff, O; Webb, R H; Delori, F C

    1979-06-01

    Imaging in a fundus camera depends more on design of the system than on correction of the first fundus image as formed by the ophthalmoscopic lens. We show here that the designer may use the free parameters of the ophthalmoscopic lens (contact or noncontact) to correct the latter for observation and illumination of the fundus. In both contact and noncontact systems the fundus is illuminated by forming a ring of light on the patient's cornea around a central area (the corneal window) reserved for observation. On the first surface of the crystalline lens, the light also forms a ring which must accommodate the total entrance pupil (TEP) of the observation system in its middle and which is limited on the outside by the patient's iris. The restrictions that result from this situation define the entrance pupil of the bundle of rays that image the marginal point of the retina. The limits of this bundle are imposed by the choice of the angular field of view and by the size of the patient's pupil.

  9. AIM: Ames Imaging Module Spacecraft Camera

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low power, low cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M plus spacecraft cameras, it does so on a fraction of the mass, power and cost budget.

  10. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  11. Single-Camera Panoramic-Imaging Systems

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L.; Gilbert, John

    2007-01-01

    Panoramic detection systems (PDSs) are developmental video monitoring and image-data processing systems that, as their name indicates, acquire panoramic views. More specifically, a PDS acquires images from an approximately cylindrical field of view that surrounds an observation platform. The main subsystems and components of a basic PDS are a charge-coupled- device (CCD) video camera and lens, transfer optics, a panoramic imaging optic, a mounting cylinder, and an image-data-processing computer. The panoramic imaging optic is what makes it possible for the single video camera to image the complete cylindrical field of view; in order to image the same scene without the benefit of the panoramic imaging optic, it would be necessary to use multiple conventional video cameras, which have relatively narrow fields of view.

  12. Uncertainty Analysis of LROC NAC Derived Elevation Models

    NASA Astrophysics Data System (ADS)

    Burns, K.; Yates, D. G.; Speyerer, E.; Robinson, M. S.

    2012-12-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) [1] is to gather stereo observations with the Narrow Angle Camera (NAC) to generate digital elevation models (DEMs). From an altitude of 50 km, the NAC acquires images with a pixel scale of 0.5 meters, and a dual NAC observation covers approximately 5 km cross-track by 25 km down-track. This low altitude was common from September 2009 to December 2011. Images acquired during the commissioning phase and those acquired from the fixed orbit (after 11 December 2011) have pixel scales that range from 0.35 meters at the south pole to 2 meters at the north pole. Altimetric observations obtained by the Lunar Orbiter Laser Altimeter (LOLA) provide range measurements between the spacecraft and the surface accurate to ±0.1 m [2]. However, uncertainties in the spacecraft positioning can result in offsets (±20 m) between altimeter tracks over many orbits. The LROC team is currently developing a tool to automatically register altimetric observations to NAC DEMs [3]. Using a generalized pattern search (GPS) algorithm, the new automatic registration adjusts the spacecraft position and pointing information during times when NAC images, as well as LOLA measurements, of the same region are acquired to provide an absolute reference frame for the DEM. This information is then imported into SOCET SET to aid in creating controlled NAC DEMs. For every DEM, a figure of merit (FOM) map is generated using SOCET SET software. This is a valuable tool for determining the relative accuracy of a specific pixel in a DEM. Each pixel in a FOM map is given a value to determine its "quality" by determining if the specific pixel was shadowed, saturated, suspicious, interpolated/extrapolated, or successfully correlated. The overall quality of a NAC DEM is a function of both the absolute and relative accuracies. LOLA altimetry provides the most accurate absolute geodetic reference frame with which the NAC DEMs can be compared. Offsets
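    The per-pixel FOM classification described above can be summarized for a whole DEM with a simple tally. A sketch, with the caveat that the numeric codes here are illustrative placeholders, not SOCET SET's actual FOM encoding:

```python
import numpy as np

# Illustrative quality codes (placeholders, not SOCET SET's real values)
FOM_CODES = {0: "shadowed", 1: "saturated", 2: "suspicious",
             3: "interpolated", 4: "correlated"}

def fom_summary(fom):
    """Fraction of DEM pixels falling in each quality class."""
    total = fom.size
    return {name: float((fom == code).sum()) / total
            for code, name in FOM_CODES.items()}

fom = np.array([[4, 4, 3],
                [4, 0, 4],
                [4, 4, 2]])
# fom_summary(fom)["correlated"] is 6/9: a rough relative-accuracy summary
```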

  13. Advanced Pointing Imaging Camera (APIC) Concept

    NASA Astrophysics Data System (ADS)

    Park, R. S.; Bills, B. G.; Jorgensen, J.; Jun, I.; Maki, J. N.; McEwen, A. S.; Riedel, E.; Walch, M.; Watkins, M. M.

    2016-10-01

    The Advanced Pointing Imaging Camera (APIC) concept is envisioned as an integrated system, with optical bench and flight-proven components, designed for deep-space planetary missions with 2-DOF control capability.

  14. LRO Camera Imaging of Potential Landing Sites in the South Pole-Aitken Basin

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.; Wiseman, S. M.; Gibson, K. E.; Lauber, C.; Robinson, M.; Gaddis, L. R.; Scholten, F.; Oberst, J.; LROC Science and Operations Team

    2010-12-01

    We show results of WAC (Wide Angle Camera) and NAC (Narrow Angle Camera) imaging of candidate landing sites within the South Pole-Aitken (SPA) basin of the Moon obtained by the Lunar Reconnaissance Orbiter during the first full year of operation. These images enable a greatly improved delineation of geologic units, determination of unit thicknesses and stratigraphy, and detailed surface characterization that has not been possible with previous data. WAC imaging encompasses the entire SPA basin, located within an area ranging from ~ 130-250 degrees east longitude and ~15 degrees south latitude to the South Pole, at different incidence angles, with the specific range of incidence dependent on latitude. The WAC images show morphology and surface detail at better than 100 m per pixel, with spatial coverage and quality unmatched by previous data sets. NAC images reveal details at the sub-meter pixel scale that enable new ways to evaluate the origins and stratigraphy of deposits. Key among new results is the capability to discern extents of ancient volcanic deposits that are covered by later crater ejecta (cryptomare) [see Petro et al., this conference] using new, complementary color data from Kaguya and Chandrayaan-1. Digital topographic models derived from WAC and NAC geometric stereo coverage show broad intercrater-plains areas where slopes are acceptably low for high-probability safe landing [see Archinal et al., this conference]. NAC images allow mapping and measurement of small, fresh craters that excavated boulders and thus provide information on surface roughness and depth to bedrock beneath regolith and plains deposits. We use these data to estimate deposit thickness in areas of interest for landing and potential sample collection to better understand the possible provenance of samples. 
Also, small regions marked by fresh impact craters and their associated boulder fields are readily identified by their bright ejecta patterns and marked as lander keep-out zones

  15. On an assessment of surface roughness estimates from lunar laser altimetry pulse-widths for the Moon from LOLA using LROC narrow-angle stereo DTMs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Poole, William

    2013-04-01

    Neumann et al. [1] proposed that laser altimetry pulse-widths could be employed to derive "within-footprint" surface roughness, as opposed to surface roughness estimated between laser altimetry pierce-points, such as the example for Mars [2] and more recently from the 4-pointed star-shaped LOLA (Lunar Reconnaissance Orbiter Laser Altimeter) onboard the NASA-LRO [3]. Since 2009, LOLA has been collecting extensive global laser altimetry data with a 5 m footprint and ~25 m between the 5 points in a star shape. In order to assess how accurately surface roughness (defined as simple RMS after slope correction) derived from LROC matches surface roughness derived from LOLA footprints, publicly released LROC-NA (LRO Camera Narrow Angle) 1 m Digital Terrain Models (DTMs) were employed to measure the surface roughness directly within each 5 m footprint. A set of 20 LROC-NA DTMs were examined. Initially the match-up between the LOLA and LROC-NA orthorectified images (ORIs) is assessed visually to ensure that the co-registration is better than the LOLA footprint resolution. For each LOLA footprint, the pulse-width geolocation is then retrieved and used to "cookie-cut" the surface roughness and slopes derived from the LROC-NA DTMs. The investigation, which includes data from a variety of different landforms, shows little, if any, correlation between surface roughness estimated from DTMs and LOLA pulse-widths at sub-footprint scale. In fact, perceptible correlation between LOLA and LROC DTMs appears only at baselines of 40-60 m for surface roughness and 20 m for slopes. [1] Neumann et al. Mars Orbiter Laser Altimeter pulse width measurements and footprint-scale roughness. Geophysical Research Letters (2003) vol. 30 (11), paper 1561. DOI: 10.1029/2003GL017048 [2] Kreslavsky and Head. Kilometer-scale roughness of Mars: results from MOLA data analysis. J Geophys Res (2000) vol. 105 (E11) pp. 26695-26711. [3] Rosenburg et al. 
Global surface slopes and roughness of the
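    The roughness metric used in this study, "simple RMS after slope correction," amounts to removing a best-fit plane from the DTM heights inside each footprint and taking the RMS of the residuals. A minimal sketch (function name is ours):

```python
import numpy as np

def rms_roughness(z):
    """RMS of gridded heights after removing a best-fit plane (slope correction)."""
    ny, nx = z.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    residuals = z.ravel() - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))
```

    Applied to the 1 m DTM pixels falling inside each 5 m LOLA footprint, this gives the within-footprint roughness that the study compares against LOLA pulse-widths.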

  16. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possible for medical purposes.

  17. Scintillating Track Image Camera-SCITIC

    NASA Astrophysics Data System (ADS)

    Sato, Akira; Asai, Jyunkichi; Ieiri, Masaharu; Iwata, Soma; Kadowaki, Tetsuhito; Kurosawa, Maki; Nagae, Tomohumi; Nakai, Kozi

    2004-04-01

    A new type of track detector, the scintillating track image camera (SCITIC), has been developed. Scintillating track images of particles in a scintillator are focused by an optical lens system onto the photocathode of an image intensifier tube (IIT). The image signals are amplified by an IIT cascade and stored by a CCD camera. The performance of the detector has been tested with cosmic-ray muons and with pion and proton beams from the KEK 12-GeV proton synchrotron. Data from the test experiments have shown promising features of SCITIC as a triggerable track detector with a variety of possibilities.

  18. Development of gamma ray imaging cameras

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective to develop the technology for general purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image displayed would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third year effort. 8 refs.

  19. Multiple-image oscilloscope camera

    DOEpatents

    Yasillo, Nicholas J.

    1978-01-01

    An optical device for placing automatically a plurality of images at selected locations on one film comprises a stepping motor coupled to a rotating mirror and lens. A mechanical connection from the mirror controls an electronic logical system to allow rotation of the mirror to place a focused image at the desired preselected location. The device is of especial utility when used to place four images on a single film to record oscilloscope views obtained in gamma radiography.

  20. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  1. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or most importantly, an unexploited area of the electromagnetic spectrum, owing to difficulties in the generation and detection of THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially suited for clothing penetration because this radiation has no harmful ionizing effects and is thus safe for human beings. Strong technology development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a challenging and urgent topic. Digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under popular types of clothing.
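    The fusion step can be illustrated in its simplest form as a per-pixel weighted average of registered frames (the paper combines several enhancement and fusion methods; this sketch shows only the basic averaging idea, with our own function name):

```python
import numpy as np

def fuse(images, weights):
    """Weighted per-pixel fusion of registered, same-size image arrays."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to one
    return sum(wi * img for wi, img in zip(w, images))
```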

  2. Lroc Observations of Permanently Shadowed Regions: Seeing into the Dark

    NASA Astrophysics Data System (ADS)

    Koeber, S. D.; Robinson, M. S.

    2013-12-01

    Permanently shadowed regions (PSRs) near the lunar poles that receive secondary illumination from nearby Sun-facing slopes were imaged by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Cameras (NAC). Typically secondary lighting is optimal in polar areas around the respective solstices and when the LRO orbit is nearly coincident with the sub-solar point (low spacecraft beta angles). NAC PSR images provide the means to search for evidence of surface frosts and unusual morphologies from ice-rich regolith, and aid in planning potential landing sites for future in-situ exploration. Secondary illumination imaging in PSRs requires NAC integration times typically more than ten times greater than nominal imaging. The increased exposure time results in downtrack smear that decreases the spatial resolution of the NAC PSR images. Most long exposure NAC images of PSRs were acquired with exposure times of 24.2-ms (1-m by 40-m pixels, sampled to 20-m) and 12-ms (1-m by 20-m, sampled to 10-m). The initial campaign to acquire long exposure NAC images of PSRs in the north pole region ran from February 2013 to April 2013. Relative to the south polar region, PSRs near the north pole are generally smaller (D<24-km) and located in simple craters. Long exposure NAC images of PSRs in simple craters are often well illuminated by secondary light reflected from Sun-facing crater slopes during the northern summer solstice, allowing many PSRs to be imaged with the shorter exposure time of 12-ms (resampled to 10-m). With the exception of some craters in Peary crater, most northern PSRs with diameters >6-km were successfully imaged (e.g., Whipple, Hermite A, and Rozhestvenskiy U). The third PSR south polar campaign began in April 2013 and will continue until October 2013. The third campaign will expand previous NAC coverage of PSRs and follow up on discoveries with new images of higher signal to noise ratio (SNR), higher resolution, and varying secondary illumination conditions
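    The quoted exposure times and elongated pixel dimensions are consistent with simple downtrack-smear arithmetic. A sketch, assuming a round ~1.6 km/s ground-track speed for LRO's low polar orbit (our assumption, not stated in the abstract):

```python
def downtrack_smear(ground_speed_m_s, exposure_s):
    """Distance the ground track moves during one exposure."""
    return ground_speed_m_s * exposure_s

# Assumed ~1.6 km/s ground-track speed:
smear_24ms = downtrack_smear(1600.0, 0.0242)  # ~39 m, consistent with 1 m x 40 m pixels
smear_12ms = downtrack_smear(1600.0, 0.012)   # ~19 m, consistent with 1 m x 20 m pixels
```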

  3. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were evaluated, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  4. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  5. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    None, None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 in 1000 people in the United States. The standard treatment is a highly invasive exploratory neck surgery called parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~1 mm) and high sensitivity (10× that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  6. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90∘ field of view in monochrome mode and 60∘ field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, together with refined ephemerides, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters, and sub-pixel precision and accuracy when orthorectifying WAC images.
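A wavelength-dependent radial distortion model of the kind mentioned above can be sketched as a radial polynomial whose coefficient varies by band. The one-term model, function name, and k1 values below are illustrative placeholders, not LROC WAC calibration results.

```python
# Hypothetical per-band radial distortion coefficients (placeholders).
# Keys are the WAC visible band centers in nm from the abstract.
K1_BY_BAND_NM = {415: -1.20e-8, 566: -1.10e-8, 604: -1.05e-8, 643: -1.00e-8, 689: -0.95e-8}

def distort(x, y, band_nm):
    """Apply a one-term radial model: (x_d, y_d) = (x, y) * (1 + k1 * r^2),
    where r^2 is the squared distance from the optical axis."""
    k1 = K1_BY_BAND_NM[band_nm]
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale
```

With negative k1 (barrel distortion), points are pulled toward the axis, and a shorter-wavelength band with a larger |k1| is pulled more; a calibrated model of this shape lets each band be undistorted consistently before orthorectification.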

  7. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f/1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by scattering of the emitted light from the storage phosphor rather than by the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and the restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  8. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to be reported. Exports...

  9. Investigation of Layered Lunar Mare Lava flows through LROC Imagery and Terrestrial Analogs

    NASA Astrophysics Data System (ADS)

    Needham, H.; Rumpf, M.; Sarah, F.

    2013-12-01

    High resolution images of the lunar surface have revealed layered deposits in the walls of impact craters and pit craters in the lunar maria, which are interpreted to be sequences of stacked lava flows. The goal of our research is to establish quantitative constraints and uncertainties on the thicknesses of individual flow units comprising the layered outcrops, in order to model the cooling history of lunar lava flows. The underlying motivation for this project is to identify locations hosting intercalated units of lava flows and paleoregoliths, which may preserve snapshots of the ancient solar wind and other extra-lunar particles, thereby providing potential sampling localities for future missions to the lunar surface. Our approach involves mapping layered outcrops using high-resolution imagery acquired by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC), with constraints on flow unit dimensions provided by Lunar Orbiter Laser Altimeter (LOLA) data. We have measured thicknesses of ~2 to >20 m. However, there is considerable uncertainty in the definition of contacts between adjacent units, primarily because talus commonly obscures contacts and/or prevents lateral tracing of the flow units. In addition, flows may have thicknesses or geomorphological complexity at scales approaching the limit of resolution of the data, which hampers distinguishing one unit from another. To address these issues, we have undertaken a terrestrial analog study using WorldView-2 satellite imagery of layered lava sequences on Oahu, Hawaii. These data have a 0.5 m resolution comparable to that of LROC NAC images. The layered lava sequences are first analyzed in ArcGIS to obtain an initial estimate of the number and thicknesses of flow units identified in the images. We next visit the outcrops in the field to perform detailed measurements of the individual units. 
We have discovered that the number of flow units identified in the remote sensing data is fewer compared to

  10. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to calibrate the CCD cameras of the telescopes to account for instrument error in the data. In this project, we developed a pipeline that takes optical images and calibrates them using sky flats, darks, and biases to generate a transit light curve.
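The bias/dark/flat reduction described above is the standard CCD calibration step; a minimal sketch follows. Function and variable names are illustrative, as the abstract does not specify the MINERVA pipeline's actual interfaces.

```python
import numpy as np

# Minimal sketch of standard CCD frame calibration (bias, dark, flat).
def calibrate_frame(raw, bias, dark, flat):
    """Subtract the bias and dark frames, then divide by the
    median-normalized flat to remove pixel-to-pixel sensitivity."""
    flat_norm = flat / np.median(flat)
    return (raw - bias - dark) / flat_norm
```

A reduced frame like this would then be photometered at each epoch, and the resulting brightness series assembled into the transit light curve.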

  11. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    The court asked us whether it is possible to determine if an image has been made with a specific digital camera. This question arises in child pornography cases, where evidence is needed that a certain picture was made with a specific camera. We investigated several methods of examining cameras to determine whether a specific image was made with a given camera: defects in CCDs, the file formats used, noise introduced by the pixel arrays, and watermarking applied by the camera manufacturer.
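The pixel-array noise cue mentioned above can be illustrated with a toy fingerprint-matching sketch. This is a simplified stand-in: real forensic pipelines (e.g., PRNU-based methods) use wavelet denoising and average many flat-field frames, whereas here a 3x3 box blur and a single reference frame are assumed.

```python
import numpy as np

# Toy sensor-noise matching: correlate an image's high-frequency residual
# against each candidate camera's noise "fingerprint".
def noise_residual(img):
    """High-frequency residual: image minus a 3x3 local mean."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return img - blur

def correlation(a, b):
    """Normalized cross-correlation of two residual arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

The basic attribution decision is then to compute the residual of the questioned image, correlate it against each camera's fingerprint, and pick the highest score.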

  12. Speckle Camera Imaging of the Planet Pluto

    NASA Astrophysics Data System (ADS)

    Howell, Steve B.; Horch, Elliott P.; Everett, Mark E.; Ciardi, David R.

    2012-10-01

    We have obtained optical wavelength (692 nm and 880 nm) speckle imaging of the planet Pluto and its largest moon Charon. Using our DSSI speckle camera attached to the Gemini North 8 m telescope, we collected high resolution imaging with an angular resolution of ∼20 mas, a value at the Gemini-N telescope diffraction limit. We have produced for this binary system the first speckle reconstructed images, from which we can measure not only the orbital separation and position angle for Charon, but also the diameters of the two bodies. Our measurements of these parameters agree, within the uncertainties, with the current best values for Pluto and Charon. The Gemini-N speckle observations of Pluto are presented to illustrate the capabilities of our instrument and the robust production of high accuracy, high spatial resolution reconstructed images. We hope our results will suggest additional applications of high resolution speckle imaging for other objects within our solar system and beyond. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  13. Simulating images captured by superposition lens cameras

    NASA Astrophysics Data System (ADS)

    Thangarajan, Ashok Samraj; Kakarala, Ramakrishna

    2011-03-01

    As the demand for reduction in the thickness of cameras rises, so too does the interest in thinner lens designs. One such radical approach toward developing a thin lens is obtained from nature's superposition principle as used in the eyes of many insects. But generally the images obtained from those lenses are fuzzy, and require reconstruction algorithms to complete the imaging process. A hurdle to developing such algorithms is that the existing literature does not provide realistic test images, aside from using commercial ray-tracing software which is costly. A solution for that problem is presented in this paper. Here a Gabor Super Lens (GSL), which is based on the superposition principle, is simulated using the public-domain ray-tracing software POV-Ray. The image obtained is of a grating surface as viewed through an actual GSL, which can be used to test reconstruction algorithms. The large computational time in rendering such images requires further optimization, and methods to do so are discussed.

  14. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the LOC collection. This paper describes practical issues that were considered by EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  15. Fast Camera Imaging of Hall Thruster Ignition

    SciTech Connect

    C.L. Ellison, Y. Raitses and N.J. Fisch

    2011-02-24

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space-charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady-state operation over 50 μs. The cathode introduces an azimuthal asymmetry, which persists for about 30 μs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction, and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that their exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  16. Tectonic Mapping of Mare Frigoris Using Lunar Reconnaissance Orbiter Camera Images

    NASA Astrophysics Data System (ADS)

    Williams, N. R.; Bell, J. F.; Watters, T. R.; Banks, M. E.; Robinson, M. S.

    2012-12-01

    Conventional wisdom has been that extensional tectonism on the Moon largely ended ~3.6 billion years ago and that contractional deformation ended ~1.2 billion years ago. New NASA Lunar Reconnaissance Orbiter Camera (LROC) high resolution images are forcing a re-assessment of this view. Mapping in Mare Frigoris and the surrounding area has revealed many tectonic landforms enabling new investigations of the region's structural evolution. Sinuous wrinkle ridges with hundreds of meters of relief are interpreted as folded basalt layers overlying thrust faults. They have often been associated with lunar mascons identified by positive free-air gravity anomalies where thick basaltic lava causes flexure and subsidence to form ridges. No mascon-like gravity anomaly is associated with Mare Frigoris, yet large ridges deform the mare basalts. Lobate scarps are also found near Frigoris. These asymmetric linear hills inferred to be surface expressions of thrust faults are distributed globally and thought to originate from cooling and radial contraction of the lunar interior. Clusters of meter-scale extensional troughs or graben bounded by normal faults also occur in Frigoris. Tectonic landforms are being mapped in and around Mare Frigoris using LROC Narrow Angle Camera (NAC) images. Preliminary results show that wrinkle ridges in Frigoris occur both near and distal to the basin perimeter, trend E/W in western and central Frigoris, and form a polygonal pattern in the eastern section. Several complex wrinkle ridges are observed to transition into morphologically simpler lobate scarps at mare/highland boundaries, with the contrast in tectonic morphology likely due to the change from layered (mare) to un-layered (highlands) substrate. Lobate scarps in Frigoris occur primarily in the highlands, tend to strike E/W, and often but not always follow the boundary between mare and highlands. 
Small graben mapped in Frigoris occur in several clusters adjacent to or atop ridges and scarps, and

  17. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit, since it does not make use of the correlation between pixels of image data. Applying a random phase modulator to code the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens a way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments. PMID:27180619
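The sub-Nyquist acquisition claimed above rests on compressive-sensing recovery: measure fewer projections than unknowns and exploit sparsity to invert. The following toy sketch uses iterative soft-thresholding (ISTA) with a random Gaussian measurement matrix; in a real GISC system the measurement rows would instead be calibrated speckle patterns, and all names and parameter values here are assumptions.

```python
import numpy as np

# Toy compressive-sensing recovery: y = A @ x with m < n, x sparse.
def ista(A, y, lam=0.02, iters=1000):
    """Iterative soft-thresholding for the lasso problem
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x
```

With, say, 32 random measurements of a 3-sparse, 64-element signal, this recovers the support and approximate amplitudes, which is the mechanism that lets a 2D detector capture a 3D spectral data-cube in one exposure.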

  18. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REPORTING AND NOTIFICATION § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Thermal imaging camera reporting. 743.3 Section 743.3 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign...

  19. Cartography of the Luna-21 landing site and Lunokhod-2 traverse area based on Lunar Reconnaissance Orbiter Camera images and surface archive TV-panoramas

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kozlova, N. A.; Kokhanov, A. A.; Zubarev, A. E.; Nadezhdina, I. E.; Patratiy, V. D.; Konopikhin, A. A.; Basilevsky, A. T.; Abdrakhimov, A. M.; Oberst, J.; Haase, I.; Jolliff, B. L.; Plescia, J. B.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) system consists of a Wide Angle Camera (WAC) and a Narrow Angle Camera (NAC). NAC images (∼0.5 to 1.7 m/pixel) reveal details of the Luna-21 landing site and the Lunokhod-2 traverse area. We derived a Digital Elevation Model (DEM) and an orthomosaic for the study region using photogrammetric stereo processing of NAC images. The DEM and mosaic allowed us to analyze the topography and morphology of the landing site area and to map the Lunokhod-2 rover route. The total range of topographic elevation along the traverse was found to be less than 144 m, and the rover encountered slopes of up to 20°. With the orthomosaic tied to the lunar reference frame, we derived coordinates of the Lunokhod-2 landing module and of the overnight stop points. We identified the exact rover route by following its tracks and determined its total length to be 39.16 km, more than was estimated during the mission (37 km); this distance stood as the record for planetary robotic rovers for more than 40 years, until recently.
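Turning a mapped track into a traverse length amounts to summing great-circle segments between consecutive waypoints on the lunar sphere. The sketch below assumes a spherical Moon of radius 1737.4 km and illustrative waypoint coordinates; the paper's actual track data and processing are not reproduced here.

```python
import math

LUNAR_RADIUS_M = 1_737_400.0  # mean lunar radius (assumed spherical)

def segment_m(p1, p2):
    """Haversine distance between two (lat, lon) points given in degrees."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * LUNAR_RADIUS_M * math.asin(math.sqrt(a))

def traverse_length_m(track):
    """Total along-track length of an ordered list of (lat, lon) waypoints."""
    return sum(segment_m(a, b) for a, b in zip(track, track[1:]))
```

Because the rover's route was traced pixel by pixel in the orthomosaic, a dense waypoint list fed through a function like this yields the total length directly; the 39.16 km result versus the 37 km mission estimate shows how much the finer LROC-based mapping changed the figure.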

  20. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is taking shape. Since true presence capturing is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is must have several camera modules, several microphones, and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras: features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras adds a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view, including consideration of how well current measurement methods can be applied to presence capture cameras.

  1. Initial results and field applications of a polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Olsen, R. Chris; Eyler, Michael; Puetz, Angela M.; Esterline, Chelsea

    2009-08-01

    The SALSA linear Stokes polarization camera from Bossa Nova Technologies (520-550 nm) uses an electronically rotated polarization filter to measure four states of polarization nearly simultaneously. Some initial imagery results are presented. Preliminary analysis results indicate that the intensity and degree of linear polarization (DOLP) information can be used for image classification purposes. The DOLP images also show that the camera has a good ability to distinguish asphalt patches of different ages. These positive results and the relative simplicity of the camera system show the camera's potential for field applications.

  2. New insight into lunar impact melt mobility from the LRO camera

    USGS Publications Warehouse

    Bray, Veronica J.; Tornabene, Livio L.; Keszthelyi, Laszlo P.; McEwen, Alfred S.; Hawke, B. Ray; Giguere, Thomas A.; Kattenhorn, Simon A.; Garry, William B.; Rizk, Bashar; Caudill, C.M.; Gaddis, Lisa R.; van der Bogert, Carolyn H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact melts is surprisingly complex. We present evidence for multi-stage influx of impact melt into flow lobes and crater floor ponds. Our volume and cooling time estimates for the post-emplacement melt movements noted in LROC images suggest that new flows can emerge from melt ponds an extended time period after the impact event.

  3. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, using both infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed objects to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explored different properties in the reconstruction of the scenes, including light, color, texture, shape, and viewpoint. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depths of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depths of more complex objects, triangulation was used; for this, the intrinsic and extrinsic camera parameters were calculated using geometric camera calibration. The methods above were implemented in Matlab. The technique presented here also lets us simulate short videos by reconstructing a sequence of scenes separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used; its low-bandwidth perception-based features include edges and motion.

  4. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection.

  5. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods also exist to evaluate camera speed. However, the speed (rapidity) metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper continues a previous benchmarking work, expanded with visual noise measurement and updated for the latest mobile phone versions.
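A single benchmarking score of the kind the paper proposes can be sketched as a weighted average of range-normalized metrics. The metric names, ranges, and weights below are illustrative assumptions, not values from the paper.

```python
# Hypothetical combined benchmark: normalize each metric over an expected
# range (clamped to [0, 1]), then take a weighted average.
def combined_score(metrics, ranges, weights):
    """For lower-is-better metrics (e.g. shot-to-shot delay), list the
    range as (worst, best) so the normalization flips automatically."""
    total = 0.0
    for name, value in metrics.items():
        lo, hi = ranges[name]
        norm = (value - lo) / (hi - lo)
        total += weights[name] * min(max(norm, 0.0), 1.0)
    return total / sum(weights.values())

score = combined_score(
    {"snr_db": 30.0, "shot_to_shot_s": 0.5},
    {"snr_db": (20.0, 40.0), "shot_to_shot_s": (2.0, 0.2)},  # delay: lower is better
    {"snr_db": 2.0, "shot_to_shot_s": 1.0},
)
```

The weighting is the design decision such a benchmark must defend: it encodes how much a quality gain is worth relative to a speed gain when ranking cameras.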

  6. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches face many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively within the domain of image forensics, and classifies ongoing developments in the area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics

  7. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  8. Extraction of cloud statistics from whole sky imaging cameras

    SciTech Connect

    Kegelmeyer, W.P. Jr.

    1994-03-01

    Computer codes have been developed to extract basic cloud statistics from whole sky imaging (WSI) cameras. This report documents, on an algorithmic level, the steps and processes underlying these codes. Appendices comment on code details and on how to adapt to future changes in either the source camera or the host computer.

  9. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Cannon…

  10. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances it is becoming easier to image spectra in a laboralory setting on a shoestring budget and get immediale results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Cannon point-and-shoot auto focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  11. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate the camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used with the quality metrics even if the camera speed has become more and more important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed related metrics of a mobile phone's camera system are collected from the standards and papers and also novel speed metrics are identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones in the market. The measurements are done towards application programming interface of different operating system. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results of mobile phone camera systems in the market. The paper defines also a proposal of combined benchmarking metrics, which includes both quality and speed parameters.

  12. A Multimodality Hybrid Gamma-Optical Camera for Intraoperative Imaging

    PubMed Central

    Lees, John E.; Bugby, Sarah L.; Alqahtani, Mohammed S.; Jambi, Layal K.; Dawood, Numan S.; McKnight, William R.; Ng, Aik H.; Perkins, Alan C.

    2017-01-01

    The development of low profile gamma-ray detectors has encouraged the production of small field of view (SFOV) hand-held imaging devices for use at the patient bedside and in operating theatres. Early development of these SFOV cameras was focussed on a single modality—gamma ray imaging. Recently, a hybrid system—gamma plus optical imaging—has been developed. This combination of optical and gamma cameras enables high spatial resolution multi-modal imaging, giving a superimposed scintigraphic and optical image. Hybrid imaging offers new possibilities for assisting clinicians and surgeons in localising the site of uptake in procedures such as sentinel node detection. The hybrid camera concept can be extended to a multimodal detector design which can offer stereoscopic images, depth estimation of gamma-emitting sources, and simultaneous gamma and fluorescence imaging. Recent improvements to the hybrid camera have been used to produce dual-modality images in both laboratory simulations and in the clinic. Hybrid imaging of a patient who underwent thyroid scintigraphy is reported. In addition, we present data which shows that the hybrid camera concept can be extended to estimate the position and depth of radionuclide distribution within an object and also report the first combined gamma and Near-Infrared (NIR) fluorescence images. PMID:28282957

  13. Reading Watermarks from Printed Binary Images with a Camera Phone

    NASA Astrophysics Data System (ADS)

    Pramila, Anu; Keskinarkaus, Anja; Seppänen, Tapio

    In this paper, we propose a method for reading a watermark from a printed binary image with a camera phone. The watermark is a small binary image which is protected with (15, 11) Hamming error coding and embedded in the binary image by utilizing flippability scores of the pixels and block based relationships. The binary image is divided into blocks and fixed number of bits is embedded in each block. A frame is added around the image in order to overcome 3D distortions and lens distortions are corrected by calibrating the camera. The results obtained are encouraging and when the images were captured freehandedly by rotating the camera approximately -2 - 2 degrees, the amount of fully recovered watermarks was 96.3%.

  14. Altered Images: The Camera, Computer, & Beyond.

    ERIC Educational Resources Information Center

    Stieglitz, Mary

    The speech contained in this document originally accompanied a slide presentation on the altered photographic image. The discussion examines the links between photographic tradition and contemporary visual imaging, the current transformation of visual imaging by the computer, and the effects of digital imaging on visual arts. Photography has a…

  15. Autofluorescence imaging of basal cell carcinoma by smartphone RGB camera

    NASA Astrophysics Data System (ADS)

    Lihachev, Alexey; Derjabo, Alexander; Ferulova, Inesa; Lange, Marta; Lihacova, Ilze; Spigulis, Janis

    2015-12-01

    The feasibility of smartphones for in vivo skin autofluorescence imaging has been investigated. Filtered autofluorescence images from the same tissue area were periodically captured by a smartphone RGB camera with subsequent detection of fluorescence intensity decreasing at each image pixel for further imaging the planar distribution of those values. The proposed methodology was tested clinically with 13 basal cell carcinoma and 1 atypical nevus. Several clinical cases and potential future applications of the smartphone-based technique are discussed.

  16. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing the photoelectric image acquisition equipment, it needs to verify the function and performance. In order to make the photoelectric device recall the image data formerly in the process of debugging and testing, a design scheme of the camera simulator is presented. In this system, with FPGA as the control core, the image data is saved in NAND flash trough USB2.0 bus. Due to the access rate of the NAND, flash is too slow to meet the requirement of the sytsem, to fix the problem, the pipeline technique and the High-Band-Buses technique are applied in the design to improve the storage rate. It reads image data out from flash in the control logic of FPGA and output separately from three different interface of Camera Link, LVDS and PAL, which can provide image data for photoelectric image acquisition equipment's debugging and algorithm validation. However, because the standard of PAL image resolution is 720*576, the resolution is different between PAL image and input image, so the image can be output after the resolution conversion. The experimental results demonstrate that the camera simulator outputs three format image sequence correctly, which can be captured and displayed by frame gather. And the three-format image data can meet test requirements of the most equipment, shorten debugging time and improve the test efficiency.

  17. Measuring magnification of virtual images using digital cameras

    NASA Astrophysics Data System (ADS)

    Kutzner, Mickey D.; Snelling, Samantha

    2016-11-01

    The concept of virtual images and why they lead to angular rather than linear magnification of optical images can be vague for students. A particularly straightforward method of obtaining quantitative magnification for simple magnifiers, microscopes, and telescopes uses the technology that students carry in their backpacks—their camera phones.

  18. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    SciTech Connect

    Ralph James

    2009-10-27

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  19. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  20. The algorithm for generation of panoramic images for omnidirectional cameras

    NASA Astrophysics Data System (ADS)

    Lazarenko, Vasiliy P.; Yarishev, Sergey; Korotaev, Valeriy

    2015-05-01

    The omnidirectional cameras are used in areas where large field-of-view is important. Omnidirectional cameras can give a complete view of 360° along one of direction. But the distortion of omnidirectional cameras is great, which makes omnidirectional image unreadable. One way to view omnidirectional images in a readable form is the generation of panoramic images from omnidirectional images. At the same time panorama keeps the main advantage of the omnidirectional image - a large field of view. The algorithm for generation panoramas from omnidirectional images consists of several steps. Panoramas can be described as projections onto cylinders, spheres, cubes, or other surfaces that surround a viewing point. In practice, the most commonly used cylindrical, spherical and cubic panoramas. So at the first step we describe panoramas field-of-view by creating virtual surface (cylinder, sphere or cube) from matrix of 3d points in virtual object space. Then we create mapping table by finding coordinates of image points for those 3d points on omnidirectional image by using projection function. At the last step we generate panorama pixel-by-pixel image from original omnidirectional image by using of mapping table. In order to find the projection function of omnidirectional camera we used the calibration procedure, developed by Davide Scaramuzza - Omnidirectional Camera Calibration Toolbox for Matlab. After the calibration, the toolbox provides two functions which express the relation between a given pixel point and its projection onto the unit sphere. After first run of the algorithm we obtain mapping table. This mapping table can be used for real time generation of panoramic images with minimal cost of CPU time.

  1. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cellphone camera is set to 'auto' mode. As the used framework requires that the individual attributes to be relatively perceptually orthogonal, in the pilot study, the attributes used are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project starting with the definition of the individual attributes, up to their quantification in JNDs of quality, a requirement of the multivariate formalism, therefore both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras, rarely exhibit radial behavior, therefore a radial mapping/modeling cannot be used in this case.

  2. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

  3. Photogrammetric Processing of Apollo 15 Metric Camera Oblique Images

    NASA Astrophysics Data System (ADS)

    Edmundson, K. L.; Alexandrov, O.; Archinal, B. A.; Becker, K. J.; Becker, T. L.; Kirk, R. L.; Moratto, Z. M.; Nefian, A. V.; Richie, J. O.; Robinson, M. S.

    2016-06-01

    The integrated photogrammetric mapping system flown on the last three Apollo lunar missions (15, 16, and 17) in the early 1970s incorporated a Metric (mapping) Camera, a high-resolution Panoramic Camera, and a star camera and laser altimeter to provide support data. In an ongoing collaboration, the U.S. Geological Survey's Astrogeology Science Center, the Intelligent Robotics Group of the NASA Ames Research Center, and Arizona State University are working to achieve the most complete cartographic development of Apollo mapping system data into versatile digital map products. These will enable a variety of scientific/engineering uses of the data including mission planning, geologic mapping, geophysical process modelling, slope dependent correction of spectral data, and change detection. Here we describe efforts to control the oblique images acquired from the Apollo 15 Metric Camera.

  4. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area, of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will exmine the optimized camera settings for use on Proto-MPEX. This work was supported by the US. D.O.E. contract DE-AC05-00OR22725.

  5. Air Pollution Determination Using a Surveillance Internet Protocol Camera Images

    NASA Astrophysics Data System (ADS)

    Chow Jeng, C. J.; Hwee San, Hslim; Matjafri, M. Z.; Abdullah, Abdul, K.

    Air pollution has long been a problem in the industrial nations of the West It has now become an increasing source of environmental degradation in the developing nations of east Asia Malaysia government has built a network to monitor air pollution But the cost of these networks is high and limits the knowledge of pollutant concentration to specific points of the cities A methodology based on a surveillance internet protocol IP camera for the determination air pollution concentrations was presented in this study The objective of this study was to test the feasibility of using IP camera data for estimating real time particulate matter of size less than 10 micron PM10 in the campus of USM The proposed PM10 retrieval algorithm derived from the atmospheric optical properties was employed in the present study In situ data sets of PM10 measurements and sun radiation measurements at the ground surface were collected simultaneously with the IP camera images using a DustTrak meter and a handheld spectroradiometer respectively The digital images were separated into three bands namely red green and blue bands for multispectral algorithm calibration The digital number DN of the IP camera images were converted into radiance and reflectance values After that the reflectance recorded by the digital camera was subtracted by the reflectance of the known surface and we obtained the reflectance caused by the atmospheric components The atmospheric reflectance values were used for regression analysis Regression technique was employed to determine suitable

  6. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II which was the first digital aerial mapping camera using a single ultra large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design with one large monolithic PAN sensor and four multi spectral camera heads for R,G, B and NIR. For the first time a 391 Megapixel large CMOS sensor had been used as PAN chromatic sensor, which is an industry record. Along with CMOS technology goes a range of technical benefits. The dynamic range of the CMOS sensor is approx. twice the range of a comparable CCD sensor and the signal to noise ratio is significantly better than with CCDs. Finally results from the first DMC III customer installations and test flights will be presented and compared with other CCD based aerial sensors.

  7. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logics allows for four different sub windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the Visible and NIR range. The Cheetah camera has max 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameralinkTM interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  8. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it Full Frame CCD, and Interline CCD, a CMOS sensor or the newer Foveon buried photodiode sensors. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging system currently on the market.

  9. Opposition effect of the Moon from LROC WAC data

    NASA Astrophysics Data System (ADS)

    Velikodsky, Yu. I.; Korokhin, V. V.; Shkuratov, Yu. G.; Kaydash, V. G.; Videen, Gorden

    2016-09-01

    LROC WAC images acquired in 5 bands of the visible spectral range were used to study the opposition effect for two mare and two highland regions near the lunar equator. Opposition phase curves were extracted from the images containing the opposition by separating the phase-curve effect from the albedo pattern by comparing WAC images at different phase angles (from 0° to 30°). Akimov's photometric function and the NASA Digital Terrain Model GLD100 were used in the processing. It was found that phase-curve slopes at small phase angles directly correlate with albedo, while at larger phase angles, they are anti-correlated. We suggest a parameter to characterize the coherent-backscattering component of the lunar opposition surge, which is defined as the maximum phase angle for which the opposition-surge slope increases with growing albedo. The width of the coherent-backscattering opposition effect varies from approximately 1.2° for highlands in red light to 3.9° for maria in blue light. The parameter depends on albedo, which is in agreement with the coherent-backscattering theory. The maximum amplitude of the coherent opposition effect is estimated to be near 8%. Maps of albedo and phase-curve slope at phase angles larger than those, at which the coherent-backscattering occurs, were built for the areas under study. Absolute calibration of WAC images was compared with Earth-based observations: the WAC-determined albedo is very close to the mean lunar albedo calculated using available Earth-based observations.

  10. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product - both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images.[1] Pohl, Cle, and John L. Van Genderen. "Review article multisensor image fusion in remote sensing: concepts, methods and applications." International journal of remote sensing 19.5 (1998): 823-854.[2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens 70.6 (2004): 657-661.[3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." Archives, XXIII ISPRS Congress Archives (2016).

  11. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even one bit change in the image hash will cause the image hash to be totally different from the secure hash.

  12. Upgraded cameras for the HESS imaging atmospheric Cherenkov telescopes

    NASA Astrophysics Data System (ADS)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gérard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-François; Gräber, Tobias; Hinton, James; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; de Naurois, Mathieu; Nayman, Patrick; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, François

    2016-08-01

    The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes, sensitive to cosmic gamma rays of energies between 30 GeV and several tens of TeV. Four of them started operations in 2003 and their photomultiplier tube (PMT) cameras are currently undergoing a major upgrade, with the goals of improving the overall performance of the array and reducing the failure rate of the ageing systems. With the exception of the 960 PMTs, all components inside the camera have been replaced: these include the readout and trigger electronics, the power, ventilation and pneumatic systems and the control and data acquisition software. New designs and technical solutions have been introduced: the readout makes use of the NECTAr analog memory chip, which samples and stores the PMT signals and was developed for the Cherenkov Telescope Array (CTA). The control of all hardware subsystems is carried out by an FPGA coupled to an embedded ARM computer, a modular design which has proven to be very fast and reliable. The new camera software is based on modern C++ libraries such as Apache Thrift, ØMQ and Protocol buffers, offering very good performance, robustness, flexibility and ease of development. The first camera was upgraded in 2015, the other three cameras are foreseen to follow in fall 2016. We describe the design, the performance, the results of the tests and the lessons learned from the first upgraded H.E.S.S. camera.

  13. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12_m) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors with readout electronics are becoming more and more a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirement for imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to the more traditional test methods like minimum resolvable temperature difference (MRTD) which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis and optical parameters like e.g. effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors, electrical interfaces and last but not least the suitability for fully automated measurements in mass production.

  14. Spatial calibration of full stokes polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2014-05-01

    Objective and background: We present a new method for the calibration of Bossa Nova Technologies' full-Stokes, passive polarization imaging camera SALSA. The SALSA camera is a division-of-time imaging polarimeter that uses custom-made ferroelectric liquid crystals (FLCs) mounted directly in front of the camera's CCD. The regular calibration process, based on Data Reduction Matrix (DRM) calculation, assumes perfect spatial uniformity of the FLC. However, the alignment of FLC molecules can be disturbed by external constraints such as mechanical stress from the fixture, temperature variations, and humidity. This disarray of the FLC molecular alignment appears as spatial non-uniformity. With typical DRM condition numbers of 2 to 5, the resulting DOLP and DOCP variations over the field of view can reach 10%. Spatial non-uniformity of commercially available FLC products is the limiting factor for achieving reliable performance over the camera's whole field of view. We developed a field calibration technique based on mapping the CCD into areas of interest and then applying the DRM calculations to those individual areas. Results: First, we provide general background on the SALSA camera's technology, its performance, and its limitations. A detailed analysis of commercially available FLCs is described, particularly the influence of spatial non-uniformity on the Stokes parameters. Then the new calibration technique is presented. Several configurations and parameters are tested: even division of the CCD into square regions, the number of regions, and adaptive regions. Finally, the spatial DRM "stitching" process is described, especially for live calculation and display of Stokes parameters.
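    The per-region DRM idea can be sketched as follows: each CCD region gets its own 4x4 measurement matrix (four FLC analyzer states x four Stokes components), and its DRM is the pseudo-inverse of that matrix. The matrix values and the non-uniformity model below are illustrative assumptions, not SALSA calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 measurement matrix W for one CCD region: rows are the four
# FLC analyzer states, columns weight the Stokes components (S0, S1, S2, S3).
def region_matrix(nonuniformity):
    ideal = np.array([[1.0,  1.0,  0.0,  0.0],
                      [1.0, -1.0,  0.0,  0.0],
                      [1.0,  0.0,  1.0,  0.0],
                      [1.0,  0.0,  0.0,  1.0]]) * 0.5
    # Spatial non-uniformity modeled as a small random perturbation per region.
    return ideal + nonuniformity * rng.normal(size=(4, 4)) * 0.02

# Calibration: each region stores its own DRM (pseudo-inverse of its own W).
regions = {(i, j): region_matrix(nonuniformity=1.0) for i in range(2) for j in range(2)}
drm = {k: np.linalg.pinv(w) for k, w in regions.items()}

# Measurement: four intensities per region -> Stokes vector via that region's DRM.
s_true = np.array([1.0, 0.3, -0.2, 0.1])        # example input Stokes vector
for key, w in regions.items():
    intensities = w @ s_true                     # what the four FLC states record
    s_est = drm[key] @ intensities
    assert np.allclose(s_est, s_true)

dolp = np.hypot(s_true[1], s_true[2]) / s_true[0]  # degree of linear polarization
print(f"DOLP = {dolp:.3f}")
```

    Using one global DRM instead of `drm[key]` would reproduce the field-of-view errors the paper's region-based calibration removes.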

  15. Coincidence ion imaging with a fast frame camera

    SciTech Connect

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced by strong-field dissociative double ionization of methyl iodide.
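    The centroiding step can be illustrated with a minimal sketch (a stand-in for the paper's real-time algorithm, with a made-up frame): threshold the camera frame and compute the intensity-weighted centroid of an ion spot's above-threshold pixels.

```python
# Hypothetical camera frame containing a single phosphor-screen ion spot.
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 2, 3, 0, 0],
    [0, 1, 8, 9, 2, 0],
    [0, 0, 6, 7, 1, 0],
    [0, 0, 0, 0, 0, 0],
]

def centroid(frame, threshold=1):
    """Intensity-weighted centroid of pixels above threshold (sub-pixel position)."""
    total = cx = cy = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                total += v
                cx += v * x
                cy += v * y
    return cx / total, cy / total

x, y = centroid(frame)
print(f"spot at ({x:.2f}, {y:.2f})")
```

    In the full system the summed spot intensity would additionally be matched against PMT peak heights to assign each centroid a time of flight; that correlation step is omitted here.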

  16. A compact gamma camera for biological imaging

    SciTech Connect

    Bradley, E L; Cella, J; Majewski, S; Popov, V; Qian, Jianguo; Saha, M S; Smith, M F; Weisenberger, A G; Welsh, R E

    2006-02-01

    A compact detector, sized particularly for imaging a mouse, is described. The active area of the detector is approximately 46 mm × 96 mm. Two flat-panel Hamamatsu H8500 position-sensitive photomultiplier tubes (PSPMTs) are coupled to a pixelated NaI(Tl) scintillator which views the animal through a copper-beryllium (CuBe) parallel-hole collimator specially designed for ¹²⁵I. Although the PSPMTs have insensitive areas at their edges and there is a physical gap, corrections for scintillation light collection at the junction between the two tubes result in a uniform response across the entire rectangular area of the detector. The system described has been developed to optimize both sensitivity and resolution for in-vivo imaging of small animals injected with iodinated compounds. We demonstrate an in-vivo application of this detector, particularly to SPECT, by imaging mice injected with approximately 10-15 µCi of ¹²⁵I.

  17. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  18. Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera.

    PubMed

    Shaw, Joseph; Nugent, Paul; Pust, Nathan; Thurairajah, Brentha; Mizutani, Kohei

    2005-07-25

    An uncooled microbolometer-array thermal infrared camera has been incorporated into a remote sensing system for radiometric sky imaging. The radiometric calibration is validated and improved through direct comparison with spectrally integrated data from the Atmospheric Emitted Radiance Interferometer (AERI). With the improved calibration, the Infrared Cloud Imager (ICI) system routinely obtains sky images with radiometric uncertainty less than 0.5 W/(m² sr) for extended deployments in challenging field environments. We demonstrate the infrared cloud imaging technique with still and time-lapse imagery of clear and cloudy skies, including stratus, cirrus, and wave clouds.

  19. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.
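    The hash-and-compare core of the scheme can be sketched with Python's hashlib. SHA-256 stands in for the patent's unspecified "predetermined algorithm", and the RSA sign/decrypt steps are represented only by comments, since they require a real key pair; here the decrypted "secure image hash" is kept directly.

```python
import hashlib

def image_hash(image_bytes: bytes) -> bytes:
    # The camera computes a hash of the image file (SHA-256 as a stand-in).
    return hashlib.sha256(image_bytes).digest()

# In the real device the camera encrypts this hash with its embedded private
# key to form the digital signature; the authenticating apparatus decrypts the
# signature with the public key, yielding the secure image hash used below.
original = b"\x89PNG...image file bytes..."   # hypothetical image file content
secure_hash = image_hash(original)            # what signature verification yields

# Authentication: recompute the hash from the image file and compare.
assert image_hash(original) == secure_hash           # untampered -> match
assert image_hash(original + b"!") != secure_hash    # any alteration -> mismatch
print("image file authenticated")
```

    The asymmetric-key step is what prevents a forger from producing a matching signature for an altered file; the hash comparison alone only detects accidental or unsigned changes.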

  20. Single-shot camera position estimation by crossed grating imaging

    NASA Astrophysics Data System (ADS)

    Juarez-Salazar, Rigoberto; Gaxiola, Leopoldo N.; Diaz-Ramirez, Victor H.

    2017-01-01

    A simple method to estimate the position of a camera device with respect to a reference plane is proposed. The method utilizes a crossed grating in the reference plane and exploits the coordinate transformation induced by the perspective projection. If the focal length is available, the position of the camera can be estimated from a single shot. Otherwise, the focal length can first be estimated from a few frames captured at different known displacements. The theoretical principles of the proposed method are given, and the functionality of the approach is demonstrated by correcting perspective-distorted images. The proposed method is computationally efficient and highly appropriate for use in dynamic measurement systems.
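    The plane-to-image coordinate transformation induced by perspective projection is a homography; a minimal direct-linear-transform (DLT) sketch of estimating it from grating crossings is shown below. This is a generic illustration of the underlying geometry, not the authors' exact estimation procedure, and the coordinates are invented.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4+ point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return h / h[2, 2]

def project(h, pt):
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]             # perspective division

# Hypothetical grating crossings (reference plane, in grating periods) and
# their perspective-distorted image coordinates (pixels).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10.0, 12.0), (52.0, 15.0), (48.0, 60.0), (8.0, 55.0)]
H = homography_dlt(src, dst)
print(project(H, (0.5, 0.5)))       # image position of the grating-cell center
```

    With a known focal length, the homography can be decomposed into the rotation and translation of the camera relative to the plane, which is the position estimate the paper derives.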

  1. [Image diagnostic of the retina with fundus cameras].

    PubMed

    Koschmieder, Ingo; Müller, Lothar

    2007-01-01

    Imaging the retina of the human eye is an essential aid in medical diagnosis. Photographing the ocular fundus is technically non-trivial because of the optical properties of the eye. The established devices for obtaining such images are so-called fundus cameras with digital documentation capabilities. Newer procedures work with infrared illumination and do not require pupil-dilating measures for the patient. The quality of the diagnostic findings depends fundamentally on the optical design of the fundus camera on the one hand and, on the other, on limitations caused by the eye itself, which is always part of the beam path. Both influences define the attainable results. Special applications deal with stereoscopic imaging of the retina or with spectral reflection characteristics.

  2. Fast Source Camera Identification Using Content Adaptive Guided Image Filter.

    PubMed

    Zeng, Hui; Kang, Xiangui

    2016-03-01

    Source camera identification (SCI) is an important topic in image forensics. One of the most effective fingerprints for linking an image to its source camera is the sensor pattern noise, which is estimated as the difference between an image and its denoised version. It is widely believed that the performance of sensor-based SCI heavily relies on the denoising filter used. This study proposes a novel sensor-based SCI method using a content adaptive guided image filter (CAGIF). Thanks to the low-complexity nature of the CAGIF, the proposed method is much faster than the state-of-the-art methods, which is a big advantage considering the potential real-time application of SCI. Despite the advantage in speed, experimental results also show that the proposed method can achieve comparable or better performance than the state-of-the-art methods in terms of accuracy.
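    The fingerprint-matching pipeline can be sketched as follows. A 3x3 box filter stands in for the paper's CAGIF denoiser, the PRNU patterns and flat-field frames are synthetic, and the reference fingerprint would normally be averaged over many images; only the residual-and-correlate logic is meant to carry over.

```python
import numpy as np

rng = np.random.default_rng(1)

def denoise(img):
    # 3x3 box filter as a simple stand-in for the content adaptive guided
    # image filter (CAGIF) used in the paper.
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def residual(img):
    # Sensor pattern noise estimate: image minus its denoised version.
    return img - denoise(img)

def ncc(a, b):
    """Normalized cross-correlation between two noise residuals."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Hypothetical PRNU patterns for two cameras, imprinted on flat-field frames.
prnu_a = rng.normal(scale=0.02, size=(64, 64))
prnu_b = rng.normal(scale=0.02, size=(64, 64))
flat = np.full((64, 64), 0.5)
shot = lambda: rng.normal(scale=0.005, size=(64, 64))   # per-frame shot noise

reference_a = residual(flat * (1 + prnu_a) + shot())    # camera A fingerprint
query_a = residual(flat * (1 + prnu_a) + shot())        # new image from camera A
query_b = residual(flat * (1 + prnu_b) + shot())        # new image from camera B

print(ncc(reference_a, query_a), ncc(reference_a, query_b))
```

    The query from the same camera correlates strongly with the reference fingerprint while the other camera's query does not, which is the decision rule behind sensor-based SCI.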

  3. A novel SPECT camera for molecular imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir

    2011-10-01

    The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.

  4. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    NASA Astrophysics Data System (ADS)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the Canon EOS 350D digital camera, often used for photogrammetric 3D digitization and measurement of industrial and construction-site objects. The calibration yielded data on the optical and electronic parameters influencing image distortion, such as the correction of the principal point, the focal length of the objective, and radial symmetric and asymmetric distortions. The calibration was performed with the Tcc software, which implements Chebyshev polynomials, using a special test field with marks whose coordinates are precisely known. The main task of the research was to determine how the camera calibration parameters influence image processing, i.e. the creation of the geometric model, the results of the triangulation calculations, and stereo-digitization. Two photogrammetric projects were created for this task: the first used non-corrected images, and the second used images corrected for the optical errors of the camera obtained during calibration. The results of the image-processing analysis are shown in figures and tables, and conclusions are given.
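    The radial-distortion part of such a calibration model can be sketched with the standard polynomial correction; the principal-point offset and the coefficients k1, k2 below are hypothetical placeholders, not the Canon EOS 350D values obtained in the paper.

```python
# Minimal sketch of radial distortion correction about the principal point.
def undistort(x, y, cx=0.12, cy=-0.08, k1=-2.1e-4, k2=3.5e-7):
    """Correct radially distorted image coordinates (units: mm on the sensor)."""
    xd, yd = x - cx, y - cy                 # shift to the principal point
    r2 = xd * xd + yd * yd                  # squared radial distance
    scale = 1 + k1 * r2 + k2 * r2 * r2      # polynomial radial model
    return cx + xd * scale, cy + yd * scale

print(undistort(10.0, 8.0))                 # corrected position of an off-axis mark
```

    Applying (or omitting) this correction before triangulation is exactly the difference between the two photogrammetric projects the paper compares.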

  5. Measuring SO2 ship emissions with an ultraviolet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2014-05-01

    Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where SO2 emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) from a single camera. Despite the ease of use and the ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system can only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method for monitoring ship emissions for regulatory purposes. A dual-camera system, or a single dual-filter camera, is required in order to properly correct for the effects of particulates in ship plumes.
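    The retrieval chain behind such UV-camera measurements can be sketched in outline: apparent absorbance from plume and background intensities (Beer-Lambert), column density via a calibration-cell coefficient, and an emission rate from the plume speed. All of the numbers below are illustrative assumptions, not values from the paper, and the cross-plume integration is collapsed to a single pixel column.

```python
import math

def absorbance(i_plume, i_background):
    """Apparent SO2 absorbance of one pixel (Beer-Lambert)."""
    return -math.log(i_plume / i_background)

def column_density(a, k=2.0e18):
    """Molecules/cm^2 per unit absorbance, from a hypothetical cell calibration."""
    return k * a

# One pixel column across the ship plume:
a = absorbance(i_plume=870.0, i_background=1000.0)
cd = column_density(a)                 # molecules/cm^2

# Emission rate ~ column density x pixel width x plume speed x molecular mass.
pixel_width_cm = 50.0                  # assumed ground sampling distance
plume_speed_cm_s = 300.0               # 3 m/s, e.g. from camera-derived tracking
mass_so2 = 64.066 / 6.022e23           # grams per SO2 molecule
rate_g_s = cd * pixel_width_cm * plume_speed_cm_s * mass_so2
print(f"absorbance={a:.3f}, rate~{rate_g_s:.2f} g/s per pixel column")
```

    Summing this per-column rate across the full plume cross-section gives the total emission rate; the particulate interference discussed in the abstract biases the absorbance step.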

  6. Distributed imaging using an array of compressive cameras

    NASA Astrophysics Data System (ADS)

    Ke, Jun; Shankar, Premchandra; Neifeld, Mark A.

    2009-01-01

    We describe a distributed computational imaging system that employs an array of feature specific sensors, also known as compressive imagers, to directly measure the linear projections of an object. Two different schemes for implementing these non-imaging sensors are discussed. We consider the task of object reconstruction and quantify the fidelity of reconstruction using the root mean squared error (RMSE) metric. We also study the lifetime of such a distributed sensor network. The sources of energy consumption in a distributed feature specific imaging (DFSI) system are discussed and compared with those in a distributed conventional imaging (DCI) system. A DFSI system consisting of 20 imagers collecting DCT, Hadamard, or PCA features has a lifetime of 4.8× that of the DCI system when the noise level is 20% and the reconstruction RMSE requirement is 6%. To validate the simulation results we emulate a distributed computational imaging system using an experimental setup consisting of an array of conventional cameras.
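    The feature-specific measurement model can be sketched as follows: each compressive sensor records linear projections y = P x of the object, and the object is reconstructed from the features. A random projection matrix stands in for the paper's DCT/Hadamard/PCA bases, and plain least squares stands in for its reconstruction operator; all sizes and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 64, 32                        # object dimension, number of features (m < n)
x = rng.uniform(size=n)              # hypothetical object, flattened to a vector
P = rng.normal(size=(m, n))          # feature-specific projection matrix
y = P @ x + rng.normal(scale=0.01, size=m)   # noisy feature measurements

# Minimum-norm least-squares reconstruction from the compressive features.
x_hat, *_ = np.linalg.lstsq(P, y, rcond=None)

# Fidelity metric used in the paper: root mean squared error.
rmse = float(np.sqrt(np.mean((x_hat - x) ** 2)))
print(f"reconstruction RMSE: {rmse:.3f}")
```

    Because m < n the reconstruction is underdetermined; the paper's trained (PCA) features and priors reduce the RMSE well below what this generic sketch achieves.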

  7. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth’s climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, a GPS and inertial navigation system, and an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the Operation IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that has only recently become available. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric; that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is

  8. Characterization of a PET Camera Optimized for Prostate Imaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi, Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, Ronald H.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated prostate camera has the same sensitivity and resolution, less background (fewer randoms and a lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity of a point source in the center is 946 cps/µCi. Spatial resolution is 4 mm FWHM in the central region.

  9. Camera assembly design proposal for SRF cavity image collection

    SciTech Connect

    Tuozzolo, S.

    2011-10-10

    This project seeks to collect images from the inside of a superconducting radio frequency (SRF) large grain niobium cavity during vertical testing. These images will provide information on multipacting and other phenomena occurring in the SRF cavity during these tests. Multipacting, a process that involves an electron buildup in the cavity and concurrent loss of RF power, is thought to be occurring near the cathode in the SRF structure. Images of electron emission in the structure will help diagnose the source of multipacting in the cavity. Multipacting sources may be eliminated with an alteration of geometric or resonant conditions in the SRF structure. Other phenomena, including unexplained light emissions previously discovered at SLAC, may be present in the cavity. In order to effectively capture images of these events during testing, a camera assembly needs to be installed to the bottom of the RF structure. The SRF assembly operates under extreme environmental conditions: it is kept in a dewar in a bath of 2K liquid helium during these tests, is pumped down to ultra-high vacuum, and is subjected to RF voltages. Because of this, the camera needs to exist as a separate assembly attached to the bottom of the cavity. The design of the camera is constrained by a number of factors that are discussed.

  10. Refocusing images and videos with a conventional compact camera

    NASA Astrophysics Data System (ADS)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

    Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography such as portraits and creative photography. Since most existing digital refocusing methods rely on a four-dimensional light field captured by special precisely manufactured devices or on a sequence of images captured by a single camera, existing systems are either too expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 megapixels) images and videos based on a single shot using an easy-to-build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.

  11. Evaluation of thermal imaging cameras used in fire fighting applications

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Bryner, Nelson; Hamins, Anthony

    2004-08-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires. Currently there are no standardized test methods or performance metrics available to the users or manufacturers of these instruments. The Building and Fire Research Laboratory (BFRL) at the National Institute of Standards and Technology (NIST) is developing a testing facility and methods to evaluate the performance of thermal imagers used by fire fighters to search for victims and hot spots in burning structures. The facility will test the performance of currently available imagers and advanced fire detection systems, as well as serve as a test bed for new technology. An evaluation of the performance of different thermal imaging detector technologies under field conditions is also underway. Results of this project will provide a quantifiable physical and scientific basis upon which industry standards for imaging performance, testing protocols and reporting practices related to the performance of thermal imaging cameras can be developed. The background and approach that shape the evaluation procedure for the thermal imagers are the primary focus of this paper.

  12. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera that can be utilized in various space shuttle experiments are described. An image intensifier tube was combined with two brassboard power supplies and used to evaluate night photography in the field. The resulting pictures show field details that would have been indistinguishable to the naked eye or to an ordinary camera.

  13. Calibration of multiview camera with parallel and decentered image sensors

    NASA Astrophysics Data System (ADS)

    Ali-Bey, M.; Moughamir, S.; Manamanni, N.

    2012-03-01

    This paper focuses on the calibration problem of a multi-view shooting system designed for the production of 3D content for auto-stereoscopic visualization. The considered multi-view camera is characterized by coplanar image sensors that are decentered with respect to their corresponding optical axes. Based on the Faugeras-Toscani calibration approach, a calibration method is proposed for a multi-view camera with parallel and decentered image sensors. First, the geometrical model of the shooting system is recalled and some industrial prototypes and shooting simulations are presented. Next, the development of the proposed calibration method is detailed. Finally, simulation results are presented before concluding.

  14. Traffic monitoring with serial images from airborne cameras

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Lachaise, Marie; Schmeer, Elisabeth; Krauss, Thomas; Runge, Hartmut

    The classical means of measuring traffic density and velocity depend on local measurements from induction loops and other on-site instruments. This information does not give the whole picture of the two-dimensional traffic situation. Precise knowledge of the traffic flow over a large area can only be obtained from airborne cameras, or cameras positioned at very high locations (towers, etc.), which provide an up-to-date image of all roads covered. This paper aims to show the potential of using image time series from such cameras to derive traffic parameters on the basis of single-car measurements. To determine precise velocities and other parameters from an image time series, exact geocoding is one of the first requirements for the acquired image data. The methods presented here for determining several traffic parameters for single vehicles and vehicle groups involve recording and evaluating a number of digital or analog aerial images from high altitude and with a large total field of view. Visual and automatic methods for image interpretation are compared. It turns out that the recording frequency of the individual images should be at least 1/3 Hz (for visual interpretation), but is preferably 3 Hz or more, especially for automatic vehicle tracking. The accuracy and potential of the methods are analyzed and presented, as well as the use of a digital road database for improving the tracking algorithm and for integrating the results into further traffic applications. Shortcomings of the methods are given, as well as possible improvements regarding methodology and sensor platform.
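    The single-car velocity measurement reduces to scaling a vehicle's pixel displacement between geocoded frames by the ground sampling distance and the frame interval. A back-of-envelope sketch, with illustrative numbers only:

```python
# Speed of a tracked vehicle from its positions (pixels) in two consecutive
# geocoded frames; GSD and frame interval are hypothetical values.
def velocity_kmh(px_start, px_end, gsd_m=0.5, frame_interval_s=1 / 3):
    dx = px_end[0] - px_start[0]
    dy = px_end[1] - px_start[1]
    dist_m = gsd_m * (dx * dx + dy * dy) ** 0.5   # ground distance traveled
    return dist_m / frame_interval_s * 3.6        # m/s -> km/h

# A car moving 16 pixels along-track between frames at 3 Hz and 0.5 m GSD:
print(f"{velocity_kmh((100, 240), (116, 240)):.1f} km/h")  # -> 86.4 km/h
```

    The 3 Hz preference in the abstract follows directly from this scaling: at lower frame rates the per-frame displacement grows, and automatic trackers lose the correspondence between frames.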

  15. Initial results using an LCD polarization imaging camera

    NASA Astrophysics Data System (ADS)

    Olsen, R. Chris; Eyler, Michael; Puetz, Angela M.; Smith, Phillip

    2009-05-01

    The SALSA camera from Bossa Nova Technologies uses an electronically rotated polarization filter to measure four states of polarization nearly simultaneously. Initial imagery results are presented, with an investigation of polarization invariants, as affected by illumination, sensing geometry, and atmospheric effects. Applications for the system as it is being developed are in change detection and surface characterization. Preliminary results indicate an ability to distinguish new from old asphalt and disturbed earth from undisturbed earth with some image processing.

  16. Source Camera Identification and Blind Tamper Detections for Images

    DTIC Science & Technology

    2007-04-24

    Recognition performance of camera brands was evaluated in pairwise comparisons; classification performance in groups of two is close to 100%. Cited works include a classification method based on feature distributions (Pattern Recognition, vol. 29, pp. 51-59), the Special Issue on Data Hiding (IEEE Transactions on Signal Processing, vol. 41, no. 6), and Popescu and Farid.

  17. Camera system resolution and its influence on digital image correlation

    SciTech Connect

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.
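    The record's central point, that the system MTF is (to first order) the product of the lens and detector MTFs, so the softer component sets the usable spatial resolution, can be sketched with generic textbook stand-ins. The Gaussian lens MTF and sinc pixel MTF below are illustrative models, not the paper's measured curves.

```python
import math

def lens_mtf(f, f_c=80.0):                 # f in cycles/mm; hypothetical lens
    return math.exp(-(f / f_c) ** 2)

def pixel_mtf(f, pitch_mm=0.0055):         # assumed 5.5 um pixel pitch
    x = math.pi * f * pitch_mm
    return 1.0 if x == 0 else abs(math.sin(x) / x)

def system_mtf(f):
    # Cascaded system response: component MTFs multiply.
    return lens_mtf(f) * pixel_mtf(f)

# Spatial frequency where system contrast drops below 10%:
f = 0.0
while system_mtf(f) > 0.10:
    f += 0.5
print(f"system MTF < 10% above ~{f:.1f} cycles/mm")
```

    With these parameters the lens, not the pixel pitch, dominates the roll-off, which is the paper's argument for sizing speckles against the lens resolution rather than the pixel size.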

  18. Camera system resolution and its influence on digital image correlation

    DOE PAGES

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.

  19. Model observers to predict human performance in LROC studies of SPECT reconstruction using anatomical priors

    NASA Astrophysics Data System (ADS)

    Lehovich, Andre; Gifford, Howard C.; King, Michael A.

    2008-03-01

    We investigate the use of linear model observers to predict human performance in a localization ROC (LROC) study. The task is to locate gallium-avid tumors in simulated SPECT images of a digital phantom. Our study is intended to find the optimal strength of smoothing priors incorporating various degrees of anatomical knowledge. Although humans reading the images must perform a search task, our models ignore search by assuming the lesion location is known. We use area under the model ROC curve to predict human area under the LROC curve. We used three models: the non-prewhitening matched filter (NPWMF), the channelized non-prewhitening observer (CNPW), and the channelized Hotelling observer (CHO). All models have access to noise-free reconstructions, which are used to compute the signal template. The NPWMF model does a poor job of predicting human performance. The CNPW and CHO models do somewhat better, but still do not qualitatively capture the human results. None of the models accurately predicts the smoothing strength that maximizes human performance.
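    The location-known NPWMF observer can be sketched as follows: the test statistic is the inner product of the signal template with the image, and ROC area is estimated from the statistic's distributions over lesion-present and lesion-absent images. The template, lesion amplitude, and white-noise background are illustrative stand-ins for the study's SPECT reconstructions.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 16
template = np.zeros((n, n))
template[6:10, 6:10] = 1.0                 # hypothetical lesion template (location known)

def npwmf(image):
    """Non-prewhitening matched filter statistic: t = s^T g."""
    return float((template * image).sum())

# Lesion-present and lesion-absent images with white background noise.
present = [npwmf(0.3 * template + rng.normal(size=(n, n))) for _ in range(200)]
absent = [npwmf(rng.normal(size=(n, n))) for _ in range(200)]

# Area under the ROC curve via the Wilcoxon-Mann-Whitney statistic.
auc = float(np.mean([[p > a for a in absent] for p in present]))
print(f"NPWMF AUC ~ {auc:.2f}")
```

    The CNPW and CHO observers differ only in applying channel filters (and, for the CHO, noise prewhitening) before the inner product; in correlated SPECT noise those steps are what separate the three models' predictions.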

  20. Mini-RF and LROC observations of mare crater layering relationships

    NASA Astrophysics Data System (ADS)

    Stickle, A. M.; Patterson, G. W.; Cahill, J. T. S.; Bussey, D. B. J.

    2016-07-01

    The lunar maria cover approximately 17% of the Moon's surface. Discerning discrete subsurface layers in the mare provides some constraints on thickness and volume estimates of mare volcanism. Multiple types of data and measurement techniques allow probing the subsurface and provide insights into these layers, including detailed examination of impact craters, mare pits and sinuous rilles, and radar sounders. Unfortunately, radar sounding involves many uncertainties about the material properties of the lunar surface that may influence estimates of layer depth and thickness. Because impact ejecta blankets distribute material from depth onto the surface, their detailed examination provides a reliable way to examine deeper material using orbital instruments such as cameras, spectrometers, or imaging radars. Here, we utilize Miniature Radio Frequency (Mini-RF) data to investigate the scattering characteristics of the ejecta blankets of young lunar craters. We use Circular Polarization Ratio (CPR) information from twenty-two young, fresh lunar craters to examine how the scattering behavior changes as a function of radius from the crater rim. Observations across a range of crater sizes and relative ages exhibit significant diversity within mare regions. Five of the examined craters exhibit profiles with no shelf of constant CPR near the crater rim. Comparing these CPR profiles with LROC imagery shows that the magnitude of the CPR may be an indication of crater degradation state; this may manifest differently at radar wavelengths than at optical wavelengths. Comparisons of radar and optical data also suggest relationships between subsurface stratigraphy and structure in the mare and the block size of the material found within the ejecta blanket. Of the examined craters, twelve have shelves of approximately constant CPR as well as discrete layers outcropping in the subsurface, and nine fall along a trend line when comparing shelf-width with thickness of subsurface layers. These

  1. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g., a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera, utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene, and processes the electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each taking as inputs the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image from a single pulse of light, or alternatively can generate subsequent 3-D images with each additional pulse of light.
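    The per-pixel range computation can be illustrated with the common four-bucket demodulation scheme, in which four modulator outputs correspond to reference phase delays of 0°, 90°, 180°, and 270°. This is a generic sketch of phase-sensitive ranging, not the specific circuit claimed in the patent.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_phases(m, f_mod):
    """Recover range from four correlation samples m[0..3] measured with
    reference phase delays of 0, 90, 180, and 270 degrees."""
    phase = np.arctan2(m[1] - m[3], m[0] - m[2]) % (2.0 * np.pi)
    return C * phase / (4.0 * np.pi * f_mod)

# Synthetic check: a target at 5 m with 10 MHz modulation
# (unambiguous range C / (2 f_mod) = 15 m).
f_mod, true_range = 10e6, 5.0
phi = 4.0 * np.pi * f_mod * true_range / C
m = np.array([np.cos(phi - k * np.pi / 2.0) for k in range(4)])
est = range_from_phases(m, f_mod)
```

    The same four samples also yield intensity (from the demodulation amplitude), which is why a single pulse can suffice for a full 3-D image.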

  2. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images that often suffer from warping and perspective distortions, which deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology that compensates for undesirable document image distortions with the aim of improving the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface onto a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, and the transformation does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to correct the remaining local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure computed by a semi-automatic procedure.

  3. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images contain abundant complex texture and contour information, so projecting them onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a paired-basis posttransform is applied to the DWT coefficients; the pair consists of a DCT basis and a Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach, and the posttransform is considered the sparse representation stage of CS. The posttransform coefficients are then resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
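    The lp-norm selection step can be sketched as follows: apply both candidate bases to a block of coefficients and keep the one whose output has the smaller l_p quasi-norm (p < 1), i.e. the sparser representation. The block length, p value, and normalization here are illustrative guesses, not the paper's exact parameters.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def best_posttransform(block, p=0.5):
    """Choose between DCT and Hadamard post-transforms by comparing the
    l_p quasi-norms of their coefficients (smaller norm = sparser)."""
    n = block.size                           # must be a power of two
    c_dct = dct(block, norm='ortho')
    c_had = hadamard(n) @ block / np.sqrt(n) # orthonormalized Hadamard
    norms = {'dct': np.sum(np.abs(c_dct) ** p),
             'hadamard': np.sum(np.abs(c_had) ** p)}
    return min(norms, key=norms.get)

# A smooth ramp is represented more sparsely by the DCT.
choice = best_posttransform(np.linspace(0.0, 1.0, 8))
```

    In the paper's scheme this decision would be made per block of DWT coefficients, trading a little side information for a sparser representation.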

  4. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects of extensive grazing on biodiversity in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat-monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the north of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m) populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions, and

  5. The Atlases of Vesta derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2013-12-01

    During its two HAMO (High Altitude Mapping Orbit) phases in 2011 and 2012, the Dawn Framing Camera acquired about 6,000 clear-filter images with a resolution of about 60 m/pixel. We combined these images into a global ortho-rectified mosaic of Vesta (60 m/pixel resolution). Only very small areas near the northern pole were still in darkness and are missing from the mosaic. The Dawn Framing Camera also acquired about 10,000 high-resolution clear-filter images (about 20 m/pixel) of Vesta during its Low Altitude Mapping Orbit (LAMO). Unfortunately, the northern part of Vesta was still in darkness during this phase; good illumination (incidence angle < 70°) was available for only 66.8% of the surface [1]. We used the LAMO images to calculate another global mosaic of Vesta, this time with 20 m/pixel resolution. Both global mosaics were used to produce atlases of Vesta: a HAMO atlas with 15 tiles at a scale of 1:500,000 and a LAMO atlas with 30 tiles at scales between 1:200,000 and 1:225,180. The nomenclature used in these atlases is based on names and places historically associated with the Roman goddess Vesta and is compliant with the rules of the IAU. 65 names for geological features have already been approved by the IAU; 39 additional names are currently under review. Selected examples of both atlases will be shown in this presentation. Reference: [1] Roatsch, Th., et al., High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images. Planetary and Space Science (2013), http://dx.doi.org/10.1016/j.pss.2013.06.024i

  6. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  7. CMOS image sensor noise reduction method for image signal processor in digital cameras and camera phones

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, SeongDeok; Choe, Wonhee; Kim, Chang-Yong

    2007-02-01

    Digital images captured with CMOS image sensors suffer from Gaussian noise and impulsive noise. To efficiently reduce this noise in the Image Signal Processor (ISP), we analyze the noise characteristics of the ISP imaging pipeline where the noise reduction algorithm is performed. Gaussian and impulsive noise reduction methods are proposed for proper ISP implementation in the Bayer domain. The proposed method uses the analyzed noise characteristics to calculate the noise reduction filter coefficients, so noise is adaptively reduced according to the scene environment. Since noise is amplified, and its characteristics vary, as the image sensor signal undergoes the successive image processing steps, it is better to remove noise at an early stage of the ISP imaging pipeline; noise reduction is therefore carried out in the Bayer domain. The method was tested on the ISP imaging pipeline with images captured from a Samsung 2M CMOS image sensor test module. The experimental results show that the proposed method removes noise while effectively preserving edges.

  8. Electronic imaging system incorporating a hand-held fundus camera for canine ophthalmology.

    PubMed

    Hoang, H D; Brant, L M; Jaksetic, M D; Lake, S G; Stuart, B P

    2001-11-01

    An electronic imaging system incorporating a hand-held fundus camera was used to collect images of the canine ocular fundus. The electronic imaging system comprised a hand-held fundus camera, an IBM personal computer (PC 350), Microsoft Windows NT 4.0, Adobe Photoshop, and a color printer (Tektronix Phaser 550), and was used to store, edit, and print the images captured by the fundus camera. Hand-held fundus cameras are essential for use in canine ophthalmology. The Nidek NM-100 hand-held fundus camera digitizes images, enabling their direct transfer into reports and their storage on writeable CDs.

  9. Imaging of Venus from Galileo: Early results and camera performance

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.

    1992-01-01

    Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.

  10. Multiwavelength adaptive optical fundus camera and continuous retinal imaging

    NASA Astrophysics Data System (ADS)

    Yang, Han-sheng; Li, Min; Dai, Yun; Zhang, Yu-dong

    2009-08-01

    We have constructed a new version of a retinal imaging system that takes chromatic aberration into account; the associated optical design presented in this article is based on the adaptive optics fundus camera modality. In our system, three typical wavelengths of 550 nm, 650 nm and 480 nm were selected. Longitudinal chromatic aberration (LCA) was minimized using the ZEMAX program. The whole setup was evaluated on human subjects, and retinal imaging was performed at continuous frame rates of up to 20 Hz. Raw videos at parafoveal locations were collected, and cone mosaics as well as retinal vasculature were clearly observed in a single clip. In addition, comparisons under different illumination conditions were made to confirm our design. Image contrast and the Strehl ratio were effectively increased after dynamic correction of high-order aberrations. This system is expected to bring new applications in functional imaging of the human retina.

  11. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2016-10-03

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from camera raw images. Recently, variance-stabilizing transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks using a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves denoising performance. Furthermore, the average performance of the proposed method is better than that of two state-of-the-art image denoising algorithms in both subjective and objective measures.
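    A common choice of variance-stabilizing transform for the Poisson-like component of sensor noise is the Anscombe transform; after stabilization a Gaussian denoiser can be applied and the result mapped back. The paper does not name its exact transform, so this is a generic sketch:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform f(x) = 2*sqrt(x + 3/8): for Poisson data it
    makes the noise variance approximately 1, independent of intensity."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at low counts; exact unbiased
    inverses exist in the literature)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Check: the stabilized standard deviation of Poisson(50) samples is ~1,
# so a fixed-sigma Gaussian denoiser can operate in this domain.
rng = np.random.default_rng(1)
samples = rng.poisson(50.0, size=200_000).astype(float)
std_stabilized = anscombe(samples).std()
```

    Real camera-raw noise is usually modeled as Poisson-Gaussian, for which the generalized Anscombe transform plays the same role.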

  12. Photon-efficient imaging with a single-photon camera

    PubMed Central

    Shin, Dongeek; Xu, Feihu; Venkatraman, Dheera; Lussana, Rudi; Villa, Federica; Zappa, Franco; Goyal, Vivek K.; Wong, Franco N. C.; Shapiro, Jeffrey H.

    2016-01-01

    Reconstructing a scene's 3D structure and reflectivity accurately with an active imaging system operating in low-light-level conditions has wide-ranging applications, spanning biological imaging to remote sensing. Here we propose and experimentally demonstrate a depth and reflectivity imaging system with a single-photon camera that generates high-quality images from ∼1 detected signal photon per pixel. Previous achievements of similar photon efficiency have been with conventional raster-scanning data collection using single-pixel photon counters capable of ∼10-ps time tagging. In contrast, our camera's detector array requires highly parallelized time-to-digital conversions with photon time-tagging accuracy limited to ∼ns. Thus, we develop an array-specific algorithm that converts coarsely time-binned photon detections to highly accurate scene depth and reflectivity by exploiting both the transverse smoothness and longitudinal sparsity of natural scenes. By overcoming the coarse time resolution of the array, our framework uniquely achieves high photon efficiency in a relatively short acquisition time. PMID:27338821

  13. Touchless sensor capturing five fingerprint images by one rotating camera

    NASA Astrophysics Data System (ADS)

    Noh, Donghyun; Choi, Heeseung; Kim, Jaihie

    2011-11-01

    Conventional touch-based sensors cannot capture fingerprint images of all five fingers simultaneously because their surfaces are flat and the thumb is not oriented parallel to the other fingers. In addition, touch-based sensors have inherent difficulties, including variations in captured images due to partial contact, nonlinear distortion, inconsistent image quality, and latent images. These degrade recognition performance and user acceptance. To overcome these difficulties, we propose a device with a contact-free structure composed of a charge-coupled device (CCD) camera, a rotating mirror driven by a stepping motor, and a green light-emitting diode (LED) illuminator. The device does not make contact with any finger and captures all five fingerprint images simultaneously. We describe and discuss the structure of the proposed device in terms of four aspects: the quality of captured images, verification performance, compatibility with existing touch-based sensors, and ease of use. The experimental results show that the proposed device can capture all five fingerprint images with a high throughput (in 2.5 s), as required at a country's immigration control office. On average, a captured touchless image covers 57% of a fully rolled image, whereas an image captured by a conventional touch-based sensor covers only 41%, with 63 and 40 true minutiae on average, respectively. Even though touchless images contain 13.18 deg of rolling and 9.18 deg of pitching distortion on average, a 0% equal error rate (EER) is obtained by using all five fingerprint images in the verification stage.

  14. A two-camera imaging system for pest detection and aerial application

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This presentation reports on the design and testing of an airborne two-camera imaging system for pest detection and aerial application assessment. The system consists of two digital cameras with 5616 x 3744 effective pixels. One camera captures normal color images with blue, green and red bands, whi...

  15. Single-quantum dot imaging with a photon counting camera

    PubMed Central

    Michalet, X.; Colyer, R. A.; Antelman, J.; Siegmund, O.H.W.; Tremsin, A.; Vallerga, J.V.; Weiss, S.

    2010-01-01

    The expanding spectrum of applications of single-molecule fluorescence imaging ranges from fundamental in vitro studies of biomolecular activity to tracking of receptors in live cells. The success of these assays has relied on progress in organic and inorganic fluorescent probe development as well as improvements in the sensitivity of light detectors. We describe a new type of detector developed with the specific goal of ultra-sensitive single-molecule imaging. It is a wide-field, photon-counting detector providing high temporal and high spatial resolution information for each incoming photon. It can be used as a standard low-light-level camera, but also gives access to much more information, such as fluorescence lifetime and spatio-temporal correlations. We illustrate the single-molecule imaging performance of our current prototype using quantum dots and discuss ongoing and future developments of this detector. PMID:19689323

  16. Frequency Identification of Vibration Signals Using Video Camera Image Data

    PubMed Central

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-01-01

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 0 to 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system. PMID:23202026
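    The measurement chain described above, summing gray levels in a small region per frame and locating the dominant spectral peak, can be sketched as follows; the synthetic 60 fps signal also shows how a mode above the Nyquist limit aliases to a false low-frequency peak, the kind of non-physical mode the paper predicts and excludes:

```python
import numpy as np

def dominant_frequency(brightness, fps):
    """Dominant frequency of a per-frame brightness signal (e.g. summed
    gray levels of a small region).  Only frequencies below the Nyquist
    limit fps/2 are trustworthy; higher modes alias to false peaks."""
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

fps, n = 60.0, 600                            # 10 s of 60 fps video
t = np.arange(n) / fps
f12 = dominant_frequency(100 + 5 * np.sin(2 * np.pi * 12.0 * t), fps)
# A 70 Hz vibration is above Nyquist and appears as a non-physical mode:
f70 = dominant_frequency(100 + 5 * np.sin(2 * np.pi * 70.0 * t), fps)
```

    Here the 12 Hz mode is recovered correctly, while the 70 Hz mode folds down to an apparent 10 Hz peak, which a model of the frame rate can flag and exclude.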

  17. Frequency identification of vibration signals using video camera image data.

    PubMed

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 0 to 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibrating system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency of 60 Hz for inducing false modes, whereas that of the webcam is 7.8 Hz. Several factors were shown to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, whose prominent vibration modes are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  18. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander, which transmits measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The estimation process is based on ground-based measurements (range and Doppler), which yield information on the heliocentric spacecraft state, and on images taken by an on-board camera, which yield information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper describes the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  19. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton cameras have emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option might not be adequate, as the photon is generally not absorbed; the second option, however, is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered a variable in the reconstruction inverse problem. Along with prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high-intensity beams can produce particle accumulation in the camera, which leads to an increase in random coincidences, i.e., events that combine measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. The complete chain of
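    For a double event, the Compton cone's half-opening angle follows directly from the Compton scattering formula, and events whose energies are kinematically inconsistent can be rejected, one simple defense against the spurious coincidences discussed above. The 4.44 MeV example is a typical prompt-gamma line energy, not a value taken from this study.

```python
import math

M_E_C2 = 0.511  # electron rest energy in MeV

def compton_cone_angle(e_deposited, e_total):
    """Half-opening angle (degrees) of the Compton cone:
    cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_incident),
    with E_scattered = E_incident - E_deposited at the scatterer.
    Returns None for kinematically inconsistent (spurious) events."""
    e_scattered = e_total - e_deposited
    if e_scattered <= 0.0:
        return None
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        return None              # beyond the Compton edge: reject event
    return math.degrees(math.acos(cos_theta))

# A 4.44 MeV prompt gamma depositing 1.0 MeV in the scatter detector:
angle = compton_cone_angle(1.0, 4.44)
```

    In a spectral reconstruction, e_total itself becomes an unknown of the inverse problem rather than a measured input.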

  20. Pre-clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2010-10-01

    Award 04-1-0594. TITLE: Pre-clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices. Reporting period: 2004 - 20 SEP 2010. The subject of the work is a compact and mobile gamma and positron imaging camera. This imaging device has several advantages over conventional systems: (1) greater

  1. Image reconstruction methods for the PBX-M pinhole camera.

    PubMed

    Holland, A; Powell, E T; Fonck, R J

    1991-09-10

    We describe two methods that have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera [Proc. Soc. Photo-Opt. Instrum. Eng. 691, 111 (1986)]. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least-squares fit to the data. This has the advantage of being fast and small and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape that can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster for an overdetermined system than the usual Lagrange multiplier approach to finding the maximum entropy solution [J. Opt. Soc. Am. 62, 511 (1972); Rev. Sci. Instrum. 57, 1557 (1986)].

  2. Ceres Survey Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2016-02-01

    The Dawn Framing Camera (FC) acquired almost 900 clear-filter images of Ceres with a resolution of about 400 m/pixel during the seven cycles of the Survey orbit in June 2015. We ortho-rectified 42 images from the third cycle and produced a global, high-resolution, controlled mosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 3 tiles mapped at a scale of 1:2,000,000. The nomenclature used in this atlas was proposed by the Dawn team and approved by the International Astronomical Union (IAU). The whole atlas is available to the public through the Dawn GIS web page.

  3. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    We present a real-time viewpoint image generation method. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies for achieving this sense of reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy for real-time, low-latency use. In this paper we propose a real-time viewpoint image generation method that simply blends multiple camera images taken at equal horizontal intervals, with convergence set using approximate information about the object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the depth-fused 3D (DFD) visual effect. If the viewpoint is not on the line of the camera array, a viewpoint image can be generated by region splitting. We built a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. Users see their individual viewpoint image during left-and-right and back-and-forth movement relative to the screen. Our algorithm is very simple and promising as a means of achieving video communication with high reality.
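The blending step at the heart of the method above can be sketched as a simple linear mix of the two nearest camera images; thanks to the DFD effect the result is perceived as an intermediate viewpoint. The function name and stand-in images are illustrative:

```python
import numpy as np

def blend_viewpoint(img_left, img_right, t):
    """t in [0, 1]: 0 = left camera position, 1 = right camera position."""
    return (1.0 - t) * img_left + t * img_right

left = np.full((2, 2, 3), 100.0)   # stand-in camera images
right = np.full((2, 2, 3), 200.0)
mid = blend_viewpoint(left, right, 0.5)
print(mid[0, 0])  # → [150. 150. 150.]
```

Because the operation is a per-pixel weighted average, it is cheap enough for full-frame real-time use, consistent with the stereo HD results reported.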

  4. Embedded image enhancement for high-throughput cameras

    NASA Astrophysics Data System (ADS)

    Geerts, Stan J. C.; Cornelissen, Dion; de With, Peter H. N.

    2014-03-01

    This paper presents image enhancement for a novel Ultra-High-Definition (UHD) video camera offering 4K images and higher. Conventional image enhancement techniques need to be reconsidered for the high-resolution images and the low-light sensitivity of the new sensor. We study two image enhancement functions and evaluate and optimize the algorithms for embedded implementation in programmable logic (FPGA). The enhancement study involves high-quality Auto White Balancing (AWB) and Local Contrast Enhancement (LCE). We have compared multiple algorithms from literature, both with objective and subjective metrics. In order to objectively compare Local Contrast (LC), an existing LC metric is modified for LC measurement in UHD images. For AWB, we have found that color histogram stretching offers a subjective high image quality and it is among the algorithms with the lowest complexity, while giving only a small balancing error. We impose a color-to-color gain constraint, which improves robustness of low-light images. For local contrast enhancement, a combination of contrast preserving gamma and single-scale Retinex is selected. A modified bilateral filter is designed to prevent halo artifacts, while significantly reducing the complexity and simultaneously preserving quality. We show that by cascading contrast preserving gamma and single-scale Retinex, the visibility of details is improved towards the level appropriate for high-quality surveillance applications. The user is offered control over the amount of enhancement. Also, we discuss the mapping of those functions on a heterogeneous platform to come to an effective implementation while preserving quality and robustness.
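As a rough illustration of the white-balancing idea above, the sketch below computes per-channel gains and applies a color-to-color gain constraint by clamping how far the R and B gains may deviate. A simplified gray-world-style gain computation stands in for the paper's color histogram stretching; all names and the clamp value are illustrative:

```python
import numpy as np

def awb_gains(img, max_ratio=2.0):
    """Per-channel gains normalized to green, clamped for robustness."""
    means = img.reshape(-1, 3).mean(axis=0)      # per-channel means (R, G, B)
    gains = means[1] / means                     # normalize to the green channel
    return np.clip(gains, 1.0 / max_ratio, max_ratio)

img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 50.0, 100.0, 100.0  # red-deficient cast
g = awb_gains(img)
print(g)  # → [2. 1. 1.]
```

The clamp is what keeps a noisy low-light frame from driving one channel's gain to an extreme value, the robustness benefit the paper attributes to its gain constraint.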

  5. Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.

    2017-01-01

    Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that a cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm throughout the whole field of view.

  6. Color calibration of a CMOS digital camera for mobile imaging

    NASA Astrophysics Data System (ADS)

    Eliasson, Henrik

    2010-01-01

    As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g., elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration, the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the same time, the demands for efficiency in the production environment require the calibration to be as simple as possible. Thus it is important to find the correct balance between image quality and production efficiency requirements. The purpose of this work is to investigate camera color variations using a simple model where the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-color Macbeth ColorChecker was used. This data was combined with the spectral irradiance of three different light sources: CIE A, D65 and F11. The sensor variations were determined from a very large population from which 6 corner samples were picked out for further analysis. Furthermore, a set of 100 IR filters were picked out and measured. The resulting images generated by the model were then analyzed in the CIELAB space and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
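The ΔE94 metric used in the analysis above is a standard CIELAB color-difference formula (shown here with graphic-arts weights kL = kC = kH = 1); this is the textbook definition, not the authors' code:

```python
import math

def delta_e94(lab1, lab2):
    """CIE94 color difference between two (L*, a*, b*) colors."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)                      # chroma of the reference color
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    dH2 = max(da * da + db * db - dC * dC, 0.0)  # guard tiny negative round-off
    SC = 1.0 + 0.045 * C1                        # chroma weighting
    SH = 1.0 + 0.015 * C1                        # hue weighting
    return math.sqrt(dL * dL + (dC / SC) ** 2 + dH2 / SH ** 2)

print(delta_e94((50, 10, 10), (50, 10, 10)))  # → 0.0
print(delta_e94((50, 0, 0), (55, 0, 0)))      # → 5.0 (pure lightness shift)
```

For a neutral reference (C1 = 0) the metric reduces to the Euclidean ΔE, as the second example shows.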

  7. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wideangle field of view, low aberrations, high acuity to motion and an infinite depth of field. There are great interests in building systems with similar geometries and properties due to numerous potential applications. However, the established semiconductor sensor technologies and optics are essentially planar, which experience great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned and integrated, and elastically transformed to hemispherical shapes. This imaging device demonstrates nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. We have illustrated key features of operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  8. Solid state television camera has no imaging tube

    NASA Technical Reports Server (NTRS)

    Huggins, C. T.

    1972-01-01

    Camera with characteristics of vidicon camera and greater resolution than home TV receiver uses mosaic of phototransistors. Because of low power and small size, camera has many applications. Mosaics can be used as cathode ray tubes and analog-to-digital converters.

  9. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.
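In any Compton-scattering geometry like the one above, the back-projected cone's half-angle follows from the deposited energies via the standard Compton formula, cos θ = 1 − m·c²·(1/E′ − 1/E₀). The snippet below is this generic textbook relation, not the dissertation's reconstruction code:

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_deposited, e_initial):
    """Scatter angle (degrees) from energy deposited in the scatter plane."""
    e_scattered = e_initial - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_initial)
    return math.degrees(math.acos(cos_theta))

# A 662 keV (Cs-137) photon depositing 200 keV in the scatter plane:
angle = compton_cone_angle(200.0, 662.0)
print(round(angle, 1))
```

Each detected event constrains the source to such a cone; intersecting many cones (the reconstruction step simulated in the dissertation) localizes the source.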

  10. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion are recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  11. A low-cost dual-camera imaging system for aerial applicators

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  12. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  13. On 3D radar data visualization and merging with camera images

    NASA Astrophysics Data System (ADS)

    Kjellgren, J.

    2008-10-01

    The possibilities of supporting the interpretation of spatial 3D radar data visually, both with and without camera images, are studied. Radar measurements and camera pictures of a person are analyzed. First, the received signal amplitudes, distributed in three dimensions (spherical range and two angles), are fed to a selection procedure based on amplitude and the scene volume of interest. A number of resolution cells then form images based on a volume representation that depends on amplitude and location. The total image is formed by projecting the images of all the cells onto an imaging plane. Different images of a radar data set are produced for different projection planes. The images were studied to find aspect angles that efficiently convey the target information of most interest; such a search may be performed by rotating the target data around a suitable axis. In addition, a visualization method for presenting radar data merged with a camera picture has been developed. An aim in this part of the work has been to retain the high information content of the camera image in the merged image. From the 3D radar measurements, the radar data may be projected onto the imaging plane of a camera with an arbitrary viewing center. This possibility is presented in examples with one camera looking at the target scene from the radar location and another camera looking from an aspect angle differing by 45° from that of the radar.
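The projection of a 3D radar point onto a camera's imaging plane, as described above, is a standard pinhole-camera operation. The intrinsic matrix below is hypothetical (the paper's calibration is not reproduced); the camera sits at the origin looking down +z:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # fx, skew, principal point cx
              [0.0, 800.0, 240.0],    # fy, cy
              [0.0, 0.0, 1.0]])       # hypothetical intrinsic matrix

def project_point(p_xyz):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ p_xyz
    return uvw[:2] / uvw[2]           # perspective divide → pixel (u, v)

pixel = project_point(np.array([0.0, 0.0, 10.0]))  # point on the optical axis
print(pixel)  # → [320. 240.]
```

Projecting every selected radar resolution cell this way, and compositing the results over the camera frame, yields the merged visualization the paper describes.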

  14. Influence analysis of the scroll on the image quality of the satellite camera

    NASA Astrophysics Data System (ADS)

    Fan, Chao; Yi, Hong-wei; Liang, Yi-tao

    2009-07-01

    The object distance of a high-resolution satellite camera changes when the camera performs scroll (roll) imaging, which alters not only the image scale but also the satellite's velocity-to-height ratio (V/H). A change in V/H desynchronizes the image motion from the transfer of charge packets on the focal plane, which seriously degrades the image quality of the camera. We therefore studied how the relative velocity and height vary during scroll imaging and derived an expression for V/H. On this basis, the influence of V/H on image quality was studied as a function of two variables: latitude and scroll angle. To quantify this effect for a given circular polar orbit, the image-quality degradation caused by scroll imaging was calculated for different integration numbers of the camera, and the adjustment interval of the row integration time and the range of the scroll angle were computed. The results showed that for integration numbers of 32 and 64, the permitted scroll angles are 29.5° and 16° respectively for MTF_image motion > 0.95, which provides a helpful engineering reference for how image quality changes during scroll imaging of a satellite camera.
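The trade-off above between integration number and tolerable V/H mismatch can be illustrated with a common sinc approximation: a relative mismatch eps accumulated over N TDI stages produces roughly N·eps pixels of smear, whose MTF at spatial frequency f is sin(πx)/x with x = smear·f. This is a generic model, not the authors' derivation; the 0.95 threshold mirrors the abstract:

```python
import math

def motion_mtf(n_stages, eps, f=0.5):
    """Smear MTF at frequency f (cycles/pixel) for N*eps pixels of smear."""
    smear = n_stages * eps
    x = math.pi * smear * f
    return 1.0 if x == 0 else math.sin(x) / x

# Doubling the integration number shrinks the tolerable V/H mismatch:
print(motion_mtf(32, 0.01) > 0.95, motion_mtf(64, 0.01) > 0.95)  # → True False
```

This matches the qualitative result above: the larger integration number (64) permits only about half the scroll angle of the smaller one (32).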

  15. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its lighting conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making the task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close-range photogrammetry, and adds another using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate further deliverables without extra time in the field, for instance immersive virtual tours. To verify the usefulness of the method, we applied it to the apse, since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  16. Faint Object Camera imaging and spectroscopy of NGC 4151

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1995-01-01

    We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including (O III) lambda(5007) shows a bright nucleus centered on a complex biconical structure having apparent opening angle approximately 65 deg and axis at a position angle along 65 deg-245 deg; images in bands including Lyman-alpha and C IV lambda(1550) and in the optical continuum near 5500 A, show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that through density bounding the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.

  17. High Performance Imaging Streak Camera for the National Ignition Facility

    SciTech Connect

    Opachich, Y. P.; Kalantar, D.; MacPhee, A.; Holder, J.; Kimbrough, J.; Bell, P. M.; Bradley, D.; Hatch, B.; Brown, C.; Landen, O.; Perfect, B. H.; Guidry, B.; Mead, A.; Charest, M.; Palmer, N.; Homoelle, D.; Browning, D.; Silbernagel, C.; Brienza-Larsen, G.; Griffin, M.; Lee, J. J.; Haugh, M. J.

    2012-01-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high EMI. A train of temporal UV timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented.

  18. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Acquisition data and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on manufacturer computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality-control parameters, this program includes (1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); and (2) for PET systems: three quality controls recently defined by the French Medical Physics Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (from the Point Spread Function, PSF, acquisition) allows one to compute the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox that is a free ImageJ plugin and can soon be downloaded from the Internet. Besides, this program offers the possibility of saving the uniformity quality-control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, run on
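The PSF-to-MTF computation mentioned above is, in essence, a Fourier transform. A minimal 1D sketch (the toolkit itself is a Java/ImageJ plugin; this is only the underlying relation): the MTF is the magnitude of the transform of the PSF, normalized at zero frequency.

```python
import numpy as np

def mtf_from_psf(psf):
    """MTF as the normalized magnitude of the PSF's Fourier transform."""
    otf = np.fft.fft(psf)
    return np.abs(otf) / np.abs(otf[0])

psf = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # toy symmetric PSF
mtf = mtf_from_psf(psf)
print(mtf[0])  # → 1.0
```

For a non-negative PSF, the normalized MTF is 1 at DC and at most 1 everywhere else; a broader PSF gives a faster roll-off, i.e. poorer spatial resolution.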

  19. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  20. Stereo Reconstruction of Atmospheric Cloud Surfaces from Fish-Eye Camera Images

    NASA Astrophysics Data System (ADS)

    Katai-Urban, G.; Otte, V.; Kees, N.; Megyesi, Z.; Bixel, P. S.

    2016-06-01

    In this article a method for reconstructing atmospheric cloud surfaces using a stereo camera system is presented. The proposed camera system utilizes fish-eye lenses in a flexible wide baseline camera setup. The entire workflow from the camera calibration to the creation of the 3D point set is discussed, but the focus is mainly on cloud segmentation and on the image processing steps of stereo reconstruction. Speed requirements, geometric limitations, and possible extensions of the presented method are also covered. After evaluating the proposed method on artificial cloud images, this paper concludes with results and discussion of possible applications for such systems.
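Fish-eye lenses like those above are often close to the equidistant projection model, where image radius is proportional to incidence angle: r = f·θ. The hypothetical helper below inverts that relation to recover a viewing angle from a radial pixel distance, the first step toward the ray directions used in stereo triangulation (the paper's actual calibration model may differ):

```python
import math

def pixel_radius_to_angle(r_pixels, f_pixels):
    """Equidistant model: incidence angle (radians) from image radius."""
    return r_pixels / f_pixels

def angle_to_pixel_radius(theta, f_pixels):
    """Inverse mapping: image radius from incidence angle."""
    return f_pixels * theta

f = 300.0                                # hypothetical focal length, in pixels
theta = pixel_radius_to_angle(450.0, f)  # a point 450 px from the image center
print(math.degrees(theta))               # ~86°: far off-axis, yet still imaged
```

The example shows why fish-eye optics suit cloud observation: rays well beyond what a rectilinear lens could capture still map to finite image radii.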

  1. Fly's Eye camera system: optical imaging using a hexapod platform

    NASA Astrophysics Data System (ADS)

    Jaskó, Attila; Pál, András.; Vida, Krisztián.; Mészáros, László; Csépány, Gergely; Mező, György

    2014-07-01

    The Fly's Eye Project is a high-resolution, high-coverage time-domain survey in multiple optical passbands: our goal is to cover the entire visible sky above 30° horizontal altitude with a cadence of ~3 min. Imaging will be performed by 19 wide-field cameras mounted on a hexapod platform resembling a fly's eye. Using a hexapod developed and built by our team allows us to create a highly fault-tolerant instrument that uses the sky as a reference to define its own tracking motion. The virtual axis of the platform is automatically aligned with the Earth's rotational axis; therefore the same mechanics can be used independently of the geographical location of the device. Its enclosure makes it capable of autonomous observing and of withstanding harsh environmental conditions. We briefly introduce the electrical, mechanical and optical design concepts of the instrument and summarize our early results, focusing on sidereal tracking. Because the hexapod design makes the construction independent of the actual location, it is considerably easier to build, install and operate a network of such devices around the world.

  2. Cloud Detection with the Earth Polychromatic Imaging Camera (EPIC)

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry; Marshak, Alexander; Lyapustin, Alexei; Torres, Omar; Wang, Yugie

    2011-01-01

    The Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) would provide a unique opportunity for Earth and atmospheric research due not only to its Lagrange point sun-synchronous orbit, but also to the potential for synergistic use of spectral channels in both the UV and visible spectrum. As a prerequisite for most applications, the ability to detect the presence of clouds in a given field of view, known as cloud masking, is of utmost importance. It serves to determine both the potential for cloud contamination in clear-sky applications (e.g., land surface products and aerosol retrievals) and clear-sky contamination in cloud applications (e.g., cloud height and property retrievals). To this end, a preliminary cloud mask algorithm has been developed for EPIC that applies thresholds to reflected UV and visible radiances, as well as to reflected radiance ratios. This algorithm has been tested with simulated EPIC radiances over both land and ocean scenes, with satisfactory results. These test results, as well as algorithm sensitivity to potential instrument uncertainties, will be presented.
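The threshold tests described above can be illustrated with a toy mask: a pixel is flagged cloudy when its visible reflectance exceeds a threshold and a channel ratio is close to one (clouds are bright and spectrally flat). The thresholds and channel choices below are hypothetical, not EPIC's actual values:

```python
import numpy as np

def cloud_mask(r_vis, r_uv, vis_thresh=0.3, ratio_lo=0.8, ratio_hi=1.2):
    """Flag pixels that are bright AND spectrally flat as cloudy."""
    ratio = r_uv / r_vis
    return (r_vis > vis_thresh) & (ratio > ratio_lo) & (ratio < ratio_hi)

r_vis = np.array([0.05, 0.45, 0.50])   # dark ocean, cloud, bright desert
r_uv = np.array([0.10, 0.44, 0.20])
mask = cloud_mask(r_vis, r_uv)
print(mask)  # → [False  True False]
```

Note how the ratio test rejects the bright desert pixel that a brightness threshold alone would misclassify; that is the purpose of combining radiance and radiance-ratio tests.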

  3. 131I activity quantification of gamma camera planar images

    NASA Astrophysics Data System (ADS)

    Barquero, Raquel; Garcia, Hugo P.; Incio, Monica G.; Minguez, Pablo; Cardenas, Alexander; Martínez, Daniel; Lassmann, Michael

    2017-02-01

    A procedure is proposed to estimate the activity in target tissues in patients during therapeutic administration of 131I radiopharmaceutical treatment for thyroid conditions (hyperthyroidism and differentiated thyroid cancer) using a gamma camera (GC) with a high-energy (HE) collimator. Planar images are acquired for lesions of different sizes r and at different distances d in two HE GC systems. Defining a region of interest (ROI) on the image of size r, the total counts n_g are measured. The sensitivity S (cps MBq^-1) in each acquisition is estimated as the product of the geometric efficiency G and the intrinsic efficiency η_0. The mean fluence G of 364 keV photons arriving at the ROI per disintegration is calculated with the MCNPX code, simulating the entire GC and the HE collimator. The intrinsic efficiency η_0 is estimated from a calibration measurement of a plane reference source of 131I in air. Values of G and S for two GC systems, Philips Skylight and Siemens e-cam, are calculated. The total range of possible sensitivity values in thyroidal imaging in the e-cam and Skylight GCs runs from 7 cps MBq^-1 to 35 cps MBq^-1, and from 6 cps MBq^-1 to 29 cps MBq^-1, respectively. These sensitivity values have been verified with the SIMIND code, with good agreement between them. The results have been validated with experimental measurements in air and in a medium with scatter and attenuation. The counts in the ROI can be produced by direct, scattered and penetrating photons. The fluence value for direct photons is constant for any r and d, but scattered and penetrating photons show different values for specific r and d, resulting in the large sensitivity differences found. The sensitivity in thyroidal GC planar imaging is strongly dependent on uptake size and on distance from the GC. An individual value for the acquisition sensitivity of each lesion can significantly reduce the uncertainty in the measurement of thyroid uptake activity for each patient.
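The quantification step implied above reduces to one line: with sensitivity S = G·η_0 (cps/MBq) and an acquisition of duration t seconds yielding n_g counts in the ROI, the activity estimate is A = n_g / (S·t). The numbers below are illustrative, chosen within the 6-35 cps MBq^-1 range reported:

```python
def activity_mbq(n_counts, sensitivity_cps_per_mbq, t_seconds):
    """Activity estimate A = n_g / (S * t), in MBq."""
    return n_counts / (sensitivity_cps_per_mbq * t_seconds)

# 126000 counts in a 600 s planar acquisition at S = 21 cps/MBq:
print(activity_mbq(126000, 21.0, 600.0))  # → 10.0
```

The paper's point is that S itself varies several-fold with lesion size and distance, so using a per-lesion sensitivity rather than a single system constant directly reduces the uncertainty of this estimate.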

  5. Development of filter exchangeable 3CCD camera for multispectral imaging acquisition

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Soo Hyun; Kim, Moon S.; Noh, Sang Ha

    2012-05-01

    Many methods exist to acquire multispectral images, but a dynamically band-selective, area-scan multispectral camera had not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electronic circuit for parallel image-signal processing; firmware and application software were also developed. The notable improvements over a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation was required to visualize the ray path inside the prism when redesigning the image splitter, and the dimensions of the splitter were determined by simulation assuming BK7 glass and a non-dichroic coating, chosen so that full-wavelength rays reach all film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without displacing the image sensors on the film plane. The developed 3CCD camera was evaluated in an application detecting scab and bruise on Fuji apples. The results indicate that a filter-exchangeable 3CCD camera offers meaningful functionality for the many multispectral applications that require exchanging bandpass filters.

  6. 3D reconstruction from images taken with a coaxial camera rig

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    A coaxial camera rig consists of a pair of cameras which acquire images along the same optical axis but at different distances from the scene, using different focal length optics. The coaxial geometry permits the acquisition of image pairs through a substantially smaller opening than would be required by a traditional binocular stereo camera rig. This is advantageous in applications where physical space is limited, such as in an endoscope: 3D images acquired through an endoscope are desirable, but the lack of physical space for a traditional stereo baseline is problematic. While image acquisition along a common optical axis has been known for many years, 3D reconstruction from such image pairs has not been possible in the center region due to the very small disparity between corresponding points; this characteristic of coaxial image pairs has been called the unrecoverable point problem. We introduce a novel method to overcome the unrecoverable point problem in coaxial camera rigs, using a variational optimization algorithm to map pairs of optical flow fields from the different focal length cameras in the rig, and using the ratio of the optical flow fields for 3D reconstruction. This results in accurate image-pair alignment and produces accurate dense depth maps. We test our method on synthetic optical flow fields and on real images, and demonstrate its accuracy by evaluating against ground truth. Accuracy is comparable to a traditional binocular stereo camera rig, but without the traditional stereo baseline and with substantially smaller occlusions.
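    The flow-ratio idea can be illustrated with a toy pinhole model. This is an assumption-laden sketch (a single point, known focal lengths f1 and f2, and a known axial offset between the two cameras), not the authors' variational algorithm:

```python
def depth_from_radius_ratio(rho1, rho2, f1, f2, baseline):
    """
    For a point at lateral offset R and depth Z from camera 1, with camera 2
    sitting 'baseline' farther from the scene on the same optical axis, a
    pinhole model gives
        rho1 = f1 * R / Z,   rho2 = f2 * R / (Z + baseline),
    so the ratio rho1/rho2 determines Z without knowing R.
    """
    r = rho1 / rho2
    return baseline / (r * f2 / f1 - 1.0)

# Round-trip check with synthetic values: true depth Z = 100, baseline = 10
f1, f2, Z, d, R = 50.0, 25.0, 100.0, 10.0, 3.0
rho1 = f1 * R / Z
rho2 = f2 * R / (Z + d)
Z_est = depth_from_radius_ratio(rho1, rho2, f1, f2, d)  # recovers Z = 100
```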

  7. MEM-FLIM: all-solid-state camera for fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Qiaole; Young, Ian Ted; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Jalink, Kees; de Jong, Sander; van Geest, Bert; Stoop, Karel

    2012-03-01

    We have built an all-solid-state camera which is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurement. This novel camera eliminates the need for an image intensifier through the use of an application-specific CCD design, and it is being used in a frequency-domain FLIM system. The first stage of evaluation of the camera has been carried out: characteristics such as noise distribution, dark-current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and contrast modulation transfer function have been studied experimentally. We are able to perform lifetime measurements with MEM-FLIM cameras on various objects, e.g. fluorescent plastic test slides, fluorescein solution, fixed GFP cells, and GFP-actin stained live cells.
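    The standard frequency-domain FLIM estimators that such a camera feeds are the phase and modulation lifetimes. A short sketch under textbook assumptions (single-exponential decay; the modulation frequency and fluorescein lifetime below are illustrative), not code from the MEM-FLIM system:

```python
import math

def phase_lifetime(delta_phi_rad, mod_freq_hz):
    """Phase lifetime: tau = tan(phi) / omega, omega = 2*pi*f."""
    return math.tan(delta_phi_rad) / (2 * math.pi * mod_freq_hz)

def modulation_lifetime(m, mod_freq_hz):
    """Modulation lifetime: tau = sqrt(1/m^2 - 1) / omega."""
    return math.sqrt(1.0 / m**2 - 1.0) / (2 * math.pi * mod_freq_hz)

# Round trip: a ~4 ns fluorophore (e.g. fluorescein) modulated at 40 MHz
tau = 4e-9
omega = 2 * math.pi * 40e6
phi = math.atan(omega * tau)          # expected phase shift
tau_back = phase_lifetime(phi, 40e6)  # recovers 4 ns
```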

  8. Applications of the BAE SYSTEMS MicroIR uncooled infrared thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Wickman, Heather A.; Henebury, John J., Jr.; Long, Dennis R.

    2003-09-01

    MicroIR uncooled infrared imaging modules (based on VOx microbolometers), developed and manufactured at BAE SYSTEMS, are integrated into ruggedized, weatherproof camera systems and are currently supporting numerous security and surveillance applications. The introduction of uncooled thermal imaging has permitted the expansion of traditional surveillance and security perimeters. MicroIR cameras go beyond the imagery limits of visible and low-light short wavelength infrared sensors, providing continual, uninterrupted, high quality imagery both day and night. Coupled with an appropriate lens assembly, MicroIR cameras offer exemplary imagery performance that lends itself to a more comprehensive level of surveillance. With the current increased emphasis on security and surveillance, MicroIR Cameras are evolving as an unquestionably beneficial instrument in the security and surveillance arenas. This paper will elaborate on the attributes of the cameras, and discuss the development and the deployment, both present and future, of BAE SYSTEMS MicroIR Cameras.

  9. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array combined with a digital camera. Computer vision and acoustic time-of-arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. The technique was applied to a spherical microphone array; a mean difference of 3 mm was obtained between the coordinates found with this calibration technique and those measured using a precision mechanical method.
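    A hedged sketch of one ingredient such a calibration needs: locating a microphone from time-of-arrival data at known source positions via linearized trilateration. The paper's actual method also involves computer vision, and all positions and values below are synthetic:

```python
import numpy as np

def locate_mic(sources, toas, c=343.0):
    """
    Linearized trilateration: distances d_i = c * toa_i from known source
    positions s_i satisfy |x - s_i|^2 = d_i^2; subtracting the first
    equation yields a linear system 2*(s_i - s_0) . x = |s_i|^2 - |s_0|^2
    - d_i^2 + d_0^2, solved in least squares.
    """
    s = np.asarray(sources, float)
    d = c * np.asarray(toas, float)
    A = 2.0 * (s[1:] - s[0])
    b = np.sum(s[1:]**2, axis=1) - np.sum(s[0]**2) - d[1:]**2 + d[0]**2
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Synthetic check: microphone at (0.3, -0.2, 0.5) m, five known sources
mic = np.array([0.3, -0.2, 0.5])
srcs = np.asarray([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
toas = [np.linalg.norm(mic - s) / 343.0 for s in srcs]
est = locate_mic(srcs, toas)  # recovers the microphone position
```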

  10. A hybrid version of the Whipple observatory's air Cherenkov imaging camera for use in moonlight

    NASA Astrophysics Data System (ADS)

    Chantell, M. C.; Akerlof, C. W.; Badran, H. M.; Buckley, J.; Carter-Lewis, D. A.; Cawley, M. F.; Connaughton, V.; Fegan, D. J.; Fleury, P.; Gaidos, J.; Hillas, A. M.; Lamb, R. C.; Pare, E.; Rose, H. J.; Rovero, A. C.; Sarazin, X.; Sembroski, G.; Schubnell, M. S.; Urban, M.; Weekes, T. C.; Wilson, C.

    1997-02-01

    A hybrid version of the Whipple Observatory's atmospheric Cherenkov imaging camera that permits observation during periods of bright moonlight is described. The hybrid camera combines a blue-light blocking filter with the standard Whipple imaging camera to reduce sensitivity to wavelengths greater than 360 nm. Data taken with this camera are found to be free from the effects of the moonlit night sky after the application of simple off-line noise filtering. This camera has been used to successfully detect TeV gamma rays, in bright moonlight, from both the Crab Nebula and the active galactic nucleus Markarian 421, at the 4.9σ and 3.9σ levels of statistical significance, respectively. The energy threshold of the camera is estimated to be 1.1 (+0.6/-0.3) TeV from Monte Carlo simulations.
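    Detection levels like those quoted are conventionally computed with the Li & Ma (1983, Eq. 17) likelihood-ratio significance for on/off-source counts. A minimal sketch; the counts below are illustrative, since the abstract does not give them:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) significance for n_on on-source counts,
    n_off off-source counts, and on/off exposure ratio alpha."""
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))

# No excess gives zero significance; a clear excess gives a positive sigma
s0 = li_ma_significance(100, 100, 1.0)
s1 = li_ma_significance(160, 100, 1.0)
```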

  11. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    This paper first introduces the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. It then describes several pivotal designs of the imaging system: the focal-plane module, the video-signal processing, the imaging-system controller, and the synchronous photography of the forward, nadir, and backward cameras and of the line-matrix CCD in the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography among the forward, nadir, and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal-plane module; the SNR measured in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30°, Earth-surface reflectivity of 0.3); and the temperature of the focal-plane module is kept below 30 °C over a 15-minute working period. All of these results satisfy the requirements for synchronous photography, focal-plane temperature control, and SNR, guaranteeing the precision needed for satellite photogrammetry.

  12. Patient-initiated camera phone images in general practice: a qualitative study of illustrated narratives

    PubMed Central

    Tan, Lawrence; Hu, Wendy; Brooker, Ron

    2014-01-01

    Background Camera phones have become ubiquitous in the digital age, and patients are beginning to bring images recorded on their mobile phones to share with their GP during medical consultations. Aim To explore GP perceptions of the effect of patient-initiated camera phone images on the consultation. Design and setting An interview study of GPs based in rural and urban locations in Australia. Methods Semi-structured telephone interviews with nine GPs about their experiences with patient-initiated camera phone images. Results GPs described how patient-initiated camera phone photos and videos contributed to the diagnostic process, management and continuity of care. These images gave GPs in the study additional insight into the patient's world. Potential harm resulting from inappropriate use of camera phones by patients was also identified. Conclusion Patient-initiated camera phone images can empower patients by illustrating their narratives, thus contributing to improved communication in general practice. Potential harm could result from inappropriate use of these images. GPs shown images on patients' camera phones should make the most of this opportunity for improved understanding of the patient's world. There are, however, potential medicolegal implications such as informed consent, protection of patient and doctor privacy, and the risk of misdiagnosis. PMID:24771843

  13. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute controlled head-motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head-motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and of reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering the requirements for systems measuring pulse rate over sufficiently long time windows.
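    A minimal sketch of the kind of windowed pulse-rate estimate discussed above, taking the dominant FFT peak of a spatially averaged skin signal within a plausible pulse band. This is a generic illustration on a synthetic signal, not the authors' nine-camera blind-source-separation pipeline:

```python
import numpy as np

def pulse_rate_bpm(green_means, fps):
    """Dominant frequency in the plausible pulse band (45-240 bpm) of the
    FFT of a spatially averaged green-channel time series."""
    x = np.asarray(green_means, float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(x))**2
    band = (freqs >= 0.75) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 72 bpm (1.2 Hz) pulse sampled at 30 fps for 10 s, plus noise
fps = 30.0
t = np.arange(300) / fps
sig = 0.5 * np.sin(2 * np.pi * 1.2 * t)
sig += 0.05 * np.random.default_rng(0).standard_normal(t.size)
bpm = pulse_rate_bpm(sig, fps)  # close to 72 bpm
```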

  14. First results from the Faint Object Camera - Images of the gravitational lens system G2237 + 0305

    NASA Technical Reports Server (NTRS)

    Crane, P.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.

    1991-01-01

    Images of the gravitational lens system G2237 + 0305 have been obtained with the Faint Object Camera on board the Hubble Space Telescope. A preliminary analysis of these images is reported here and includes measurements of the relative positions and magnitudes of the lensed images of the QSO, and of the lensing galaxy. No evidence is found for a fifth lensed image.

  15. Acoustical Imaging Cameras for the Inspection and Condition Assessment of Hydraulic Structures

    DTIC Science & Technology

    2010-08-01

    feasibility of using acoustical imaging for underwater inspection of structures. INTRODUCTION: Visibility in clear water for the human eye and optical ...but higher resolution than sidescan or multibeam acoustical images • Nonhomogeneity of returned signal caused by variation in angles of signals...acoustical imaging. To obtain higher resolutions than other acoustical imaging technologies such as multibeam and sidescan systems, acoustical camera

  16. 2D Hyperspectral Frame Imager Camera Data in Photogrammetric Mosaicking

    NASA Astrophysics Data System (ADS)

    Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J.; Soukkamäki, J.

    2013-08-01

    A new 2D hyperspectral frame camera system has been developed by VTT (Technical Research Center of Finland) and Rikola Ltd. It is a very light, frame-based camera with an RGB-NIR sensor, suitable for lightweight, cost-effective UAV planes. MosaicMill Ltd. has converted the camera data into a proper format for photogrammetric processing, and the camera's geometric accuracy and stability were evaluated to guarantee the accuracies required for end-user applications. MosaicMill Ltd. has also applied its EnsoMOSAIC technology to process the hyperspectral data into orthomosaics. This article describes the main steps and results of applying the hyperspectral sensor to orthomosaicking. The most promising results, as well as the challenges, in agriculture and forestry are also described.

  17. Pantir - a Dual Camera Setup for Precise Georeferencing and Mosaicing of Thermal Aerial Images

    NASA Astrophysics Data System (ADS)

    Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.

    2015-03-01

    Airborne thermal infrared (TIR) cameras are used for research and monitoring in fields like hydrology and agriculture, but they suffer from low spatial resolution and low-quality lenses. Common ground control points (GCPs), lacking thermal activity and being relatively small in size, cannot be used in TIR images; precise georeferencing and mosaicing, however, are necessary for data analysis. Adding a high-resolution visible-light (VIS) camera with a high-quality lens very close to the TIR camera, in the same stabilized rig, allows us to do accurate geoprocessing with standard GCPs after fusing both images (VIS+TIR) using standard image-registration methods.

  18. Nonlinear color-image decomposition for image processing of a digital color camera

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Aizawa, Haruya; Yamada, Daisuke; Komatsu, Takashi

    2009-01-01

    This paper extends the BV (Bounded Variation) - G and/or the BV-L1 variational nonlinear image-decomposition approaches, which are considered to be useful for image processing of a digital color camera, to genuine color-image decomposition approaches. For utilizing inter-channel color cross-correlations, this paper first introduces TV (Total Variation) norms of color differences and TV norms of color sums into the BV-G and/or BV-L1 energy functionals, and then derives denoising-type decomposition-algorithms with an over-complete wavelet transform, through applying the Besov-norm approximation to the variational problems. Our methods decompose a noisy color image without producing undesirable low-frequency colored artifacts in its separated BV-component, and they achieve desirable high-quality color-image decomposition, which is very robust against colored random noise.

  19. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  20. Mass movement slope streaks imaged by the Mars Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Sullivan, Robert; Thomas, Peter; Veverka, Joseph; Malin, Michael; Edgett, Kenneth S.

    2001-10-01

    Narrow, fan-shaped dark streaks on steep Martian slopes were originally observed in Viking Orbiter images, but a definitive explanation was not possible because of resolution limitations. Pictures acquired by the Mars Orbiter Camera (MOC) aboard the Mars Global Surveyor (MGS) spacecraft show innumerable examples of dark slope streaks distributed widely, but not uniformly, across the brighter equatorial regions, as well as individual details of these features that were not visible in Viking Orbiter data. Dark slope streaks (as well as much rarer bright slope streaks) represent one of the most widespread and easily recognized styles of mass movement currently affecting the Martian surface. New dark streaks have formed since Viking and even during the MGS mission, confirming earlier suppositions that higher contrast dark streaks are younger, and fade (brighten) with time. The darkest slope streaks represent ~10% contrast with surrounding slope materials. No small outcrops supplying dark material (or bright material, for bright streaks) have been found at streak apexes. Digitate downslope ends indicate slope streak formation involves a ground-hugging flow subject to deflection by minor topographic obstacles. The model we favor explains most dark slope streaks as scars from dust avalanches following oversteepening of air fall deposits. This process is analogous to terrestrial avalanches of oversteepened dry, loose snow which produce shallow avalanche scars with similar morphologies. Low angles of internal friction, typically 10-30° for terrestrial loess and clay materials, suggest that mass movement of (low-cohesion) Martian dusty air fall is possible on a wide range of gradients. Martian gravity, presumed low density of the air fall deposits, and thin (unresolved by MOC) failed layer depths imply extremely low cohesive strength at time of failure, consistent with expectations for an air fall deposit of dust particles. As speed increases during a dust avalanche, a
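    The role of the internal friction angle in the dust-avalanche model can be illustrated with the standard infinite-slope stability relation. This is textbook soil mechanics with invented numbers, not a calculation from the paper:

```python
import math

def factor_of_safety(slope_deg, friction_deg, cohesion=0.0,
                     unit_weight=1.0, depth=1.0):
    """Infinite-slope stability for a thin dry layer:
    FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta));
    with zero cohesion this reduces to tan(phi)/tan(beta), so a cohesionless
    deposit fails once the slope beta exceeds the friction angle phi."""
    b = math.radians(slope_deg)
    p = math.radians(friction_deg)
    resisting = cohesion + unit_weight * depth * math.cos(b)**2 * math.tan(p)
    driving = unit_weight * depth * math.sin(b) * math.cos(b)
    return resisting / driving

# A cohesionless layer sloped exactly at its friction angle is at incipient
# failure (FS = 1); a gentler slope is stable (FS > 1)
fs_critical = factor_of_safety(30.0, 30.0)
fs_gentle = factor_of_safety(15.0, 30.0)
```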

  1. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector, with 1024 by 1024 pixels, provides the stability for a multiyear mission and meets the high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface at medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth; this is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface

  2. Comparison of three thermal cameras with canine hip area thermographic images.

    PubMed

    Vainionpää, Mari; Raekallio, Marja; Tuhkalainen, Elina; Hänninen, Hannele; Alhopuro, Noora; Savolainen, Maija; Junnila, Jouni; Hielm-Björkman, Anna; Snellman, Marjatta; Vainio, Outi

    2012-12-01

    The objective of this study was to compare the method of thermography using three thermal cameras of different resolution and basic software for thermographic images, separating the two persons taking the thermographic images (thermographers) from the three persons interpreting them (interpreters). This was accomplished by studying the repeatability between thermographers and between interpreters. Forty-nine client-owned dogs of 26 breeds were enrolled in the study. The thermal cameras used had resolutions of 80 × 80, 180 × 180 and 320 × 240 pixels. Two trained thermographers took thermographic images of the hip area of all dogs with all three cameras, a total of six thermographic images per dog. The thermographic images were analyzed using appropriate computer software, FLIR QuickReport 2.1. Three trained interpreters independently evaluated the mean temperatures of the hip joint areas in the six thermographic images of each dog. The repeatability between thermographers was >0.975 with the two higher-resolution cameras and 0.927 with the lowest-resolution camera. The repeatability between interpreters was >0.97 with each camera; thus, the between-interpreter variation was small. The repeatability between thermographers and interpreters was considered high enough to encourage further studies of thermographic imaging in dogs.

  3. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully; this compensates for the limitation of stitching methods that produce realistic-looking images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  4. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System.

    PubMed

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-04-11

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully; this compensates for the limitation of stitching methods that produce realistic-looking images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.

  5. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    NASA Astrophysics Data System (ADS)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

    Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing RVD missions in manned space engineering, especially when the automatic control system is out of work. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera, from which the relative position and attitude of the chase and target spacecraft can be determined. The size, position, brightness, and shadowing of the target in the TV camera image are therefore key to the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and lighting conditions during RVD is discussed. First, the basic principle of capturing the image of the cross drone on the target spacecraft with the TV camera is analyzed theoretically, on which basis the strategy of manual-control RVD is discussed in detail. Second, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented; the brightness of, and reflection from, the target spacecraft under different lighting conditions is described; and the shadow cast on the cross drone by the chase or target spacecraft is analyzed. Third, a method for predicting on-orbit TV camera images for a given orbit and lighting condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness, and shadowing of the target spacecraft in the TV camera image for a typical orbit are simulated. Comparison of the simulated images with real images captured by the TV camera on the Shenzhou manned spacecraft shows that the prediction method is reasonable.
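    The displayed-size-versus-distance relationship mentioned above follows from simple pinhole projection. A sketch with hypothetical lens and sensor parameters (the abstract does not give the actual camera parameters):

```python
def apparent_size_px(real_size_m, distance_m, focal_len_mm, pixel_pitch_um):
    """Pinhole projection: image size = f * W / d, converted to pixels."""
    image_size_mm = focal_len_mm * real_size_m / distance_m
    return image_size_mm * 1000.0 / pixel_pitch_um

# Hypothetical numbers: a 1 m target feature, 25 mm lens, 10 um pixels.
# The displayed size scales inversely with the relative distance.
px_100m = apparent_size_px(1.0, 100.0, 25.0, 10.0)  # 25 px at 100 m
px_10m = apparent_size_px(1.0, 10.0, 25.0, 10.0)    # 250 px at 10 m
```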

  6. No-reference sharpness assessment of camera-shaken images by analysis of spectral structure.

    PubMed

    Oh, Taegeun; Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad

    2014-12-01

    The tremendous explosion of image-, video-, and audio-enabled mobile devices, such as tablets and smartphones, in recent years has led to an associated dramatic increase in the volume of captured and distributed multimedia content. In particular, the number of digital photographs captured annually is approaching 100 billion in the U.S. alone. These pictures are increasingly being acquired by inexperienced, casual users under highly diverse conditions, leading to a plethora of distortions, including blur induced by camera shake. In order to automatically detect, correct, or cull images impaired by shake-induced blur, it is necessary to develop distortion models specific to and suitable for assessing the sharpness of camera-shaken images. Toward this goal, we have developed a no-reference framework for automatically predicting the perceptual quality of camera-shaken images based on their spectral statistics. Two kinds of features are defined that capture blur induced by camera shake. One is a directional feature, which measures the variation of the image spectrum across orientations. The second captures the shape, area, and orientation of the spectral contours of camera-shaken images. We demonstrate the performance of an algorithm derived from these features on new and existing databases of images distorted by camera shake.
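    The directional feature can be illustrated with a toy statistic: the variance of log-spectral energy across orientation bins, which rises when blur suppresses energy along one direction. This is a simplified stand-in for the authors' features, not their algorithm:

```python
import numpy as np

def directional_spectral_variation(img, n_bins=12):
    """Variance of log spectral energy across orientation bins. Directional
    blur (e.g. camera shake) attenuates frequencies along the shake
    direction, raising this statistic relative to a sharp image."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    theta = np.mod(np.arctan2(y, x), np.pi)
    bins = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    energy = np.array([f[bins == b].sum() for b in range(n_bins)])
    return np.var(np.log(energy + 1e-12))

rng = np.random.default_rng(1)
sharp = rng.standard_normal((64, 64))
# A horizontal box blur simulates shake along one direction
kernel = np.ones(9) / 9.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'),
                              1, sharp)
# The blurred image shows larger spectral anisotropy than the sharp one
```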

  7. Imaging multi-energy gamma-ray fields with a Compton scatter camera

    NASA Astrophysics Data System (ADS)

    Martin, J. B.; Dogan, N.; Gormley, J. E.; Knoll, G. F.; O'Donnell, M.; Wehe, D. K.

    1994-08-01

    Multi-energy gamma-ray fields have been imaged with a ring Compton scatter camera (RCC). The RCC is intended for industrial applications, where there is a need to image multiple gamma-ray lines from spatially extended sources. To our knowledge, the ability of a Compton scatter camera to perform this task had not previously been demonstrated. Gamma rays with different incident energies are distinguished based on the total energy deposited in the camera elements. For multiple gamma-ray lines, separate images are generated for each line energy. Random coincidences and other interfering interactions have been investigated. Camera response has been characterized for energies from 0.511 to 2.75 MeV. Different gamma-ray lines from extended sources have been measured and images reconstructed using both direct and iterative algorithms.
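    Event reconstruction in a Compton scatter camera rests on the Compton kinematics relation between the deposited energies and the scattering angle. A minimal sketch under textbook assumptions (two-interaction events with full absorption in the second detector); the energies below are illustrative:

```python
import math

MEC2 = 0.511  # electron rest energy, MeV

def compton_cone_angle_deg(e1, e2):
    """Scatter angle from the energy e1 deposited in the scatter detector
    and e2 absorbed in the second detector (both MeV), with incident
    energy e1 + e2:  cos(theta) = 1 - MEC2 * (1/e2 - 1/(e1 + e2))."""
    cos_t = 1.0 - MEC2 * (1.0 / e2 - 1.0 / (e1 + e2))
    return math.degrees(math.acos(cos_t))

# A 0.662 MeV photon (Cs-137) depositing 0.2 MeV in the scatter detector;
# the summed energy 0.662 MeV is what identifies the gamma-ray line.
angle = compton_cone_angle_deg(0.2, 0.462)
```

The source direction is then constrained to a cone of this half-angle around the scatter axis, and images are built from the intersection of many such cones.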

  8. Development of gamma ray imaging cameras. Progress report for second year

    SciTech Connect

    Wehe, D.K.; Knoll, G.F.

    1992-05-28

    In January 1990, the Department of Energy initiated this project with the objective of developing the technology for general-purpose, portable gamma ray imaging cameras useful to the nuclear industry. The ultimate goal of this R&D initiative is to develop the analog of the color television camera, where the camera would respond to gamma rays instead of visible photons. The two-dimensional real-time image display would indicate the geometric location of the radiation relative to the camera's orientation, while the brightness and "color" would indicate the intensity and energy of the radiation (and hence identify the emitting isotope). There is a strong motivation for developing such a device for applications within the nuclear industry, for both high- and low-level waste repositories, for environmental restoration problems, and for space and fusion applications. At present, there are no general-purpose radiation cameras capable of producing spectral images for such practical applications. At the time of this writing, work on this project has been underway for almost 18 months. Substantial progress has been made in the project's two primary areas: mechanically-collimated camera (MCC) and electronically-collimated camera (ECC) designs. We present developments covering the mechanically-collimated design, and then discuss the efforts on the electronically-collimated camera. The renewal proposal addresses the continuing R&D efforts for the third-year effort. 8 refs.

  9. A 58 x 62 pixel Si:Ga array camera for 5 - 14 micron astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Folz, W. C.; Woods, L. A.; Wooldridge, J. B.

    1989-01-01

    A new infrared array camera system has been successfully applied to high-background 5 - 14 micron astronomical imaging photometry observations, using a hybrid 58 x 62 pixel Si:Ga array detector. The off-axis reflective optical design, incorporating a parabolic camera mirror, circular variable filter wheel, and cold aperture stop, produces diffraction-limited images with negligible spatial distortion and minimum thermal background loading. The camera electronic system architecture is divided into three subsystems: (1) a high-speed analog front end, including a 2-channel preamp module, array address timing generator, and bias power supplies; (2) two 16-bit, 3-microsec-per-conversion A/D converters interfaced to an arithmetic array processor; and (3) an LSI 11/73 camera control and data analysis computer. The background-limited observational noise performance of the camera at the NASA/IRTF telescope is NEFD (1 sigma) = 0.05 Jy/pixel min^(1/2).

  10. Computer-generated holograms using multiview images captured by a small number of sparsely arranged cameras.

    PubMed

    Ohsawa, Yusuke; Yamaguchi, Kazuhiro; Ichikawa, Tsubasa; Sakamoto, Yuji

    2013-01-01

    Computer-generated holograms (CGHs) using multiview images (MVIs) are holograms generated with multiple ordinary cameras. This process typically requires a huge number of cameras arranged at high density. In this paper, we propose a method that improves CGH generation from MVIs by obtaining the MVIs from voxel models rather than directly from cameras. In the proposed method, the voxel model is generated using the shape-from-silhouette (SFS) technique. We perform SFS using a small number of sparsely arranged cameras to create voxel models of objects and then generate the required number of images from these models by volume rendering. This enables us to generate CGHs from MVIs with just a small number of sparsely arranged cameras. Moreover, the proposed method can place CGHs generated from MVIs at arbitrary positions.
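    The shape-from-silhouette step can be illustrated with a toy voxel-carving sketch: a voxel survives only if its projection falls inside every camera's silhouette. The orthographic projections and the tiny grid below are illustrative assumptions; real SFS uses calibrated perspective cameras.

```python
def carve(silhouettes, n):
    """Keep voxels whose projection lies inside every silhouette.

    `silhouettes` is a list of (projection function, occupied-pixel set)
    pairs; axis-aligned orthographic projections stand in for real
    calibrated cameras in this toy sketch.
    """
    voxels = set()
    for x in range(n):
        for y in range(n):
            for z in range(n):
                if all(proj(x, y, z) in mask for proj, mask in silhouettes):
                    voxels.add((x, y, z))
    return voxels

n = 4
# A 2x2x2 cube in the low corner of the grid, seen from two directions
front = (lambda x, y, z: (x, y), {(x, y) for x in range(2) for y in range(2)})
side  = (lambda x, y, z: (y, z), {(y, z) for y in range(2) for z in range(2)})
model = carve([front, side], n)
print(len(model))  # → 8 voxels survive both silhouettes
```

    The carved voxel set is the visual hull; rendering it from any viewpoint then yields as many synthetic MVIs as the hologram computation needs.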

  11. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application to our augmented reality visualization system for laparoscopic surgery.
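    Target registration error as used above is simply the distance between corresponding points after registration, summarized as mean ± standard deviation. A generic sketch; the point sets are made-up demo data, not the paper's measurements.

```python
import math

def target_registration_error(measured, reference):
    """Mean and std of point-to-point distances after registration.

    The same kind of summary as the 'mean TRE = 1.18 +/- 0.35 mm'
    figures quoted in the abstract (units are whatever the inputs use).
    """
    d = [math.dist(m, r) for m, r in zip(measured, reference)]
    mean = sum(d) / len(d)
    std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return mean, std

# Hypothetical 3-D fiducial positions after registration (mm)
measured  = [(1.2, 0.1, 0.0), (4.9, 3.0, 1.1), (2.0, 2.2, 5.0)]
reference = [(1.0, 0.0, 0.0), (5.0, 3.0, 1.0), (2.0, 2.0, 5.0)]
mean, std = target_registration_error(measured, reference)
print(round(mean, 3))  # → 0.188
```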

  12. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; de Jong, Jan Geert Sander; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge-coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin-stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  13. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge-coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin-stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  14. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    SciTech Connect

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-08-08

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of a m/n=1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  15. Initial Results and Field Applications of a Polarization Imaging Camera

    DTIC Science & Technology

    2010-01-01

    The SALSA linear Stokes polarization camera from Bossa Nova is described by the Remote Sensing Center, Naval Postgraduate School, Monterey, CA 93943. Electrically driven elements such as birefringent ceramics and liquid crystals have now allowed for fast time-division polarimeters. Approved for public release; distribution unlimited.

  16. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table, and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a constraint on gray level based on the corresponding coordinates of the left and right images is established, and imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ spread of the gray-level difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
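    The consistency criterion, the standard deviation σ of the per-pixel grayscale difference D(x, y), reduces to a few lines of code. A minimal sketch; the 8-bit full scale of 255 used to normalize the 3σ threshold is an assumption made for the example.

```python
import math

def consistency_sigma(left, right):
    """Standard deviation of the per-pixel gray-level difference D(x, y)."""
    diffs = [l - r for lrow, rrow in zip(left, right) for l, r in zip(lrow, rrow)]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

# Tiny demo images from the left and right optical systems (8-bit values)
left  = [[120, 121], [119, 120]]
right = [[121, 120], [120, 119]]
sigma = consistency_sigma(left, right)
# Criterion from the abstract: the 3-sigma spread should stay within 5%
# (interpreted here as 5% of the 255 full scale, an assumption)
print(3 * sigma / 255 <= 0.05)  # → True
```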

  17. Exploring the imaging properties of thin lenses for cryogenic infrared cameras

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Verdet, Sebastien; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Grulois, Tatiana; Matallah, Noura

    2016-05-01

    Designing a cryogenic camera is a good strategy for miniaturizing and simplifying an infrared camera that uses a cooled detector. Indeed, integrating the optics inside the cold shield makes it simple to athermalize the design, guarantees a cold pupil, and relaxes the constraint of a long back focal length for short-focal-length systems. In this way, cameras made of a single lens or two lenses are viable systems with good optical features and good stability in image correction. However, this involves a relatively significant additional optical mass inside the dewar and thus increases the cool-down time of the camera. ONERA is currently exploring a minimalist strategy consisting of giving an imaging function to the thin optical plates already found in conventional dewars. In this way, we can make a cryogenic camera that has the same cool-down time as a traditional dewar without an imaging function. Two examples will be presented. The first is a camera using a dual-band infrared detector, made of a lens outside the dewar and a lens inside the cold shield, the latter having the main optical power of the system; we were able to design a cold plano-convex lens with a thickness of less than 1 mm. The second example is an evolution of a former cryogenic camera called SOIE: we replaced the cold meniscus with a plano-convex Fresnel lens, decreasing the optical thermal mass by 66%. The performances of both cameras will be compared.

  18. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  19. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 µm. The UCXp camera has many advantages compared with other cameras of its generation, with multiple lenses exposed almost simultaneously and no oblique lenses. The camera has a complex imaging process, whose principle will be detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, will be emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  20. Recognizable-image selection for fingerprint recognition with a mobile-device camera.

    PubMed

    Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie

    2008-02-01

    This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as a fingerprint image that includes characteristics sufficient to discriminate an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way, because the obtained images may not be adequate for fingerprint recognition even if they are properly focused. A recognizable image has to meet the following two conditions. First, the valid region in a recognizable image should be sufficiently large compared with that of nonrecognizable images. Here, a valid region is a well-focused part in which ridges are clearly distinguishable from valleys. To select valid regions, this paper proposes a new focus-measurement algorithm using the secondary partial derivatives and a quality estimation utilizing the coherence and symmetry of the gradient distribution. Second, the rolling and pitching degrees of a finger measured from the camera plane should be within certain limits for a recognizable image. The position of a core point and the contour of a finger are used to estimate the degrees of rolling and pitching. Experimental results show that our proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting recognizable images.
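    A focus measure built from secondary partial derivatives can be approximated by a discrete Laplacian, as sketched below. This is a simplified stand-in for the paper's method, which additionally uses coherence and symmetry of the gradient distribution; the test patterns are toy data.

```python
def laplacian_focus(img):
    """Focus measure from second partial derivatives (discrete Laplacian).

    Well-focused ridge/valley patterns have strong second derivatives,
    so they score high; blurred or flat regions score low.
    """
    h, w = len(img), len(img[0])
    score = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            score += abs(lap)
    return score / ((h - 2) * (w - 2))

# Toy "ridges": alternating bright/dark columns vs. a defocused flat patch
sharp   = [[255 if x % 2 == 0 else 0 for x in range(6)] for _ in range(6)]
blurred = [[128 for _ in range(6)] for _ in range(6)]
print(laplacian_focus(sharp) > laplacian_focus(blurred))  # → True
```

    Thresholding this score per block would give the kind of valid-region map the abstract describes, before the separate roll/pitch check.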

  1. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  2. Evaluation of detector material and radiation source position on Compton camera's ability for multitracer imaging.

    PubMed

    Uche, C Z; Round, W H; Cree, M J

    2012-09-01

    We present a study on the effects of detector material, radionuclide source and source position on the Compton camera, aimed at realistic characterization of the camera's performance in multitracer imaging as it relates to brain imaging. The GEANT4 Monte Carlo simulation software was used to model the physics of radiation transport and interactions with matter. Silicon (Si) and germanium (Ge) detectors were evaluated for the scatterer, and cadmium zinc telluride (CZT) and cerium-doped lanthanum bromide (LaBr(3):Ce) were considered for the absorber. Image quality analyses suggest that the use of Si as the scatterer and CZT as the absorber would be preferred. Nevertheless, the two simulated Compton camera models (Si/CZT and Si/LaBr(3):Ce Compton cameras) considered in this study demonstrated good capabilities for multitracer imaging, in that four radiotracers within the nuclear medicine energy range are clearly visualized by the cameras. It is found, however, that beyond a range difference of about 2 cm for (113m)In and (18)F radiotracers in a brain phantom, there may be a need to rotate the Compton camera for efficient brain imaging.

  3. Suite of proposed imaging performance metrics and test methods for fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Lock, Andrew; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements for TIC as used by the fire service. This standard will include performance requirements for TIC design robustness and image quality. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods based on the harsh operating environment and limitations of use particular to the fire service has been proposed for inclusion in the standard. The performance metrics include large area contrast, effective temperature range, spatial resolution, nonuniformity, and thermal sensitivity. Test methods to measure TIC performance for these metrics are in various stages of development. An additional procedure, image recognition, has also been developed to facilitate the evaluation of TIC design robustness. The pass/fail criteria for each of these imaging performance metrics are derived from perception tests in which image contrast, brightness, noise, and spatial resolution are degraded to the point that users can no longer consistently perform tasks involving TIC due to poor image quality.

  4. X-ray imaging using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Winch, N. M.; Edgar, A.

    2011-10-01

    The recent advancements in consumer-grade digital camera technology and the introduction of high-resolution, high-sensitivity CsBr:Eu²⁺ storage phosphor imaging plates make possible a new cost-effective technique for X-ray imaging. The imaging plate is bathed with red stimulating light by high-intensity light-emitting diodes, and the photostimulated image is captured with a digital single-lens reflex (SLR) camera. A blue band-pass optical filter blocks the stimulating red light but transmits the blue photostimulated luminescence. Using a Canon 5D Mk II camera and an f/1.4 wide-angle lens, the optical image of a 240×180 mm² Konica CsBr:Eu²⁺ imaging plate from a position 230 mm in front of the camera lens can be focussed so as to laterally fill the 35×23.3 mm² camera sensor, and recorded in 2808×1872 pixel elements, corresponding to an equivalent pixel size on the plate of 88 μm. The analogue-to-digital conversion from the camera electronics is 13 bits, but the dynamic range of the imaging system as a whole is limited in practice by noise to about 2.5 orders of magnitude. The modulation transfer function falls to 0.2 at a spatial frequency of 2.2 line pairs/mm. The limiting factor of the spatial resolution is light scattering in the plate rather than the camera optics. The limiting factors for signal-to-noise ratio are shot noise in the light and dark noise in the CMOS sensor. Good-quality images of high-contrast objects can be recorded with doses of approximately 1 mGy. The CsBr:Eu²⁺ plate has approximately three times the readout sensitivity of a similar BaFBr:Eu²⁺ plate.

  5. A Compton camera prototype for prompt gamma medical imaging

    NASA Astrophysics Data System (ADS)

    Thirolf, P. G.; Aldawood, S.; Böhmer, M.; Bortfeldt, J.; Castelhano, I.; Dedes, G.; Fiedler, F.; Gernhäuser, R.; Golnik, C.; Helmbrecht, S.; Hueso-González, F.; Kolff, H. v. d.; Kormoll, T.; Lang, C.; Liprandi, S.; Lutter, R.; Marinšek, T.; Maier, L.; Pausch, G.; Petzoldt, J.; Römer, K.; Schaart, D.; Parodi, K.

    2016-05-01

    A Compton camera prototype for position-sensitive detection of prompt γ rays from proton-induced nuclear reactions is being developed in Garching. The detector system allows tracking of the Compton-scattered electrons. The camera consists of a monolithic LaBr3:Ce scintillation absorber crystal, read out by a multi-anode PMT, preceded by a stacked array of 6 double-sided silicon strip detectors acting as scatterers. The LaBr3:Ce crystal has been characterized with radioactive sources. Online commissioning measurements were performed with a pulsed deuteron beam at the Garching Tandem accelerator and with a clinical proton beam at the OncoRay facility in Dresden. The determination of the interaction point of the photons in the monolithic crystal was investigated.

  6. Imaging high-dimensional spatial entanglement with a camera.

    PubMed

    Edgar, M P; Tasca, D S; Izdebski, F; Warburton, R E; Leach, J; Agnew, M; Buller, G S; Boyd, R W; Padgett, M J

    2012-01-01

    The light produced by parametric down-conversion shows strong spatial entanglement that leads to violations of EPR criteria for separability. Historically, such studies have been performed by scanning a single-element, single-photon detector across a detection plane. Here we show that modern electron-multiplying charge-coupled device cameras can measure correlations in both position and momentum across a multi-pixel field of view. This capability allows us to observe entanglement of around 2,500 spatial states and demonstrate Einstein-Podolsky-Rosen type correlations by more than two orders of magnitude. More generally, our work shows that cameras can lead to important new capabilities in quantum optics and quantum information science.

  7. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative: visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  8. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    SciTech Connect

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and fast-readout, fast-shuttered cameras is discussed. Their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  9. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  10. High-resolution position-sensitive proportional counter camera for radiochromatographic imaging

    SciTech Connect

    Schuresko, D.D.; Kopp, M.K.; Harter, J.A.; Bostick, W.D.

    1988-12-01

    A high-resolution proportional counter camera for imaging two-dimensional (2-D) distributions of radionuclides is described. The camera can accommodate wet or dry samples that are separated from the counter gas volume by a 6-µm Mylar membrane. Using 95% Xe-5% CO₂ gas at 3-MPa pressure and electronic collimation based upon pulse energy discrimination, the camera's performance characteristics for ¹⁴C distributions are as follows: active area, 10 by 10 cm; position resolution, 0.5 mm; total background, 300 disintegrations per minute; and count-rate capability, 10⁵ disintegrations per second. With computerized data acquisition, the camera is a significant improvement in analytical instrumentation for imaging 2-D radionuclide distributions over present-day commercially available technology. (Note: This manuscript was completed in July 1983). 13 refs., 10 figs.

  11. Design Considerations Of A Compton Camera For Low Energy Medical Imaging

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Lazarus, I.; Judson, D. S.; Nolan, P. J.; Oxley, D. C.; Simpson, J.

    2009-12-01

    Development of a Compton camera for low energy medical imaging applications is underway. The ProSPECTus project aims to utilize position sensitive detectors to generate high quality images using electronic collimation. This method has the potential to significantly increase the imaging efficiency compared with mechanically collimated SPECT systems, a highly desirable improvement on clinical systems. Design considerations encompass the geometrical optimisation and evaluation of image quality from the system which is to be built and assessed.

  12. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.

  13. Microchannel plate pinhole camera for 20 to 100 keV x-ray imaging

    SciTech Connect

    Wang, C.L.; Leipelt, G.R.; Nilson, D.G.

    1984-10-03

    We present the design and construction of a sensitive pinhole camera for imaging suprathermal x-rays. Our device is a pinhole camera consisting of four filtered pinholes and a microchannel plate electron multiplier for x-ray detection and signal amplification. We report successful imaging of 20, 45, 70, and 100 keV x-ray emissions from fusion targets at our Novette laser facility. Such imaging reveals features of the transport of hot electrons and provides views deep inside the target.

  14. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2016-07-12

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  15. A high-resolution airborne four-camera imaging system for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  16. A Compton camera for spectroscopic imaging from 100keV to 1MeV

    NASA Astrophysics Data System (ADS)

    Earnhart, Jonathan Raby Dewitt

    The objective of this work is to investigate Compton camera technology for spectroscopic imaging of gamma rays in the 100 keV to 1 MeV range. An efficient, specific-purpose Monte Carlo code was developed to investigate the image formation process in Compton cameras. The code is based on a pathway sampling technique with extensive use of variance reduction techniques. It includes detailed Compton scattering physics, including incoherent scattering functions, Doppler broadening, and multiple scattering. Experiments were performed with two different camera configurations for a scene containing a 75Se source and a 137Cs source. The first camera was based on a fixed silicon detector in the front plane and a CdZnTe detector mounted on the stage. The second configuration was based on two CdZnTe detectors. Both systems were able to reconstruct images of 75Se, using the 265 keV line, and 137Cs, using the 662 keV line. Only the silicon-CdZnTe camera was able to resolve the low-intensity 400 keV line of 75Se. Neither camera was able to reconstruct the 75Se source location using the 136 keV line. The energy resolution of the silicon-CdZnTe camera system was 4% at 662 keV. This camera reproduced the location of the 137Cs source by event-circle image reconstruction with angular resolutions of 10° for a source on the camera axis and 14° for a source 30° off axis. Typical detector pair efficiencies were measured as 3 × 10^-11 at 662 keV. The dual-CdZnTe camera had an energy resolution of 3.2% at 662 keV. This camera reproduced the location of the 137Cs source by event-circle image reconstruction with angular resolutions of 8° for a source on the camera axis and 12° for a source 20° off axis. Typical detector pair efficiencies were measured as 7 × 10^-11 at 662 keV. Of the two prototype camera configurations tested, the silicon-CdZnTe configuration had superior imaging characteristics. This configuration is less sensitive to effects caused by source decay cascades and random
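    The event-circle reconstruction used by both prototypes rests on the Compton kinematic relation between the two energy deposits and the scattering angle. A minimal sketch (energies in keV; full-energy absorption in the second detector is assumed):

```python
import math

ME_C2 = 510.999  # electron rest energy, keV

def compton_angle_deg(e_scatter_kev, e_absorb_kev):
    """Scattering angle from the energies deposited in the scatter and
    absorber detectors. The angle defines a cone of possible source
    directions (an event circle on a distant image plane)."""
    e_total = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ME_C2 * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        return None  # kinematically inconsistent event, rejected
    return math.degrees(math.acos(cos_theta))
```

In practice each valid event contributes one such cone, and the image forms where many cones intersect.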

  17. The Geospectral Camera: a Compact and Geometrically Precise Hyperspectral and High Spatial Resolution Imager

    NASA Astrophysics Data System (ADS)

    Delauré, B.; Michiels, B.; Biesemans, J.; Livens, S.; Van Achteren, T.

    2013-04-01

    Small unmanned aerial vehicles are increasingly being employed for environmental monitoring at local scale, which drives the demand for compact and lightweight spectral imagers. This paper describes the geospectral camera, a novel compact imager concept. The camera is built around an innovative detector which has two sensor elements on a single chip and therefore offers the functionality of two cameras within the volume of a single one. The two sensor elements allow the camera to derive both spectral and geometric information (high spatial resolution imagery and a digital surface model) of the scene of interest. A first geospectral camera prototype has been developed. It uses a linear variable optical filter installed in front of one of the two sensors of the MEDUSA CMOS imager chip. An accompanying software approach has been developed which exploits the simultaneous information from the two sensors in order to extract an accurate spectral image product. This method has been functionally demonstrated by applying it to image data acquired during an airborne acquisition.

  18. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
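    The iterative 3D landmark estimate described above reduces, at each step, to finding the point closest to a bundle of viewing rays. A least-squares sketch of that geometric core (an illustration, not the LMDB Builder's actual implementation):

```python
import numpy as np

def triangulate_rays(centers, directions):
    """Least-squares 3D point minimizing the summed squared distance to
    a set of rays, each given by a camera centre and a viewing
    direction. Solves (sum of P_i) p = sum of P_i c_i, where P_i
    projects orthogonally to ray i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```

With current camera pose estimates supplying the rays, this yields the landmark position; refined poses then update the rays on the next iteration.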

  19. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    SciTech Connect

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° fields of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  20. A high-speed, pressurised multi-wire gamma camera for dynamic imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Barr, A.; Bonaldi, L.; Carugno, G.; Charpak, G.; Iannuzzi, D.; Nicoletto, M.; Pepato, A.; Ventura, S.

    2002-01-01

    High count rate detectors are of particular interest in nuclear medicine as they permit lower radiation doses to be received by the patient and allow dynamic images of high statistical quality to be obtained. We have developed a high-speed gamma camera based on a multi-wire proportional chamber. The chamber is filled with a xenon gas mixture and has been operated at pressures ranging from 5 to 10 bar. With an active imaging area of 25 cm×25 cm, the chamber has been equipped with an advanced, high rate, digital, electronic read-out system which carries out pulse shaping, energy discrimination, XY coincidence and cluster selection at speeds of up to a few megahertz. In order to ensure stable, long-term operation of the camera without degradation in performance, a gas purification system was designed and integrated into the camera. Measurements have been carried out to determine the properties and applicability of the camera using photon sources in the 20-120 keV energy range. We present some design features of the camera and selected results obtained from preliminary measurements carried out to measure its performance characteristics. Initial images obtained from the camera will also be presented.

  1. Initial synchroscan streak camera imaging at the A0 photoinjector

    SciTech Connect

    Lumpkin, A.H.; Ruan, J.; /Fermilab

    2008-04-01

    At the Fermilab A0 photoinjector facility, bunch-length measurements of the laser micropulse and the e-beam micropulse have been done in the past with a single-sweep module of the Hamamatsu C5680 streak camera with an intrinsic shot-to-shot trigger jitter of 10 to 20 ps. We have upgraded the camera system with the synchroscan module tuned to 81.25 MHz to provide synchronous summing capability with less than 1.5-ps FWHM trigger jitter and a phase-locked delay box to provide phase stability of ~1 ps over tens of minutes. This allowed us to measure both the UV laser pulse train at 244 nm and the e-beam via optical transition radiation (OTR). Due to the low electron beam energies and OTR signals, we typically summed over 50 micropulses with 1 nC per micropulse. We also did electron beam bunch length vs. micropulse charge measurements to identify a significant e-beam micropulse elongation from 10 to 30 ps (FWHM) for charges from 1 to 4.6 nC. This effect is attributed to space-charge effects in the PC gun as reproduced by ASTRA calculations. Chromatic temporal dispersion effects in the optics were also characterized and will be reported.

  2. Enhancing swimming pool safety by the use of range-imaging cameras

    NASA Astrophysics Data System (ADS)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools only use lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing us to distinguish swimmers at the surface from drowning people underwater, while keeping the large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light, but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light of 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections of a partially-reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety by adding a post-processing filter and using another light source.
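    The pulsed Time-of-Flight principle such a system uses converts a round-trip pulse delay into range. A minimal sketch; the refractive-index argument is an illustrative assumption for a path through water, not the paper's calibration:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s, refractive_index=1.0):
    """Range from a pulsed time-of-flight measurement: the pulse travels
    out and back, so the distance is half the round-trip optical path.
    Passing n = 1.33 sketches a correction for light slowed in water."""
    return (C / refractive_index) * round_trip_s / 2.0
```

A 20 ns round trip in air corresponds to roughly 3 m of range; the same delay through water maps to a shorter distance.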

  3. IR camera system with an advanced image processing technologies

    NASA Astrophysics Data System (ADS)

    Ohkubo, Syuichi; Tamura, Tetsuo

    2016-05-01

    We have developed image processing technologies for resolving issues caused by the inherent UFPA (uncooled focal plane array) sensor characteristics in order to broaden its applications. For example, the large time constant of an uncooled IR (infrared) sensor limits its application field, because motion blur arises when monitoring an object moving at high speed. The developed image processing technologies can eliminate the blur and retrieve an image almost equivalent to one of a stationary object. This image processing is based on the idea that the output of the IR sensor can be construed as the convolution of the radiated IR energy from the object and the impulse response of the IR sensor. With knowledge of the impulse response and the moving speed of the object, the IR energy from the object can be deconvolved from the observed images. We have successfully retrieved the image without blur using an IR sensor with a 15 ms time constant under conditions in which the object is moving at a speed of about 10 pixels/60 Hz. Image processing for reducing FPN (fixed pattern noise) has also been developed. A UFPA having responsivity in a narrow wavelength region, e.g., around 8 μm, is appropriate for measuring the surface of glass. However, it suffers from severe FPN due to lower sensitivity compared with the 8-13 μm band. The developed image processing exploits images of the shutter itself, and can reduce FPN significantly.
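    The deconvolution idea described above can be sketched as a one-dimensional Wiener filter along the motion axis, assuming the sensor's impulse response is known; the regularization constant is an illustrative assumption, not the authors' tuning:

```python
import numpy as np

def wiener_deblur(observed, impulse_response, noise_to_signal=1e-3):
    """Deconvolve the observed signal with the known impulse response
    (1-D along the motion axis) via a Wiener filter in the frequency
    domain. The noise-to-signal term regularizes frequencies where the
    blur transfer function is near zero."""
    n = observed.shape[-1]
    H = np.fft.fft(impulse_response, n)                   # blur transfer function
    G = np.fft.fft(observed, n)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)  # Wiener filter
    return np.real(np.fft.ifft(W * G))
```

Blurring a sharp feature with a known kernel and applying the filter recovers the feature at its original position.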

  4. Geometric Calibration of the Orion Optical Navigation Camera using Star Field Images

    NASA Astrophysics Data System (ADS)

    Christian, John A.; Benhacine, Lylia; Hikes, Jacob; D'Souza, Christopher

    2016-12-01

    The Orion Multi Purpose Crew Vehicle will be capable of autonomously navigating in cislunar space using images of the Earth and Moon. Optical navigation systems, such as the one proposed for Orion, require the ability to precisely relate the observed location of an object in a 2D digital image with the true corresponding line-of-sight direction in the camera's sensor frame. This relationship is governed by the camera's geometric calibration parameters — typically described by a set of five intrinsic parameters and five lens distortion parameters. While pre-flight estimations of these parameters will exist, environmental conditions often necessitate on-orbit recalibration. This calibration will be performed for Orion using an ensemble of star field images. This manuscript provides a detailed treatment of the theory and mathematics that will form the foundation of Orion's on-orbit camera calibration. Numerical results and examples are also presented.
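    The mapping from line-of-sight direction to pixel that such a calibration estimates can be sketched with five intrinsics and a five-parameter Brown radial/tangential distortion model (a common convention for illustration; the actual Orion parameterization may differ):

```python
def project(los, fx, fy, skew, cx, cy, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Map a line-of-sight direction in the camera frame to pixel
    coordinates: perspective division, polynomial radial distortion,
    tangential distortion, then the affine intrinsic mapping."""
    x, y = los[0] / los[2], los[1] / los[2]          # normalized coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    u = fx * xd + skew * yd + cx
    v = fy * yd + cy
    return u, v
```

Calibration from star fields inverts this model: the catalog supplies true directions, the image supplies (u, v), and the ten parameters are estimated to make them agree.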

  5. Evolution of INO Uncooled Infrared Cameras Towards Very High Resolution Imaging

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Jerominek, Hubert; Chevalier, Claude; Le Noc, Loïc; Tremblay, Bruno; Alain, Christine; Martel, Anne; Blanchard, Nathalie; Morissette, Martin; Mercier, Luc; Gagnon, Lucie; Couture, Patrick; Desnoyers, Nichola; Demers, Mathieu; Lamontagne, Frédéric; Lévesque, Frédéric; Verreault, Sonia; Duchesne, François; Lambert, Julie; Girard, Marc; Savard, Maxime; Châteauneuf, François

    2011-02-01

    Over the years, INO has been involved in the development of various uncooled infrared devices. Today, infrared imagers exhibit good resolution and find their niche in numerous applications. Nevertheless, there is still a trend toward high-resolution imaging for demanding applications. At the same time, low-resolution imagers are sought as low-cost imaging solutions for mass-market applications. These two opposite requirements reflect the evolution of infrared cameras from the early days, when only low pixel-count FPAs were available, to the megapixel-count FPAs of recent years. This paper reviews the evolution of infrared camera technologies at INO, from the uncooled bolometer detector capability up to the recent achievement of a 1280×960 pixel infrared camera core using INO's patented microscan technology.

  6. A 5-18 micron array camera for high-background astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Folz, Walter C.; Woods, Lawrence A.; Varosi, Frank

    1992-01-01

    A new infrared array camera system using a Hughes/SBRC 58 x 62 pixel hybrid Si:Ga array detector has been successfully applied to high-background 5-18-micron astronomical imaging observations. The off-axis reflective optical system minimizes thermal background loading and produces diffraction-limited images with negligible spatial distortion. The noise equivalent flux density (NEFD) of the camera at 10 microns on the 3.0-m NASA/Infrared Telescope Facility with broadband interference filters and 0.26 arcsec pixel is NEFD = 0.01 Jy/sq rt min per pixel (1sigma), and it operates at a frame rate of 30 Hz with no compromise in observational efficiency. The electronic and optical design of the camera, its photometric characteristics, examples of observational results, and techniques for successful array imaging in a high- background astronomical application are discussed.

  7. A novel IR polarization imaging system designed by a four-camera array

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Shao, Xiaopeng; Han, Pingli

    2014-05-01

    A novel IR polarization staring imaging system employing a four-camera array is designed for target detection and recognition, especially of man-made targets hidden in a complex battlefield. The design is based on the difference in the polarization characteristics of infrared radiation, which is particularly remarkable between artificial objects and the natural environment. The system employs four cameras simultaneously to capture the polarization difference, replacing the commonly used systems engaging only one camera. Since both types of systems have to obtain intensity images in four different directions (I0, I45, I90, I-45), the four-camera design allows better real-time capability and lower error without the mechanical rotating parts that are essential to one-camera systems. Information extraction and detailed analysis demonstrate that the captured polarization images contain valuable polarization information which can effectively increase the images' contrast and make it easier to segment the target, even a hidden target, from various scenes.
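    The four intensity images such a system records determine the linear Stokes parameters, from which the contrast-enhancing polarization quantities follow. A minimal sketch (I-45 is written here as I135):

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four analyzer orientations, plus
    the degree (DoLP) and angle (AoP) of linear polarization that are
    typically used to enhance man-made-target contrast."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (redundant average)
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 deg vs -45 deg
    dolp = math.hypot(s1, s2) / s0
    aop_deg = 0.5 * math.degrees(math.atan2(s2, s1))
    return s0, s1, s2, dolp, aop_deg
```

Unpolarized background yields DoLP near zero, while smooth artificial surfaces raise it, which is what makes segmentation easier.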

  8. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography[2] with a thermal camera include: monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  9. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
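    The matching stage described above can be sketched as a distance between extracted feature vectors, with a simple score-level fusion of the two camera modalities; the equal weighting is an illustrative assumption, not the paper's trained combination:

```python
import numpy as np

def match_score(probe, gallery):
    """Euclidean distance between feature vectors (e.g. CNN embeddings)
    of the probe image and an enrolled sample; smaller distance means a
    more likely identity match."""
    probe = np.asarray(probe, dtype=float)
    gallery = np.asarray(gallery, dtype=float)
    return float(np.linalg.norm(probe - gallery))

def fused_distance(d_visible, d_thermal, w=0.5):
    """Score-level fusion of the visible-light and thermal distances as
    a weighted sum (the weight here is a placeholder)."""
    return w * d_visible + (1.0 - w) * d_thermal
```

Verification then reduces to comparing the fused distance against a threshold chosen on validation data.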

  10. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.

  11. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

    Camera positioning and orientation is important to applications in domains such as transportation, since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used to trigger the image capture system when a vehicle reaches the image capture zone. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE), can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front-seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear-seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. Thus for defining an output to be optimized for the designed experiment, a human-defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of

  12. Driving micro-optical imaging systems towards miniature camera applications

    NASA Astrophysics Data System (ADS)

    Brückner, Andreas; Duparré, Jacques; Dannberg, Peter; Leitel, Robert; Bräuer, Andreas

    2010-05-01

    Until now, multi-channel imaging systems have been increasingly studied and approached from various directions in the academic domain due to their promising large field of view at small system thickness. However, specific drawbacks of each of the solutions have so far prevented their diffusion into the corresponding markets. The most severe problems are low image resolution and low sensitivity compared to a conventional single-aperture lens, besides the lack of a cost-efficient method of fabrication and assembly. We propose a microoptical approach to ultra-compact optics for real-time vision systems that is inspired by the compound eyes of insects. The demonstrated modules achieve VGA resolution with 700×550 pixels within an optical package of 6.8 mm × 5.2 mm and a total track length of 1.4 mm. The partial images that are separately recorded within different optical channels are stitched together to form a final image of the whole field of view by means of image processing. These software tools also correct the distortion of the individual partial images so that the final image is free of distortion. The so-called electronic cluster eyes are realized by state-of-the-art microoptical fabrication techniques and offer a resolution and sensitivity potential that makes them suitable for consumer, machine vision and medical imaging applications.

  13. Determination of visual range during fog and mist using digital camera images

    NASA Astrophysics Data System (ADS)

    Taylor, John R.; Moogan, Jamie C.

    2010-08-01

    During the winter of 2008, daily time series of images of five "unit-cell chequerboard" targets were acquired using a digital camera. The camera and targets were located in the Majura Valley approximately 3 km from Canberra airport. We show how the contrast between the black and white sections of the targets is related to the meteorological range (or standard visual range), and compare estimates of this quantity derived from images acquired during fog and mist conditions with those from the Vaisala FD-12 visibility meter operated by the Bureau of Meteorology at Canberra Airport. The two sets of ranges are consistent but show the variability of visibility in the patchy fog conditions that often prevail in the Majura Valley. Significant spatial variations of the light extinction coefficient were found to occur over the longest 570 m optical path sampled by the imaging system. Visual ranges could be estimated out to ten times the distance to the furthest target, or approximately 6 km, in these experiments. Image saturation of the white sections of the targets was the major limitation on the quantitative interpretation of the images. In the future, the camera images will be processed in real time so that the camera exposure can be adjusted to avoid saturation.
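    The contrast-to-range relation the authors exploit is Koschmieder's law: apparent contrast decays as C(d) = C0·exp(-σd), and the meteorological range is the distance at which contrast falls to the conventional 5% threshold. A minimal sketch:

```python
import math

def meteorological_range_m(c_apparent, c_inherent, distance_m,
                           threshold=0.05):
    """Meteorological (standard visual) range from the measured contrast
    of a black/white target at a known distance. The extinction
    coefficient follows from the contrast ratio, and the range from the
    5% contrast-threshold definition (so V = 3.912 / sigma)."""
    sigma = -math.log(c_apparent / c_inherent) / distance_m  # extinction, 1/m
    return -math.log(threshold) / sigma
```

If the apparent contrast at 570 m is exactly 5% of the inherent contrast, the meteorological range equals 570 m by construction.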

  14. Imaging with depth extension: where are the limits in fixed-focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness) and cost. Miniature cameras incorporating high resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of imaging module parameters on the EDOF range were analyzed for a family of high resolution CMOS modules. The parameters include various optical properties of the imaging lens and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.
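    The depth range a fixed-focus module covers is governed by the hyperfocal distance; EDOF processing effectively shrinks the usable blur criterion, pulling the near limit closer. A sketch with illustrative parameter values (not the authors' module specifications):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance of a fixed-focus lens: when focused here,
    objects from half this distance out to infinity are acceptably
    sharp for the given circle-of-confusion criterion."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm
```

For an assumed 4 mm f/2.8 mobile lens and a 2 μm circle of confusion, the hyperfocal distance is a few metres, so the conventional near limit sits around half that; a smaller effective circle of confusion (as EDOF restoration provides) moves both limits toward the camera.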

  15. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T.; Ohsuka, S.

    2017-02-01

    The Compton camera, which maps gamma-ray distributions using the kinematics of Compton scattering, is a promising detector capable of imaging across a wide energy range. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd3Al2Ga3O12 (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that at 662 keV the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that for a 137Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of simultaneous sources of different energies (22Na [511 keV], 137Cs [662 keV], and 54Mn [834 keV]).
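The Compton kinematics this record relies on reduce to one textbook relation: the two deposited energies fix the opening angle of the cone on which the source must lie. A minimal sketch (the function name and example energy split are illustrative, not from the record):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_scatter_keV, e_absorb_keV):
    """Half-angle (degrees) of the Compton cone from the two deposited energies.

    e_scatter_keV: energy left in the scatterer layer.
    e_absorb_keV:  energy of the scattered photon, absorbed in the second layer.
    Compton formula: cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), E0 = E + E'.
    """
    e_total = e_scatter_keV + e_absorb_keV
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb_keV - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_theta))

# A 662 keV (137Cs) photon depositing 200 keV in the scatterer:
theta = compton_cone_angle(200.0, 462.0)
```

Intersecting many such cones (here from multiple viewing angles, as in the record's multi-angle acquisition) yields the reconstructed source distribution.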

  16. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of the laser scanner or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement is a key advantage for augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, produced by projecting the 3D point cloud to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
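The synthetic-image step described above is a pinhole projection of the point cloud with a z-buffer, so that only the nearest point survives per pixel. A minimal sketch under the record's ideal distortion-free-camera assumption (all names and the toy geometry are illustrative):

```python
import numpy as np

def synthetic_image(points, R, t, f, cx, cy, width, height):
    """Project a 3D point cloud into a synthetic pinhole depth image.

    points: (N, 3) world coordinates; R, t: exterior orientation
    (world -> camera); f, cx, cy: interior orientation in pixels.
    A z-buffer keeps the nearest point per pixel (ideal, distortion-free).
    """
    cam = points @ R.T + t                         # world -> camera frame
    cam = cam[cam[:, 2] > 0]                       # keep points in front
    u = np.round(f * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = np.round(f * cam[:, 1] / cam[:, 2] + cy).astype(int)
    depth = np.full((height, width), np.inf)
    ok = (0 <= u) & (u < width) & (0 <= v) & (v < height)
    for ui, vi, zi in zip(u[ok], v[ok], cam[ok, 2]):
        if zi < depth[vi, ui]:                     # z-buffer test
            depth[vi, ui] = zi
    return depth

# Camera at the origin looking along +Z; two points share a pixel:
pts = np.array([[0.0, 0.0, 5.0], [0.1, 0.0, 5.0], [0.0, 0.0, 2.0]])
img = synthetic_image(pts, np.eye(3), np.zeros(3), f=100.0, cx=32, cy=32,
                      width=64, height=64)
```

In practice the rendered value per pixel would be the TLS intensity or model colour rather than depth; the projection and occlusion handling are the same.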

  17. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields today. In order to improve measurement precision, the internal parameters of the camera must be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera were calibrated with an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is markedly improved compared with the conventional linear algorithm, the Tsai general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm satisfies the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  18. Development of a handheld fluorescence imaging camera for intraoperative sentinel lymph node mapping

    NASA Astrophysics Data System (ADS)

    Szyc, Łukasz; Bonifer, Stefanie; Walter, Alfred; Jagemann, Uwe; Grosenick, Dirk; Macdonald, Rainer

    2015-05-01

    We present a compact fluorescence imaging system developed for real-time sentinel lymph node mapping. The device uses two near-infrared wavelengths to record fluorescence and anatomical images with a single charge-coupled device camera. Experiments on lymph node and tissue phantoms confirmed that the amount of dye in superficial lymph nodes can be better estimated due to the absorption correction procedure integrated in our device. Because of the camera head's small size and low weight, all accessible regions of tissue can be reached without the need for any adjustments.

  19. Automatic control of a robot camera for broadcasting based on cameramen's techniques and subjective evaluation and analysis of reproduced images.

    PubMed

    Kato, D; Katsuura, T; Koyama, H

    2000-03-01

    With the goal of achieving an intelligent robot camera system that can shoot dynamic images automatically through humanlike, natural camera work, we analyzed how images were shot, subjectively evaluated the reproduced images, and examined the effects of camera work, using the camera control technique as a parameter. It was found that: (1) a high evaluation is obtained when human-based data are used for the position-adjusting velocity curve of the target; (2) evaluation scores are relatively high for images taken with the feedback-feedforward camera control method for target movement in one direction; (3) keeping the target within the image area using the control method that imitates human camera handling becomes increasingly difficult as the target changes both direction and velocity and becomes bigger and faster; and (4) the mechanical feedback method can cope with rapid changes in the target's direction and velocity, constantly keeping the target within the image area, though the viewer finds the image rather mechanical as opposed to humanlike.

  20. Online gamma-camera imaging of 103Pd seeds (OGIPS) for permanent breast seed implantation

    NASA Astrophysics Data System (ADS)

    Ravi, Ananth; Caldwell, Curtis B.; Keller, Brian M.; Reznik, Alla; Pignol, Jean-Philippe

    2007-09-01

    Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early-stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure, making it difficult to perform a real-time correction of the implantation if required. The objective was to determine whether a customized gamma camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT-based gamma camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq 103Pd brachytherapy seeds. A hexagonal parallel-hole collimator with a hole length of 38 mm, a hole diameter of 1.2 mm, and 0.2 mm septa was modeled. The design of the gamma camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma camera can therefore potentially be used as an intra-procedural image guidance system for quality assurance during permanent breast seed implantation.

  1. A Fast Visible Camera Divertor-Imaging Diagnostic on DIII-D

    SciTech Connect

    Roquemore, A; Maingi, R; Lasnier, C; Nishino, N; Evans, T; Fenstermacher, M; Nagy, A

    2007-06-19

    In recent campaigns, the Photron Ultima SE fast framing camera has proven to be a powerful diagnostic for imaging divertor phenomena on the National Spherical Torus Experiment (NSTX). Active areas of NSTX divertor research addressed with the fast camera include identification of the types of Edge Localized Modes (ELMs) [1], dust migration, impurity behavior, and a number of phenomena related to turbulence. To compare such edge and divertor phenomena in low and high aspect ratio plasmas, a multi-institutional collaboration was developed for fast visible imaging on NSTX and DIII-D. More specifically, the collaboration was proposed to compare the NSTX small Type V ELM regime [2] and the residual ELMs observed during Type I ELM suppression with external magnetic perturbations on DIII-D [3]. As part of the collaboration effort, the Photron camera was recently installed on DIII-D with a tangential view similar to the view implemented on NSTX, enabling a direct comparison between the two machines. The rapid implementation was facilitated by utilizing the existing optics that coupled the visible spectral output from the divertor vacuum ultraviolet (UVTV) system, which has a view similar to that developed for the divertor tangential TV camera [4]. A remote-controlled filter wheel was implemented, as was the radiation shield required for the DIII-D installation. The installation and initial operation of the camera are described in this paper, and the first images from the DIII-D divertor are presented.

  2. The Lunar Student Imaging Project (LSIP): Bringing the Excitement of Lunar Exploration to Students Using LRO Mission Data

    NASA Astrophysics Data System (ADS)

    Taylor, W. L.; Roberts, D.; Burnham, R.; Robinson, M. S.

    2009-12-01

    In June 2009, NASA launched the Lunar Reconnaissance Orbiter (LRO), the first mission in NASA's Vision for Space Exploration, a plan to return to the Moon and then travel to Mars and beyond. LRO is equipped with seven instruments, including the Lunar Reconnaissance Orbiter Camera (LROC), a system of two narrow-angle cameras and one wide-angle camera controlled by scientists in the School of Earth and Space Exploration at Arizona State University. The orbiter will have a one-year primary mission in a 50 km polar orbit. Measurements from LROC will uncover much-needed information about potential landing sites and will help generate a meter-scale map of the lunar surface. With support from NASA Goddard Space Flight Center, the LROC Science Operations Center and the ASU Mars Education Program have partnered to develop an inquiry-based student program, the Lunar Student Imaging Project (LSIP). Based on the nationally recognized Mars Student Imaging Project (MSIP), LSIP uses cutting-edge NASA content and remote sensing data to involve students in authentic lunar exploration. This program offers students (grades 5-14) immersive experiences in which they can: 1) target images of the lunar surface, 2) interact with NASA planetary scientists, mission engineers, and educators, and 3) gain access to NASA curricula and materials developed to enhance STEM learning. Using a project-based learning model, students drive their own research and learn firsthand what it is like to do real planetary science. The LSIP curriculum contains a resource manual and program guide (including lunar feature identification charts, classroom posters, and a lunar exploration timeline) and a series of activities covering image analysis, relative age dating, and planetary comparisons. LSIP is based upon the well-tested MSIP model and encompasses both onsite and distance learning components.

  3. Acquisition method improvement for Bossa Nova Technologies' full Stokes, passive polarization imaging camera SALSA

    NASA Astrophysics Data System (ADS)

    El Ketara, M.; Vedel, M.; Breugnot, S.

    2016-05-01

    For some applications, fast polarization acquisition is essential, for instance when the observed scene is moving or changing quickly. In this paper, we present a new acquisition method for Bossa Nova Technologies' full Stokes passive polarization imaging camera, the SALSA. This polarization imaging camera is based on a Division of Time polarimetry architecture, which preserves the full resolution of the observed image but limits the acquisition speed. The goal of the new acquisition method is to overcome this limitation of the Division of Time technique and achieve high-speed polarization imaging while maintaining image resolution. The efficiency of the new method is demonstrated through several experiments.

  4. Camera Sensor Arrangement for Crop/Weed Detection Accuracy in Agronomic Images

    PubMed Central

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-01-01

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects. PMID:23549361

  5. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  6. Photothermal camera port accessory for microscopic thermal diffusivity imaging

    NASA Astrophysics Data System (ADS)

    Escola, Facundo Zaldívar; Kunik, Darío; Mingolo, Nelly; Martínez, Oscar Eduardo

    2016-06-01

    The design of a scanning photothermal accessory is presented, which can be attached to the camera port of commercial microscopes to measure thermal diffusivity maps with micrometer resolution. The device is based on the thermal expansion recovery technique, which measures the defocusing of a probe beam due to the curvature induced by the local heat delivered by a focused pump beam. The beam delivery and collecting optics are built using optical fiber technology, resulting in a robust optical system that provides collinear pump and probe beams without any alignment adjustment necessary. The quasiconfocal configuration for the signal collection using the same optical fiber sets very restrictive conditions on the positioning and alignment of the optical components of the scanning unit, and a detailed discussion of the design equations is presented. The alignment procedure is carefully described, resulting in a system so robust and stable that no further alignment is necessary for the day-to-day use, becoming a tool that can be used for routine quality control, operated by a trained technician.

  7. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In on-board photographing processes of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of the camera is reduced by the VMTF, which degrades image quality. To avoid this vibration-induced degradation of the total MTF, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiments show the M2052 manganese-copper alloy suppresses image motion well at frequencies below 125 Hz, the vibration range of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
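The record's VMTF model is not reproduced here, but the underlying mechanism, vibration smear multiplying down the system MTF, can be illustrated with the textbook linear-motion-blur MTF. A sketch under that simplifying assumption (the paper's vibration model may differ; all numbers are illustrative):

```python
import math

def motion_smear_mtf(f_cyc_per_px, smear_px):
    """MTF of uniform linear image motion: |sin(pi f d) / (pi f d)|."""
    x = math.pi * f_cyc_per_px * smear_px
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

# The system MTF is the product of its component MTFs, so any residual
# vibration smear multiplies down whatever the optics deliver:
optics_mtf = 0.5                        # illustrative optics MTF at this freq
vib_mtf = motion_smear_mtf(0.25, 2.0)   # 2-pixel smear at 0.25 cycles/pixel
total = optics_mtf * vib_mtf
```

This is why isolating the vibration (driving the smear toward zero pixels, so the vibration MTF tends to 1) raises the total MTF, as reported in the record.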

  8. GNSS Carrier Phase Integer Ambiguity Resolution with Camera and Satellite images

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick

    2015-04-01

    Ambiguity resolution is the key to high-precision position and attitude determination with GNSS. However, ambiguity resolution for kinematic receivers becomes challenging in environments with substantial multipath, limited satellite availability, and erroneous cycle slip corrections. There is a need for other sensors, e.g. inertial sensors, that allow an independent prediction of the position. The change of the predicted position over time can then be used for cycle slip detection and correction. In this paper, we provide a method to improve initial ambiguity resolution for RTK and PPP with vision-based position information. Camera images are correlated with geo-referenced aerial/satellite images to obtain independent absolute position information. This absolute position information is then coupled with the GNSS and INS measurements in an extended Kalman filter to estimate the position, velocity, acceleration, attitude, angular rates, code multipath, and biases of the accelerometers and gyroscopes. The camera and satellite images are matched based on characteristic image points (e.g. corners of street markers). We extract these characteristic image points from the camera images by performing the following steps: an inverse mapping (homogeneous projection) is applied to transform the camera images from the driver's perspective to a bird's-eye view. Subsequently, we detect the street markers by performing (a) a color transformation and reduction with adaptive brightness correction to focus on relevant features, (b) a morphological operation to enhance structure recognition, (c) edge and corner detection to extract feature points, and (d) point matching of the corner points with a template to recognize the street markers. We verified the proposed method with two low-cost u-blox LEA 6T GPS receivers, the MPU9150 from Invensense, the ASCOS RTK corrections, and a PointGrey camera. The results show very precise and seamless position and attitude
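The inverse mapping to a bird's-eye view described in this record is a planar homography applied to pixel coordinates. A minimal sketch (the matrix values below are purely illustrative, not calibration data from the paper):

```python
import numpy as np

def to_bird_view(H, pixels):
    """Inverse perspective mapping: apply homography H to image pixels.

    H maps driver-view pixel coordinates (u, v) to ground-plane (bird-view)
    coordinates, assuming the road is planar. pixels: (N, 2) array.
    """
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # dehomogenize

# An identity-like homography with a small perspective term (illustrative):
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.001, 1.0]])
ground = to_bird_view(H, np.array([[100.0, 200.0]]))
```

In a real pipeline H would be derived from the camera's interior orientation and its pose relative to the road plane; the marker detection steps (a)-(d) then run on the warped image.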

  9. Real-time imaging of methane gas leaks using a single-pixel camera.

    PubMed

    Gibson, Graham M; Sun, Baoqing; Edgar, Matthew P; Phillips, David B; Hempler, Nils; Maker, Gareth T; Malcolm, Graeme P A; Padgett, Miles J

    2017-02-20

    We demonstrate a camera which can image methane gas at video rates using only a single-pixel detector and structured illumination. The light source is an infrared laser diode operating at 1.651 μm, tuned to an absorption line of methane gas. The light is structured using an addressable micromirror array to pattern the laser output with a sequence of Hadamard masks. The resulting backscattered light is recorded with a single-pixel InGaAs detector, which provides a measure of the correlation between the projected patterns and the gas distribution in the scene. Knowledge of this correlation and of the patterns allows an image of the gas in the scene to be reconstructed. For the application of locating gas leaks the frame rate of the camera is of primary importance; in this case it is inversely proportional to the square of the linear resolution. Here we demonstrate gas imaging at ~25 fps while using 256 mask patterns (corresponding to an image resolution of 16×16). To aid the task of locating the source of the gas emission, we overlay an upsampled and smoothed version of the low-resolution gas image onto a high-resolution color image of the scene, recorded using a standard CMOS camera. With an illumination of only 5 mW across the field of view, we demonstrate imaging of a methane gas leak of ~0.2 litres/minute from a distance of ~1 metre.
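The measure-and-reconstruct scheme this record describes can be simulated in a few lines: each Hadamard mask yields one detector reading, and because the Hadamard matrix is orthogonal the image is recovered by the inverse transform. A sketch (in hardware the ±1 masks are split into pairs of non-negative micromirror patterns; that detail is omitted here):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_image(scene, n):
    """Simulate single-pixel imaging with a full set of n Hadamard masks.

    scene: flattened (n,) image of the gas distribution. Each mask row
    patterns the illumination; the detector records one scalar per mask.
    Reconstruction inverts the orthogonal transform: H^T H = n I.
    """
    H = hadamard(n)
    measurements = H @ scene           # one detector reading per pattern
    return (H.T @ measurements) / n    # recovered image

rng = np.random.default_rng(0)
scene = rng.random(256)                # a 16 x 16 scene, as in the record
recovered = single_pixel_image(scene, 256)
```

At a fixed pattern-display rate, the number of masks per frame scales with the pixel count, i.e. with the square of the linear resolution, which is the frame-rate trade-off the record points out.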

  10. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance, low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrates that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified with a low-cost programmable device. With additional components, the real-time camera system achieves 1270 × 792 resolution and demonstrates each DSP function.

  11. The systematic error in digital image correlation induced by self-heating of a digital camera

    NASA Astrophysics Data System (ADS)

    Ma, Shaopeng; Pang, Jiazhi; Ma, Qinwei

    2012-02-01

    The systematic strain measurement error in digital image correlation (DIC) induced by the self-heating of digital CCD and CMOS cameras was studied extensively; an experimental and data analysis procedure is proposed, and two parameters are suggested to examine and evaluate the effect. Six digital cameras of four different types were tested to quantify the strain errors. Each camera needed between 1 and 2 h to reach a stable heat balance, with a measured temperature increase of around 10 °C. During this temperature increase, the virtual image expansion causes a 70-230 με strain error in the DIC measurement, which is large enough to be noticeable in most DIC experiments and hence should be eliminated.
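The effect the record quantifies is easy to illustrate: a uniform virtual expansion of the image reads out in DIC as a fictitious tensile strain on a perfectly static specimen. A minimal sketch (the pixel numbers are chosen only to land inside the reported 70-230 με band, not taken from the paper):

```python
def virtual_expansion_strain(ref_sep_px, cur_sep_px):
    """Fictitious strain caused by self-heating image expansion.

    ref_sep_px / cur_sep_px: pixel separation of the same two image features
    before and after camera warm-up. A static specimen should read zero;
    any apparent change is the systematic error to be removed.
    """
    return (cur_sep_px - ref_sep_px) / ref_sep_px

# A 1000-pixel separation growing by 0.15 px reads as ~150 microstrain:
eps = virtual_expansion_strain(1000.0, 1000.15)
```

Sub-pixel drifts of this size are invisible to the eye but well within DIC's measurement resolution, which is why a warm-up period or a correction procedure is needed.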

  12. Low Frequency Flats for Imaging Cameras on the Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Kossakowski, Diana; Avila, Roberto J.; Borncamp, David; Grogin, Norman A.

    2017-01-01

    We created a revamped Low Frequency Flat (L-Flat) algorithm for the Hubble Space Telescope (HST) and all of its imaging cameras. The current program that makes these calibration files does not compile on modern computer systems and it requires translation to Python. We took the opportunity to explore various methods that reduce the scatter of photometric observations using chi-squared optimizers along with Markov Chain Monte Carlo (MCMC). We created simulations to validate the algorithms and then worked with the UV photometry of the globular cluster NGC6681 to update the calibration files for the Advanced Camera for Surveys (ACS) and Solar Blind Channel (SBC). The new software was made for general usage and therefore can be applied to any of the current imaging cameras on HST.

  13. Gamma camera calibration and validation for quantitative SPECT imaging with (177)Lu.

    PubMed

    D'Arienzo, M; Cazzato, M; Cozzella, M L; Cox, M; D'Andrea, M; Fazio, A; Fenwick, A; Iaccarino, G; Johansson, L; Strigari, L; Ungania, S; De Felice, P

    2016-06-01

    In recent years, (177)Lu has received considerable attention from the clinical nuclear medicine community thanks to its wide range of applications in molecular radiotherapy, especially in peptide-receptor radionuclide therapy (PRRT). In addition to short-range beta particles, (177)Lu emits low-energy gamma radiation at 113 keV and 208 keV, which allows quantitative gamma camera imaging. Although quantitative imaging in molecular radiotherapy has been proven to be a key instrument for the assessment of therapeutic response, at present no generally accepted clinical quantitative imaging protocol exists, and absolute quantification studies are usually based on individual initiatives. The aim of this work was to develop and evaluate an approach to gamma camera calibration for absolute quantification in tomographic imaging with (177)Lu. We assessed the gamma camera calibration factors for Philips IRIX and Philips AXIS gamma camera systems using various reference geometries, both in air and in water. Images were corrected for the major effects that contribute to image degradation, i.e. attenuation, scatter, and dead-time. We validated our method in a non-reference geometry using an anthropomorphic torso phantom with the liver cavity uniformly filled with (177)LuCl3. Our results showed that the calibration factors depend on the particular reference condition. In general, acquisitions performed with the IRIX gamma camera provided good results at 208 keV, with agreement within 5% for all geometries. The use of a 16 mL Jaszczak hollow sphere in water provided calibration factors capable of recovering the activity in the anthropomorphic geometry within 1% for the 208 keV peak, for both gamma cameras. The point source provided the poorest results, most likely because scatter and attenuation corrections are not incorporated in the calibration factor. However, for both gamma cameras all geometries provided calibration factors capable of recovering the activity in

  14. Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data

    NASA Astrophysics Data System (ADS)

    Hung, C. H.; Chang, W. C.; Chen, L. C.

    2016-06-01

    With the popularity of geospatial applications, database updating is becoming important as the environment changes over time. Imagery provides a low-cost and efficient way to update such databases. Three-dimensional objects can be measured by space intersection using conjugate image points and camera orientation parameters. However, precise orientation parameters for light amateur cameras are not always available, because precision GPS and IMU units are costly and heavy. To automate data updating, correspondences between object vector data and images may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image line features, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence between features of an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling, performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that 2.1 pixel accuracy can be reached, which is equivalent to 0.12 m in object space.

  15. Retrieving Atmospheric Dust Loading on Mars Using Engineering Cameras and MSL's Mars Hand Lens Imager (MAHLI)

    NASA Astrophysics Data System (ADS)

    Wolfe, C. A.; Lemmon, M. T.

    2015-12-01

    Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar-powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera equipped with a neutral density filter. Direct images of the Sun not only provide the ability to measure extinction by dust and ice in the atmosphere, but also provide a variety of constraints on the Martian dust and water cycles. These observations have been used to characterize dust storms, to provide ground-truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability for determining atmospheric opacity from sky images taken with cameras that were not designed for solar imaging, such as the engineering cameras onboard Opportunity and the Mars Hand Lens Imager (MAHLI) on Curiosity. Our investigation focuses primarily on the accuracy of a method that determines optical depth by applying scattering models to the ratio of sky radiance measurements at different elevation angles but the same scattering angle. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and the validation of orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on comparison with direct extinction measurements from archival Navcam, Hazcam, and MAHLI camera data.

  16. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie at the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as the light source and detector, respectively, range-gated laser imaging can realize space-slice imaging while restraining atmospheric backscatter, and thus detect targets effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Only at the beginning of this century, as the hardware technology matured, did the technology develop rapidly in fields such as night vision, underwater imaging, biomedical imaging, and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image and, in turn, provide the distance information corresponding to that slice. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, covering both the internal camera parameters and the external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After reviewing camera calibration techniques comprehensively, a classic calibration method based on line is

  17. A multiple-plate, multiple-pinhole camera for X-ray gamma-ray imaging

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.

    1971-01-01

    Plates with identical patterns of precisely aligned pinholes constitute a lens system which, when rotated about the optical axis, produces a continuous high-resolution image of a small low-energy X-ray or gamma-ray source. The camera has applications in radiation treatment and nuclear medicine.

  18. Automatic orthorectification and mosaicking of oblique images from a zoom lens aerial camera

    NASA Astrophysics Data System (ADS)

    Zhou, Qianfei; Liu, Jinghong

    2015-01-01

    To address image distortion caused by the oblique photography of a zoom lens aerial camera, a fast and accurate automatic image orthorectification and mosaicking method for a ground control point (GCP)-free environment was proposed. With the availability of an integrated global positioning system (GPS) and inertial measurement unit, the camera's exterior orientation parameters (EOPs) were solved through direct georeferencing. The one-parameter division model was adopted to estimate the distortion coefficient and the distortion center coordinates of the zoom lens in order to correct the lens distortion. Using the camera's EOPs and the lens distortion parameters, the oblique aerial images specified in the camera frame were geo-orthorectified into the mapping frame and then mosaicked together based on the mapping coordinates to produce a larger-field, high-resolution georeferenced image. Experimental results showed that the orthorectification error was less than 1.80 m at a 1100 m flight height above ground level, when compared with 14 presurveyed ground checkpoints measured by differential GPS. The mosaic error was about 1.57 m compared with 18 checkpoints. The accuracy was considered sufficient for urgent-response applications such as military reconnaissance and disaster monitoring where GCPs are not available.
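    The one-parameter division model mentioned above can be sketched as follows; the coefficient and distortion-centre values are illustrative, not those estimated in the paper:

```python
import numpy as np

def undistort_division(pts, center, k):
    """One-parameter division model: map distorted image points to undistorted
    ones. pts: (N, 2) pixel coordinates; center: distortion centre; k: the
    single distortion coefficient (negative k corrects barrel distortion)."""
    d = pts - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)   # squared radius from centre
    return center + d / (1.0 + k * r2)

# Hypothetical point and parameters:
pts = np.array([[1200.0, 900.0]])
center = np.array([960.0, 540.0])
corrected = undistort_division(pts, center, k=-1e-7)
```

    With k = 0 the mapping is the identity; a small negative k pushes off-centre points outward, undoing the inward pull of barrel distortion.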

  19. Geologic Analysis of the Surface Thermal Emission Images Taken by the VMC Camera, Venus Express

    NASA Astrophysics Data System (ADS)

    Basilevsky, A. T.; Shalygin, E. V.; Titov, D. V.; Markiewicz, W. J.; Scholten, F.; Roatsch, Th.; Fiethe, B.; Osterloh, B.; Michalik, H.; Kreslavsky, M. A.; Moroz, L. V.

    2010-03-01

    Analysis of Venus Monitoring Camera 1-µm images and surface emission modeling showed that the apparent emissivity at Chimon-mana tessera and Tuulikki volcano is higher than that of the adjacent plains; Maat Mons did not show any signature of ongoing volcanism.

  20. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  1. Future Planetary Surface Imager Development by the Beagle 2 Stereo Camera System Team

    NASA Astrophysics Data System (ADS)

    Griffiths, A. D.; Coates, A. J.; Josset, J.-L.; Paar, G.

    2004-03-01

    The Stereo Camera System provided Beagle 2 with wide-angle multi-spectral stereo imaging (IFOV=0.043°). The SCS team plans to build on this design heritage to provide improved stereo capabilities to the Pasteur payload of the Aurora ExoMars rover.

  2. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that when only two images are used for three-dimensional triangulation, the plane coordinates achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. Using all nine images, the precisions are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which is better than most combinations of two or more images; however, triangulation with fewer, well-selected images can produce better precision than using all the images.
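    The reported trend, with height precision improving as the convergence angle grows, matches the standard stereo rule of thumb. A hedged sketch of that rule (not the paper's rigorous model; the numbers are illustrative):

```python
import math

def height_precision(plane_sigma_m: float, convergence_deg: float) -> float:
    """Rough stereo rule of thumb: height uncertainty scales like the ground
    (plane) uncertainty divided by the tangent of the convergence angle, so
    narrow convergence angles give weak height geometry."""
    return plane_sigma_m / math.tan(math.radians(convergence_deg))

# A few degrees of convergence is far weaker than ~50 degrees:
print(round(height_precision(0.5, 5.0), 1))   # ~5.7 m
print(round(height_precision(0.5, 50.0), 1))  # ~0.4 m
```

    The order of magnitude agrees with the paper's observed 0.71 m to 8.16 m height-precision range for two-image triangulation.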

  3. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system of the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
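    The 432-million-pixel figure can be checked directly, assuming the standard 9 × 9 inch aerial film format (the frame size is an assumption here, not stated in the abstract):

```python
# Check the abstract's pixel count: a 9 x 9 inch frame scanned at an
# 11 micrometre spot size.
frame_mm = 9 * 25.4              # 228.6 mm per side
pixels_per_side = frame_mm / 0.011
total = pixels_per_side ** 2
print(round(total / 1e6))        # -> 432 (million pixels)
```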

  4. Correction of spatially varying image and video motion blur using a hybrid camera.

    PubMed

    Tai, Yu-Wing; Du, Hao; Brown, Michael S; Lin, Stephen

    2010-06-01

    We describe a novel approach to reduce spatially varying motion blur in video and images using a hybrid camera system. A hybrid camera is a standard video camera that is coupled with an auxiliary low-resolution camera sharing the same optical path but capturing at a significantly higher frame rate. The auxiliary video is temporally sharper but at a lower resolution, while the lower frame-rate video has higher spatial resolution but is susceptible to motion blur. Our deblurring approach uses the data from these two video streams to reduce spatially varying motion blur in the high-resolution camera with a technique that combines both deconvolution and super-resolution. Our algorithm also incorporates a refinement of the spatially varying blur kernels to further improve results. Our approach can reduce motion blur from the high-resolution video as well as estimate new high-resolution frames at a higher frame rate. Experimental results on a variety of inputs demonstrate notable improvement over current state-of-the-art methods in image/video deblurring.

  5. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Astrophysics Data System (ADS)

    Friedman, Gary L.

    1994-02-01

    The increasing sophistication of computers has made digital manipulation of photographic images, as well as of other digitally recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continue to increase, the well-known adage that 'the photograph doesn't lie' will become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of an 'electronic original' is no longer meaningful.
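    The authentication idea can be illustrated with a toy sketch. Real digital signatures use asymmetric keys so anyone can verify without the signing secret; the keyed hash below is a simplified stand-in that still shows the tamper-detection principle:

```python
import hmac, hashlib

def sign(image_bytes: bytes, key: bytes) -> str:
    """Toy stand-in for a digital signature: a keyed hash over the image data.
    (A real scheme would sign with a private key held inside the camera.)"""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(image_bytes, key), tag)

key = b"camera-secret"                       # hypothetical in-camera key
original = b"\x89PNG...raw sensor data..."   # placeholder image bytes
tag = sign(original, key)
assert verify(original, key, tag)                      # untouched image passes
assert not verify(original + b"edited", key, tag)      # any alteration is caught
```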

  6. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

    The increasing sophistication of computers has made digital manipulation of photographic images, as well as of other digitally recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continue to increase, the well-known adage that 'the photograph doesn't lie' will become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of an 'electronic original' is no longer meaningful.

  7. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products along with ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote color-imaging monitoring system were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of tomatoes on the tree in a greenhouse was remotely monitored during maturation using digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the calibrated color images, along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
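    A common way to implement the chart-based color calibration step described above is a least-squares fit of an affine transform from camera RGB to the chart's reference RGB; this sketch assumes that approach, and the patch values are hypothetical:

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine correction (3x3 matrix plus offset) mapping camera RGB to
    chart reference RGB by least squares. measured, reference: (N, 3) arrays,
    one row per chart patch."""
    A = np.hstack([measured, np.ones((len(measured), 1))])  # append offset term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)       # (4, 3) solution
    return M

def apply_correction(rgb, M):
    """Apply the fitted correction to (N, 3) RGB values."""
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

# Hypothetical patch readings under field illumination, and stand-in
# "true" chart colours related to them by a gain and offset:
measured = np.array([[60., 40., 30.], [200., 180., 160.], [90., 120., 70.],
                     [30., 60., 140.], [150., 90., 60.]])
reference = measured * 1.1 + 5.0
M = fit_color_correction(measured, reference)
```

    Once M is fitted against the chart in each frame, the same correction can be applied to the fruit pixels, which is how a chart placed beside the target cancels changing sunlight.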

  8. The iQID camera: An ionizing-radiation quantum imaging detector.

    PubMed

    Miller, Brian W; Gregory, Stephanie J; Fuller, Erin S; Barrett, Harrison H; Barber, H Bradford; Furenlid, Lars R

    2014-12-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.
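    The per-event position and energy estimation step might be sketched as follows. This is a single-cluster toy version: the real system segments multiple clusters per frame and runs its image analysis on graphics hardware:

```python
import numpy as np

def detect_event(frame, threshold):
    """Threshold a camera frame, then take the intensity-weighted centroid of
    the bright pixels (sub-pixel position) and their summed intensity (a proxy
    for deposited energy). Returns None if nothing exceeds the threshold."""
    mask = frame > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    w = frame[ys, xs].astype(float)
    cy = (ys * w).sum() / w.sum()   # intensity-weighted row centroid
    cx = (xs * w).sum() / w.sum()   # intensity-weighted column centroid
    return (float(cy), float(cx), float(w.sum()))

# Two equally bright pixels: the centroid lands midway between them.
frame = np.zeros((8, 8))
frame[3, 4] = 10.0
frame[3, 5] = 10.0
print(detect_event(frame, 1.0))  # -> (3.0, 4.5, 20.0)
```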

  9. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    DOE PAGES

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...

    2014-06-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector’s response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.

  10. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    SciTech Connect

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, Bradford H.; Furenlid, Lars R.

    2014-06-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector’s response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.

  11. I-123 HIPDM brain imaging with a rotating gamma camera and slant-hole collimator

    SciTech Connect

    Polak, J.F.; Holman, B.L.; Moretti, J.L.; Eisner, R.L.; Lister-James, J.; English, R.J.

    1984-04-01

    The performance of a slant-hole collimator was compared with that of a standard straight-bore, low-energy collimator for tomographic imaging of I-123-iodinated amine brain agent. Improved in-slice resolution was due to the greater proximity between collimator and the subjects' heads. It was concluded that high quality tomographic images of the brain can be obtained from rotating cameras equipped with slant-hole collimators.

  12. The iQID camera: An ionizing-radiation quantum imaging detector

    PubMed Central

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector’s response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications. PMID:26166921

  13. The iQID camera: An ionizing-radiation quantum imaging detector

    NASA Astrophysics Data System (ADS)

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R.

    2014-12-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, iQID has energy resolution that is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.

  14. Real object-based integral imaging system using a depth camera and a polygon model

    NASA Astrophysics Data System (ADS)

    Jeong, Ji-Seong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Lim, Byung-Muk; Jang, Ho-Wook; Kim, Nam; Yoo, Kwan-Hee

    2017-01-01

    An integral imaging system using a polygon model for a real object is proposed. After depth and color data of the real object are acquired by a depth camera, the grid of the polygon model is converted from the initially reconstructed point-cloud model. The elemental image array is generated from the polygon model and directly reconstructed. The polygon model eliminates the failed picking areas between the points of a point-cloud model, so the quality of the reconstructed 3-D image is significantly improved. The theory is verified experimentally, and higher-quality images are obtained.

  15. Real-time full-field photoacoustic imaging using an ultrasonic camera

    NASA Astrophysics Data System (ADS)

    Balogun, Oluwaseyi; Regez, Brad; Zhang, Hao F.; Krishnaswamy, Sridhar

    2010-03-01

    A photoacoustic imaging system that incorporates a commercial ultrasonic camera for real-time imaging of two-dimensional (2-D) projection planes in tissue at video rate (30 Hz) is presented. The system uses a Q-switched frequency-doubled Nd:YAG pulsed laser for photoacoustic generation. The ultrasonic camera consists of a 2-D 12×12 mm CCD chip with 120×120 piezoelectric sensing elements used for detecting the photoacoustic pressure distribution radiated from the target. An ultrasonic lens system is placed in front of the chip to collect the incoming photoacoustic waves, providing the ability for focusing and imaging at different depths. Compared with other existing photoacoustic imaging techniques, the camera-based system is attractive because it is relatively inexpensive and compact, and it can be tailored for real-time clinical imaging applications. Experimental results detailing the real-time photoacoustic imaging of rubber strings and buried absorbing targets in chicken breast tissue are presented, and the spatial resolution of the system is quantified.

  16. Early Detection of Breast Cancer by Using Handycam Camera Manipulation as Thermal Camera Imaging with Images Processing Method

    NASA Astrophysics Data System (ADS)

    Riantana, R.; Arie, B.; Adam, M.; Aditya, R.; Nuryani; Yahya, I.

    2017-02-01

    An important factor in detecting breast cancer is change in breast temperature: abnormalities in breast tissue are indicated by a local rise in temperature. In night-vision mode, a Handycam's sensitivity to external infrared radiation lets the radiation penetrate the skin better and makes the infrared image clearer. The program converts camcorder images into night-vision thermal-style images by decomposing RGB into a grayscale matrix structure. The matrix is rearranged into a new matrix of double data type so that it can be processed into a colored contour chart that differentiates the distribution of body temperature. The program also includes a contrast-scale setting for the processed image so that the colors can be adjusted as desired, and an inverse contrast adjustment that usefully reverses the color scale. An improfile function retrieves the intensity values of pixels along a chosen line and shows the intensity distribution as a graph of intensity versus pixel coordinate.
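    The RGB-to-grayscale-to-contour pipeline described above can be sketched as follows; the luminance weights are a common convention, assumed here rather than taken from the paper:

```python
import numpy as np

def to_pseudothermal(rgb, levels=5):
    """Sketch of the pipeline: RGB frame -> grayscale matrix -> quantised
    'temperature' bands for a contour-style display. Band 0 is the coolest,
    band levels-1 the warmest."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # common luminance weights
    lo, hi = gray.min(), gray.max()
    bands = np.floor((gray - lo) / (hi - lo + 1e-9) * levels).astype(int)
    return np.clip(bands, 0, levels - 1)

# Hypothetical 4x4 RGB frame:
rgb = np.random.default_rng(0).uniform(0, 255, (4, 4, 3))
bands = to_pseudothermal(rgb)
```

    Mapping each band to a colour (e.g. blue through red) then gives the contour chart; inverting the band order implements the inverse-scale feature.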

  17. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images

    NASA Astrophysics Data System (ADS)

    Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao

    2016-11-01

    Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
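    The degree-bounded spanning idea behind the SCN can be illustrated with a greedy Kruskal-style sketch. Note this toy version is flat and omits the paper's hierarchical construction and its guarantee that each image sits in at least a 3-view configuration:

```python
def bounded_max_spanning_tree(n, edges, max_degree):
    """Keep the strongest overlap edges while capping each image's degree.
    edges: list of (weight, u, v); returns the selected (u, v) pairs."""
    parent = list(range(n))          # union-find for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    degree = [0] * n
    tree = []
    for w, u, v in sorted(edges, reverse=True):  # strongest edges first
        if degree[u] >= max_degree or degree[v] >= max_degree:
            continue
        ru, rv = find(u), find(v)
        if ru != rv:                 # skip edges that would close a cycle
            parent[ru] = rv
            degree[u] += 1
            degree[v] += 1
            tree.append((u, v))
    return tree

# Four images; weights stand in for tie-point overlap counts (made up):
edges = [(120, 0, 1), (90, 1, 2), (80, 0, 2), (60, 2, 3), (30, 1, 3)]
print(bounded_max_spanning_tree(4, edges, max_degree=2))  # -> [(0, 1), (1, 2), (2, 3)]
```

    Matching tie points only along the kept edges is what cuts the SfM cost on highly overlapping UAV sets.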

  18. Engineering performance of IRIS2 infrared imaging camera and spectrograph

    NASA Astrophysics Data System (ADS)

    Churilov, Vladimir; Dawson, John; Smith, Greg A.; Waller, Lew; Whittard, John D.; Haynes, Roger; Lankshear, Allan; Ryder, Stuart D.; Tinney, Chris G.

    2004-09-01

    IRIS2, the infrared imager and spectrograph for the Cassegrain focus of the Anglo Australian Telescope, has been in service since October 2001. IRIS2 incorporated many novel features, including multiple cryogenic multislit masks, a dual chambered vacuum vessel (the smaller chamber used to reduce thermal cycle time required to change sets of multislit masks), encoded cryogenic wheel drives with controlled backlash, a deflection compensating structure, and use of teflon impregnated hard anodizing for gear lubrication at low temperatures. Other noteworthy features were: swaged foil thermal link terminations, the pupil imager, the detector focus mechanism, phased getter cycling to prevent detector contamination, and a flow-through LN2 precooling system. The instrument control electronics was designed to allow accurate positioning of the internal mechanisms with minimal generation of heat. The detector controller was based on the AAO2 CCD controller, adapted for use on the HAWAII1 detector (1024 x 1024 pixels) and is achieving low noise and high performance. We describe features of the instrument design, the problems encountered and the development work required to bring them into operation, and their performance in service.

  19. Picosecond time-resolved imaging using SPAD cameras

    NASA Astrophysics Data System (ADS)

    Gariepy, Genevieve; Leach, Jonathan; Warburton, Ryan; Chan, Susan; Henderson, Robert; Faccio, Daniele

    2016-10-01

    The recent development of 2D arrays of single-photon avalanche diodes (SPADs) has driven the development of applications based on the ability to capture light in motion. Such arrays are typically composed of 32×32 SPAD detectors, each able to detect single photons and measure their time of arrival with a resolution of about 100 ps. Thanks to the single-photon sensitivity and the high temporal resolution of these detectors, it is now possible to image light as it travels on a centimetre scale. This opens the door to the direct observation and study of dynamics evolving over picosecond and nanosecond timescales, such as laser propagation in air, laser-induced plasma, and laser propagation in optical fibres. Another interesting application enabled by the ability to image light in motion is the detection of objects hidden from view, based on recording scattered waves originating from objects hidden behind an obstacle. Similarly to LIDAR systems, the temporal information acquired at every pixel of a SPAD array, combined with the spatial information it provides, makes it possible to pinpoint the position of an object located outside the line of sight of the detector. Non-line-of-sight tracking can be a valuable asset in many scenarios, including search and rescue missions and safer autonomous driving.
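    The quoted ~100 ps timing resolution translates directly into path-length resolution, which is why centimetre-scale light-in-flight imaging is possible:

```python
# Path length per SPAD time stamp: 100 ps bins give ~3 cm of light travel.
C_M_PER_NS = 0.299792458  # speed of light, metres per nanosecond

def path_length_m(arrival_ns: float) -> float:
    """Total distance travelled by a detected photon since the laser pulse."""
    return C_M_PER_NS * arrival_ns

print(round(path_length_m(0.1) * 100, 1))  # 0.1 ns (100 ps) -> 3.0 cm
```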

  20. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. A Virtual Camera software, that was developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  1. MONICA: a compact, portable dual gamma camera system for mouse whole-body imaging

    SciTech Connect

    Xi, Wenze; Seidel, Jurgen; Kakareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2010-04-01

    Introduction We describe a compact, portable dual-gamma camera system (named "MONICA" for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed "looking up" through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1 cm), 2.2 mm; sensitivity, 149 cps (counts per second)/MBq (5.5 cps/μCi); energy resolution (FWHM, full width at half maximum), 10.8%; count rate linearity (count rate vs. activity), r2=0.99 for 0–185 MBq (0–5 mCi) in the field of view (FOV); spatial uniformity, <3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-min images acquired throughout the 168-h study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g., limited imaging space, portability and, potentially, cost are important.

  2. MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging

    PubMed Central

    Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2009-01-01

    Introduction We describe a compact, portable dual-gamma camera system (named “MONICA” for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed “looking up” through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1-cm), 2.2-mm; sensitivity, 149 cps/MBq (5.5 cps/μCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r2 = 0.99 for 0–185 MBq (0–5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g. limited imaging space, portability, and, potentially, cost are important. PMID:20346864

  3. Temporally consistent virtual camera generation from stereo image sequences

    NASA Astrophysics Data System (ADS)

    Fox, Simon R.; Flack, Julien; Shao, Juliang; Harman, Phil

    2004-05-01

    The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for the creation of 3D video content. A range of glasses-free multi-viewer screens have been developed that require as many as 9 views generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means to generate multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity generated depth information and source footage, and to generate depth information in a temporally smooth manner for both left and right eye image sequences. A view morphing approach to multiple view rendering is described which provides an excellent 3D effect on a range of glasses-free displays, while providing robustness to inaccurate stereo disparity calculations.

  4. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube’s phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electrons’ path in the horizontal and vertical (x-axis and y-axis, respectively) directions. Hence, each frame’s electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates. To prevent the phosphor from being exposed between frames, the image tube

  5. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-01-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a Hemispherical Sky Imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands: red, green and blue. To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated against spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for the complete distribution over the hemisphere deviates between both instruments by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra from 380 nm to 760 nm at various directions deviate between both instruments by less than 20% for all sky conditions.

  6. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-07-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a hemispherical sky imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands: red, green and blue. To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated using spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for the complete distribution over the hemisphere deviates between both instruments by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra of the wavelengths 380-760 nm at various directions deviate between both instruments by less than 20% for all sky conditions.
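
    The core of such a camera-to-spectrum mapping is a regression from the three RGB band values to radiance at each wavelength bin. The sketch below uses a second-order polynomial feature expansion fitted by least squares; the feature set and function names are illustrative assumptions, since the paper's actual non-linear regression functions are sky-condition specific.

```python
import numpy as np

def design(rgb):
    """Second-order polynomial features of an (N, 3) RGB array."""
    r, g, b = rgb.T
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r * r, g * g, b * b])

def fit_radiance(rgb, radiance):
    """Least-squares coefficients mapping RGB features to radiance,
    one coefficient column per wavelength bin."""
    coef, *_ = np.linalg.lstsq(design(rgb), radiance, rcond=None)
    return coef

def predict(rgb, coef):
    return design(rgb) @ coef

# Synthetic check: radiance generated from known coefficients
rng = np.random.default_rng(0)
rgb = rng.random((30, 3))
radiance = design(rgb) @ rng.random((10, 5))   # 5 wavelength bins
coef = fit_radiance(rgb, radiance)
```

    In a real calibration the training pairs would come from simultaneous HSI and spectroradiometer measurements, with separate coefficient sets per sky condition.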

  7. A portable device for small animal SPECT imaging in clinical gamma-cameras

    NASA Astrophysics Data System (ADS)

    Aguiar, P.; Silva-Rodríguez, J.; González-Castaño, D. M.; Pino, F.; Sánchez, M.; Herranz, M.; Iglesias, A.; Lois, C.; Ruibal, A.

    2014-07-01

    Molecular imaging has been reshaping clinical practice in recent decades, providing practitioners with non-invasive ways to obtain functional in-vivo information on a diversity of relevant biological processes. The use of molecular imaging techniques in preclinical research is equally beneficial, but has spread more slowly because of the difficulty of justifying a costly investment dedicated only to animal scanning. An alternative for lowering the costs is to repurpose parts of old clinical scanners to build new preclinical ones. Following this trend, we have designed, built, and characterized the performance of a portable system that can be attached to a clinical gamma-camera to form a preclinical single photon emission computed tomography scanner. Our system offers image quality comparable to commercial systems at a fraction of their cost, and can be used with any existing gamma-camera with just an adaptation of the reconstruction software.

  8. Single Image Camera Calibration in Close Range Photogrammetry for Solder Joint Analysis

    NASA Astrophysics Data System (ADS)

    Heinemann, D.; Knabner, S.; Baumgarten, D.

    2016-06-01

    Printed circuit boards (PCBs) play an important role in the manufacturing of electronic devices. To ensure correct function of the PCBs, a certain amount of solder paste is needed during the placement of components. The aim of the current research is to develop a real-time, closed-loop solution for the analysis of the printing process in which solder is printed onto PCBs. Close-range photogrammetry allows for determination of the solder volume and a subsequent correction if necessary. Photogrammetry is an image-based method for three-dimensional reconstruction from two-dimensional image data of an object. A precise camera calibration is indispensable for an accurate reconstruction. In our particular application it is not possible to use calibration methods with two-dimensional calibration targets. Therefore a special calibration target was developed and manufactured which allows for single-image camera calibration.
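
    Single-image calibration with a 3-D target is classically done with the direct linear transform (DLT): six or more non-coplanar point correspondences determine the 3x4 projection matrix up to scale. The sketch below is a generic DLT, not the paper's specific target or method; the synthetic camera parameters are invented for the check.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Direct linear transform: estimate the 3x4 projection matrix P
    from at least six non-coplanar 3-D/2-D point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)   # null vector = flattened P

def project(P, X):
    x = P @ np.append(np.asarray(X, float), 1.0)
    return x[:2] / x[2]

# Synthetic 3-D target (cube corners) seen by a camera 5 units away
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P_true = K @ np.hstack([np.eye(3), [[0.], [0.], [5.]]])
world = np.array([[x, y, z] for x in (0, 1)
                  for y in (0, 1) for z in (0, 1)], float)
image = np.array([project(P_true, X) for X in world])
P_est = dlt_calibrate(world, image)
```

    With noisy detections the DLT estimate is usually refined by non-linear minimisation of the reprojection error, and lens distortion is modelled separately.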

  9. Remote classification from an airborne camera using image super-resolution.

    PubMed

    Woods, Matthew; Katsaggelos, Aggelos

    2017-02-01

    The image processing technique known as super-resolution (SR), which attempts to increase the effective pixel sampling density of a digital imager, has gained rapid popularity over the last decade. The majority of the literature focuses on its ability to provide results that are visually pleasing to a human observer. In this paper, we instead examine the ability of SR to improve the resolution-critical capability of an imaging system to perform a classification task from a remote location, specifically from an airborne camera. In order to focus the scope of the study, we address and quantify results for the narrow case of text classification. However, we expect the results to generalize to a large set of related remote classification tasks. We generate theoretical results through simulation, which are corroborated by experiments with a camera mounted on a DJI Phantom 3 quadcopter.
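
    A minimal illustration of multi-frame SR is classic shift-and-add: low-resolution frames with known sub-pixel shifts are placed onto a finer grid and averaged. This is a textbook sketch under idealised assumptions (exact shifts, no blur kernel), not the SR algorithm evaluated in the paper.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-res pixel on a fine grid at its sub-pixel offset
    and average the accumulated samples (shifts in low-res pixels)."""
    h, w = frames[0].shape
    H, W = h * factor, w * factor
    acc, cnt = np.zeros((H, W)), np.zeros((H, W))
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * factor + int(round(dy * factor))) % H
        xs = (np.arange(w) * factor + int(round(dx * factor))) % W
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1   # leave never-observed grid cells at zero
    return acc / cnt

# Two 2x2 frames sampled from a 4x4 scene at half-pixel offsets
hi = np.arange(16.).reshape(4, 4)
frames = [hi[0::2, 0::2], hi[1::2, 1::2]]
sr = shift_and_add(frames, [(0.0, 0.0), (0.5, 0.5)], 2)
```

    Real SR pipelines add registration, deblurring and regularised interpolation of the unobserved grid cells on top of this accumulation step.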

  10. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
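
    Of the four phases, the stereo-matching idea is the simplest to sketch: with two orthogonal views, each particle's y coordinate appears in both images, so candidates are matched by y agreement and the third coordinate comes from the side view. The greedy matcher below is an illustrative toy under an idealised orthographic geometry, not the paper's closed mathematical solution, and omits calibration, overlap decomposition and tracking.

```python
def stereo_match(front, side, tol=0.5):
    """Match particles between two orthogonal views by their shared y
    coordinate.  `front` holds (x, y) positions from the front camera,
    `side` holds (y, z) from the side camera; the shared y is averaged
    to give (x, y, z)."""
    points, used = [], set()
    for x, y1 in front:
        cands = [j for j in range(len(side)) if j not in used]
        if not cands:
            break
        j = min(cands, key=lambda j: abs(side[j][0] - y1))
        y2, z = side[j]
        if abs(y2 - y1) <= tol:
            used.add(j)
            points.append((x, 0.5 * (y1 + y2), z))
    return points

front = [(1.0, 2.0), (4.0, 7.0)]   # (x, y) from front camera
side = [(7.1, 3.0), (2.0, 5.0)]    # (y, z) from side camera
print(stereo_match(front, side))
```

    Ambiguities (several particles sharing a y value) are what make the real matching phase, together with particle-overlap decomposition, substantially harder than this sketch.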

  11. The Calibration of High-Speed Camera Imaging System for ELMs Observation on EAST Tokamak

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Zhong, Fangchuan; Hu, Liqun; Yang, Jianhua; Yang, Zhendong; Gan, Kaifu; Zhang, Bin; East Team

    2016-09-01

    A tangential fast visible camera has been set up in EAST tokamak for the study of edge MHD instabilities such as ELM. To determine the 3-D information from CCD images, Tsai's two-stage technique was utilized to calibrate the high-speed camera imaging system for ELM study. By applying tiles of the passive stabilizers in the tokamak device as the calibration pattern, transformation parameters for transforming from a 3-D world coordinate system to a 2-D image coordinate system were obtained, including the rotation matrix, the translation vector, the focal length and the lens distortion. The calibration errors were estimated and the results indicate the reliability of the method used for the camera imaging system. Through the calibration, some information about ELM filaments, such as positions and velocities, was obtained from images of H-mode CCD videos. Supported by National Natural Science Foundation of China (No. 11275047), the National Magnetic Confinement Fusion Science Program of China (No. 2013GB102000)

  12. Note: In vivo pH imaging system using luminescent indicator and color camera

    NASA Astrophysics Data System (ADS)

    Sakaue, Hirotaka; Dan, Risako; Shimizu, Megumi; Kazama, Haruko

    2012-07-01

    A microscopic in vivo pH imaging system is developed that can capture both luminescent and color images. The former gives a quantitative measurement of the pH distribution in vivo. The latter captures structural information that can be overlaid on the pH distribution, correlating the structure of a specimen with its pH distribution. By using a digital color camera, a luminescent image as well as a color image is obtained. The system uses HPTS (8-hydroxypyrene-1,3,6-trisulfonate) as a luminescent pH indicator for the luminescent imaging. Filter units mounted in the microscope extract two luminescent images for the excitation-ratio method. The ratio of the two images is converted to a pH distribution through an a priori pH calibration. An application of the system to epidermal cells of Lactuca sativa L. is shown.
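
    The excitation-ratio conversion itself is a per-pixel operation: divide the two luminescent images and map the ratio to pH through the calibration. The linear-in-log calibration form and coefficients below are hypothetical stand-ins for the paper's measured a priori calibration curve.

```python
import numpy as np

def ph_from_ratio(img_ex1, img_ex2, a, b):
    """Excitation-ratio method: convert the per-pixel ratio of two
    luminescent images to pH.  The form pH = a*log10(I1/I2) + b is a
    hypothetical calibration, fitted in practice from buffer images."""
    return a * np.log10(img_ex1 / img_ex2) + b

# Two tiny synthetic luminescent images under the two excitations
ph = ph_from_ratio(np.array([[10., 100.]]),
                   np.array([[10., 10.]]), a=1.0, b=7.0)
print(ph)
```

    Taking a ratio cancels indicator concentration and illumination non-uniformity, which is why the method yields a quantitative pH map rather than a raw intensity map.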

  13. SU-E-E-06: Teaching About the Gamma Camera and Ultrasound Imaging

    SciTech Connect

    Lowe, M; Spiro, A; Vogel, R; Donaldson, N; Gosselin, C

    2015-06-15

    Purpose: Instructional modules on applications of physics in medicine are being developed. The target audience consists of students who have had an introductory undergraduate physics course. This presentation will concentrate on an active learning approach to teach the principles of the gamma camera. There will also be a description of an apparatus to teach ultrasound imaging. Methods: Since a real gamma camera is not feasible in the undergraduate classroom, we have developed two types of optical apparatus that teach the main principles. To understand the collimator, LEDs mimic gamma emitters in the body, and the photons pass through an array of tubes. The distance, spacing, diameter, and length of the tubes can be varied to understand the effect on the resolution of the image. To determine the positions of the gamma emitters, a second apparatus uses a movable green laser, fluorescent plastic in lieu of the scintillation crystal, acrylic rods that mimic the PMTs, and a photodetector to measure the intensity. The position of the laser is calculated with a centroid algorithm. To teach the principles of ultrasound imaging, we are using the sound head and pulser box of an educational product, a variable gain amplifier, a rotation table, a digital oscilloscope, Matlab software, and phantoms. Results: Gamma camera curriculum materials were implemented in the classroom at Loyola in 2014 and 2015. Written work shows good knowledge retention and a more complete understanding of the material. Preliminary ultrasound imaging materials were run in 2015. Conclusion: Active learning methods add another dimension to descriptions in textbooks and are effective in keeping the students engaged during class time. The teaching apparatus for the gamma camera and ultrasound imaging can be expanded to include more cases, and could potentially improve students’ understanding of artifacts and distortions in the images.
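
    The centroid algorithm mentioned above is the same weighted-mean idea used in Anger-logic gamma cameras: the emitter position is estimated as the intensity-weighted mean of the detector positions. A minimal 1-D sketch (detector positions and intensities invented for illustration):

```python
import numpy as np

def centroid_position(intensities, positions):
    """Centroid (Anger-logic) estimate: the emitter position is the
    intensity-weighted mean of the photodetector positions."""
    w = np.asarray(intensities, float)
    return float(np.sum(w * np.asarray(positions, float)) / np.sum(w))

# Symmetric response around the two centre detectors -> midpoint
print(centroid_position([1, 3, 3, 1], [0.0, 1.0, 2.0, 3.0]))
```

    The same formula applied along two axes localises the laser spot (or scintillation event) in the plane, which is exactly what students compute from the photodetector readings in the teaching apparatus.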

  14. Smart-aggregation imaging for single molecule localisation with SPAD cameras

    PubMed Central

    Gyongy, Istvan; Davies, Amy; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2016-01-01

    Single molecule localisation microscopy (SMLM) has become an essential part of the super-resolution toolbox for probing cellular structure and function. The rapid evolution of these techniques has outstripped detector development and faster, more sensitive cameras are required to further improve localisation certainty. Single-photon avalanche photodiode (SPAD) array cameras offer single-photon sensitivity, very high frame rates and zero readout noise, making them a potentially ideal detector for ultra-fast imaging and SMLM experiments. However, performance traditionally falls behind that of emCCD and sCMOS devices due to lower photon detection efficiency. Here we demonstrate, both experimentally and through simulations, that the sensitivity of a binary SPAD camera in SMLM experiments can be improved significantly by aggregating only frames containing signal, and that this leads to smaller datasets and competitive performance with that of existing detectors. The simulations also indicate that with predicted future advances in SPAD camera technology, SPAD devices will outperform existing scientific cameras when capturing fast temporal dynamics. PMID:27876857

  15. Smart-aggregation imaging for single molecule localisation with SPAD cameras

    NASA Astrophysics Data System (ADS)

    Gyongy, Istvan; Davies, Amy; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.

    2016-11-01

    Single molecule localisation microscopy (SMLM) has become an essential part of the super-resolution toolbox for probing cellular structure and function. The rapid evolution of these techniques has outstripped detector development and faster, more sensitive cameras are required to further improve localisation certainty. Single-photon avalanche photodiode (SPAD) array cameras offer single-photon sensitivity, very high frame rates and zero readout noise, making them a potentially ideal detector for ultra-fast imaging and SMLM experiments. However, performance traditionally falls behind that of emCCD and sCMOS devices due to lower photon detection efficiency. Here we demonstrate, both experimentally and through simulations, that the sensitivity of a binary SPAD camera in SMLM experiments can be improved significantly by aggregating only frames containing signal, and that this leads to smaller datasets and competitive performance with that of existing detectors. The simulations also indicate that with predicted future advances in SPAD camera technology, SPAD devices will outperform existing scientific cameras when capturing fast temporal dynamics.

  16. Smart-aggregation imaging for single molecule localisation with SPAD cameras.

    PubMed

    Gyongy, Istvan; Davies, Amy; Dutton, Neale A W; Duncan, Rory R; Rickman, Colin; Henderson, Robert K; Dalgarno, Paul A

    2016-11-23

    Single molecule localisation microscopy (SMLM) has become an essential part of the super-resolution toolbox for probing cellular structure and function. The rapid evolution of these techniques has outstripped detector development and faster, more sensitive cameras are required to further improve localisation certainty. Single-photon avalanche photodiode (SPAD) array cameras offer single-photon sensitivity, very high frame rates and zero readout noise, making them a potentially ideal detector for ultra-fast imaging and SMLM experiments. However, performance traditionally falls behind that of emCCD and sCMOS devices due to lower photon detection efficiency. Here we demonstrate, both experimentally and through simulations, that the sensitivity of a binary SPAD camera in SMLM experiments can be improved significantly by aggregating only frames containing signal, and that this leads to smaller datasets and competitive performance with that of existing detectors. The simulations also indicate that with predicted future advances in SPAD camera technology, SPAD devices will outperform existing scientific cameras when capturing fast temporal dynamics.
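
    The smart-aggregation idea reduces, at its core, to summing only those binary frames whose photon count exceeds a noise threshold. The sketch below illustrates that selection step on synthetic frames; the function name and threshold convention are illustrative, not the authors' implementation.

```python
import numpy as np

def smart_aggregate(frames, threshold):
    """Sum only the binary frames whose total photon count exceeds a
    noise threshold, discarding empty frames to shrink the dataset."""
    kept = [f for f in frames if f.sum() > threshold]
    if not kept:
        return np.zeros_like(frames[0]), 0
    return np.sum(kept, axis=0), len(kept)

# Three synthetic binary frames: one empty, two containing signal
frames = [np.zeros((2, 2)), np.ones((2, 2)), np.eye(2)]
img, n_kept = smart_aggregate(frames, 1.0)
print(n_kept)
```

    Because SPAD frames are zero-readout-noise, dropping signal-free frames removes only background, which is why the aggregated image gains sensitivity while the dataset shrinks.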

  17. Robust estimation of simulated urinary volume from camera images under bathroom illumination.

    PubMed

    Honda, Chizuru; Bhuiyan, Md Shoaib; Kawanaka, Haruki; Watanabe, Eiichi; Oguri, Koji

    2016-08-01

    The general uroflowmetry method involves a risk of nosocomial infection as well as time and effort for the recording. Medical institutions therefore need to measure voided volume simply and hygienically. A multiple cylindrical model that can estimate the fluid flow rate from an image photographed with a camera was proposed in an earlier study. This study implemented flow rate estimation using a general-purpose camera system (Raspberry Pi Camera Module) and the multiple cylindrical model. However, when measuring in the bathroom, illumination variation generates large amounts of noise in the extraction of the liquid region, so the estimation error becomes very large. In other words, the earlier study's camera specifications regarding shutter type and frame rate were too strict. In this study, we relax these specifications to achieve flow rate estimation with a general-purpose camera. To determine an appropriate approximate curve, we propose a binarization method using background subtraction at each scanning row and a curve approximation method using RANSAC. Finally, by evaluating the estimation accuracy of our experiment and comparing it with the earlier study's results, we show the effectiveness of the proposed method for flow rate estimation.
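
    RANSAC makes the curve approximation robust to the binarization noise: repeatedly fit the model to a minimal random sample, count the points it explains, and refit on the largest consensus set. The sketch below applies this to a parabola; the model choice and parameters are illustrative, not the paper's exact formulation.

```python
import numpy as np

def ransac_parabola(x, y, n_iter=200, tol=1.0, seed=0):
    """RANSAC fit of y = a*x^2 + b*x + c: sample 3 points, fit exactly,
    count inliers within tol, keep the model with most support, and
    refit by least squares on its inliers."""
    rng = np.random.default_rng(seed)
    best_in = None
    for _ in range(n_iter):
        idx = rng.choice(len(x), 3, replace=False)
        coef = np.polyfit(x[idx], y[idx], 2)
        inliers = np.abs(np.polyval(coef, x) - y) < tol
        if best_in is None or inliers.sum() > best_in.sum():
            best_in = inliers
    return np.polyfit(x[best_in], y[best_in], 2)

# Clean parabola y = 2x^2 + 1 with a few gross outliers injected
x = np.linspace(-5, 5, 50)
y = 2.0 * x**2 + 1.0
y[::10] += 50.0
coef = ransac_parabola(x, y)
print(np.round(coef, 3))
```

    An ordinary least-squares fit on the same data would be pulled strongly toward the outliers; RANSAC simply never lets them into the consensus set.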

  18. [The hyperspectral camera side-scan geometric imaging in any direction considering the spectral mixing].

    PubMed

    Wang, Shu-Min; Zhang, Ai-Wu; Hu, Shao-Xing; Sun, Wei-Dong

    2014-07-01

    In order to correct the image distortion in hyperspectral camera side-scan geometric imaging, an image pixel geo-referencing algorithm is derived in detail in the present paper, suitable for linear push-broom camera side-scan imaging on the ground in any direction. It takes the orientation of objects in the navigation coordinate system into account. Combined with the ground sampling distance of the geo-referenced image and the area of push-broom imaging, the general process of dividing the geo-referenced image into grids is also presented. The new image rows and columns are obtained by dividing the geo-referenced image area by the ground sampling distance. Considering the error produced by rounding in the pixel-grid generation process, and the spectral mixing caused by the traditional direct spectral sampling method during image correction, an improved spectral sampling method based on weighted fusion is proposed. It takes the area proportions of the adjacent pixels within a newly generated pixel as coefficients, which are then normalized to avoid spectral overflow, so that the new pixel's spectrum is a combination of the spectra of the geo-referenced adjacent pixels. Finally, numerous push-broom imaging experiments were performed on the ground, and the distorted images were corrected according to the algorithm proposed above. The results show that the linear image distortion correction algorithm is valid and robust. In addition, multiple samples were selected in the corrected images to verify the spectral data. The results indicate that the improved spectral sampling method is better than direct spectral sampling. This work provides a reference for the application of similar products on the ground.

  19. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e., normalized) to relative reflectance, subsampled and spatially registered to match with counterpart pixels in hyperspectral images that were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) was previously developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective especially with higher order polynomial regressions than PMLR. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image

  20. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    SciTech Connect

    Andreozzi, Jacqueline M. Glaser, Adam K.; Zhang, Rongxiao; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2015-02-15

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary

  1. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    PubMed Central

    Andreozzi, Jacqueline M.; Zhang, Rongxiao; Glaser, Adam K.; Jarvis, Lesley A.; Pogue, Brian W.; Gladstone, David J.

    2015-01-01

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary
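
    The paper's adequate-detection criterion is a simple full-scale fraction test: the mean background-subtracted count in the region of interest must exceed 0.5% of the 16-bit maximum (about 327 counts). A one-function sketch of that check (function name invented for illustration):

```python
def adequate_detection(mean_count, bit_depth=16, fraction=0.005):
    """Detection is 'adequate' when the mean background-subtracted
    count in the region of interest exceeds the given fraction of the
    camera's full-scale value (0.5% of 2**16 - 1 is about 327)."""
    return mean_count > fraction * (2 ** bit_depth - 1)

print(adequate_detection(400), adequate_detection(300))
```

    Framed this way, the camera comparison reduces to finding the highest frame rate at which each device still clears the threshold for a single Linac pulse.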

  2. Single camera system for multi-wavelength fluorescent imaging in the heart.

    PubMed

    Yamanaka, Takeshi; Arafune, Tatsuhiko; Shibata, Nitaro; Honjo, Haruo; Kamiya, Kaichiro; Kodama, Itsuo; Sakuma, Ichiro

    2012-01-01

    Optical mapping is a powerful method for measuring cardiac electrophysiological phenomena such as membrane potential (V(m)), intracellular calcium (Ca(2+)), and other electrophysiological parameters. To measure two parameters simultaneously, a dual mapping system using two cameras is often employed, but no method exists to measure three or more parameters. To exploit the full potential of fluorescence imaging, an innovative method for measuring three or more parameters is needed. In this study, we present a new optical mapping system that records multiple parameters using a single camera. Our system consists of one camera, custom-made optical lens units, and a custom-made filter wheel. The optical lens units are designed to focus the fluorescence light at the filter position and form an image on the camera's sensor. To obtain optical signals of high quality, the efficiency of light collection was carefully considered in the design of the optical system; the developed system has an object-space numerical aperture (NA) of 0.1 and an image-space NA of 0.23. The filter wheel is rotated by a motor, switching filters to match the required fluorescence wavelength. The camera exposure and filter switching are synchronized by a phase-locked loop, which allows the system to record multiple fluorescent signals on alternating frames. To validate the performance of this system, we observed V(m) and Ca(2+) dynamics simultaneously (frame rate: 125 fps) in a Langendorff-perfused rabbit heart. First, we applied basic stimuli to the heart base (cycle length: 500 ms) and observed planar waves; the V(m) and Ca(2+) waveforms showed upstrokes synchronized with the pacing cycle length. In addition, we recorded V(m) and Ca(2+) signals during ventricular fibrillation induced by burst pacing. These experiments demonstrate the efficacy and applicability of our method for cardiac electrophysiological research.
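
    The frame-by-frame alternation described above amounts to demultiplexing one camera stream into N channel streams. A minimal sketch (function and variable names are illustrative), assuming frame k of the stream belongs to filter k mod N:

```python
import numpy as np

def demultiplex(frames, n_channels):
    """Split an interleaved single-camera stream into per-filter streams,
    each at 1/n_channels of the camera frame rate."""
    return [frames[c::n_channels] for c in range(n_channels)]

frames = np.arange(10)           # stand-in frame indices from the camera
vm, ca = demultiplex(frames, 2)  # e.g. membrane potential and calcium
print(vm.tolist(), ca.tolist())  # [0, 2, 4, 6, 8] [1, 3, 5, 7, 9]
```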

  3. Real-time analysis of laser beams by simultaneous imaging on a single camera chip

    NASA Astrophysics Data System (ADS)

    Piehler, S.; Boley, M.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    The fundamental parameters of a laser beam, such as the exact position and size of the focus or the beam quality factor M², yield vital information for both laser developers and end-users. However, each of these parameters can change significantly on a short time scale due to thermally induced effects in the processing optics or in the laser source itself, leading to process instabilities and non-reproducible results. In order to monitor the transient behavior of these effects, we have developed a camera-based measurement system that enables full laser beam characterization online. A novel monolithic beam splitter has been designed that generates a 2D array of images on a single camera chip, each of which corresponds to an intensity cross section of the beam at a plane along the propagation axis, with well-defined spacings between planes. Thus, using the full area of the camera chip, a large number of measurement planes is achieved, yielding a measurement range sufficient for full beam characterization conforming to ISO 11146 for a broad range of incoming beam parameters. The exact beam diameters in each plane are derived by calculating the second-order intensity moments of the individual intensity slices. The processing time needed to carry out both the background filtering and the image-processing operations for the full analysis of a single camera image is in the range of a few milliseconds. Hence, the measurement frequency of our system is mainly limited by the frame rate of the camera.
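
    The second-order-moment beam diameter mentioned above (the ISO 11146 D4σ definition) can be sketched as follows; the synthetic Gaussian beam and unit pixel pitch are illustrative:

```python
import numpy as np

def beam_diameter_4sigma(intensity, pitch=1.0):
    """D4-sigma beam diameters (x, y) from second-order intensity moments
    of a background-subtracted intensity slice (ISO 11146 definition)."""
    I = np.asarray(intensity, dtype=float)
    total = I.sum()
    ys, xs = np.indices(I.shape)
    # First-order moments: centroid position
    cx = (xs * I).sum() / total
    cy = (ys * I).sum() / total
    # Second-order central moments: variance of the intensity distribution
    sx2 = (((xs - cx) ** 2) * I).sum() / total
    sy2 = (((ys - cy) ** 2) * I).sum() / total
    return 4 * pitch * np.sqrt(sx2), 4 * pitch * np.sqrt(sy2)

# Gaussian test beam: D4sigma of a TEM00 beam equals twice its 1/e^2 radius
y, x = np.mgrid[0:200, 0:200]
w = 20.0  # 1/e^2 radius in pixels
gauss = np.exp(-2 * ((x - 100) ** 2 + (y - 100) ** 2) / w ** 2)
dx, dy = beam_diameter_4sigma(gauss)
print(round(dx, 1), round(dy, 1))  # 40.0 40.0
```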

  4. Optimal Design of Anger Camera for Bremsstrahlung Imaging: Monte Carlo Evaluation

    PubMed Central

    Walrand, Stephan; Hesse, Michel; Wojcik, Randy; Lhommel, Renaud; Jamar, François

    2014-01-01

    A conventional Anger camera is not adapted to bremsstrahlung imaging and, as a result, even using a reduced energy acquisition window, geometric x-rays represent <15% of the recorded events. This increases noise, limits the contrast, and reduces the quantification accuracy. Monte Carlo (MC) simulations of energy spectra showed that a camera based on a 30-mm-thick BGO crystal and equipped with a high-energy pinhole collimator is well adapted to bremsstrahlung imaging. The total scatter contamination is reduced by a factor of 10 versus a conventional NaI camera equipped with a high-energy parallel-hole collimator, enabling acquisition using an extended energy window ranging from 50 to 350 keV. By using the recorded event energy in the reconstruction method, a shorter acquisition time and a reduced orbit range become usable, allowing the design of a simplified mobile gantry that is more convenient for use in a busy catheterization room. After injecting a safe activity, a fast single photon emission computed tomography scan could be performed without moving the catheter tip in order to assess the liver dosimetry and estimate the additional safe activity that could still be injected. Further long-running MC simulations of realistic acquisitions will allow assessing the quantification capability of such a system. In parallel, a dedicated bremsstrahlung prototype camera reusing PMT–BGO blocks from a retired PET system is currently under design for further evaluation. PMID:24982849

  5. IR camera temperature resolution enhancing using computer processing of IR image

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2016-05-01

    As is well known, the application of IR cameras to security problems is a very promising approach. In previous papers, we demonstrated the possibility of using a passive THz camera to observe a temperature difference on human skin when that difference is caused by different temperatures inside the body. To support this claim, we performed a similar physical experiment using an IR camera. We show the possibility of viewing a temperature trace on human skin caused by a temperature change inside the body due to drinking water. We use a new approach to the computer processing of IR images, based on a correlation function, whose application enhances the temperature resolution of the cameras. We analyze IR images of a person drinking water and follow the temperature trace on the skin caused by the changing temperature inside the body. Some experiments included measurements of body temperature through a shirt, in which we attempted to observe the body temperature change under these conditions. The phenomena shown are very important for the detection of forbidden objects, concealed under clothes or inside the human body, using non-destructive inspection without X-rays.

  6. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered. A touch screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments digital micromirror device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes, including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
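
    The Hadamard-multiplexing mode can be illustrated with a toy sketch. A DMD actually applies binary (0/1) masks derived from an S-matrix, but the multiplex-and-invert idea is the same; the 8-channel scene here is synthetic:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical 8-element spectral scene measured through Hadamard masks:
# each measurement mixes many channels at once, which is the source of
# the multiplex (Fellgett) SNR advantage over scanning one channel at a time.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, 8)
H = hadamard(8)
measurements = H @ scene              # encoded readings on the detector
recovered = (H.T @ measurements) / 8  # H is orthogonal: H^T H = n I
print(np.allclose(recovered, scene))  # True
```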

  7. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    NASA Astrophysics Data System (ADS)

    Cheng, Andrew Y. S.; Pau, Michael C. Y.

    1996-09-01

    A high-resolution computer-controlled CCD image capturing system was developed using a 12-bit 1024×1024-pixel CCD camera and motorized RGB filters to capture images with color depth up to 36 bits. The filters separate the major color components and collect them individually, while the CCD camera maintains the spatial resolution and detector fill factor; the color separation is thus done optically rather than electronically. Operation is simple: objects such as color photos, slides, and even x-ray transparencies are placed under the camera system, and the necessary parameters such as integration time, mixing level, and light intensity are adjusted automatically by an on-line expert system. This greatly reduces restrictions on what can be captured. This approach saves considerable time in adjusting image quality and gives much more flexibility in manipulating the captured object, even a 3D object, with minimal setup fixtures. In addition, the cross-sectional dimensions of a 3D object can be analyzed by adapting a fiber-optic ring light source, which is particularly useful for non-contact metrology of 3D structures. The digitized information is stored in an easily transferable format, and users can also perform special LUT mappings automatically or manually. Applications of the system include medical image archiving, printing quality control, and 3D machine vision.
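
    Assembling the three sequential 12-bit filtered exposures into one 36-bit color image can be sketched as follows (a 3-channel 16-bit container is used since no native 12-bit dtype exists; the frames and function name are illustrative):

```python
import numpy as np

def assemble_rgb(red12, green12, blue12):
    """Stack three 12-bit filtered exposures into a 36-bit-deep color image."""
    rgb = np.stack([red12, green12, blue12], axis=-1).astype(np.uint16)
    assert rgb.max() < 4096, "each channel must stay within 12 bits"
    return rgb

r = np.full((4, 4), 4095, dtype=np.uint16)  # saturated red plane
g = np.full((4, 4), 2048, dtype=np.uint16)
b = np.zeros((4, 4), dtype=np.uint16)
img = assemble_rgb(r, g, b)
print(img.shape, img.dtype)  # (4, 4, 3) uint16
```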

  8. New design of a gamma camera detector with reduced edge effect for breast imaging

    NASA Astrophysics Data System (ADS)

    Yeon Hwang, Ji; Lee, Seung-Jae; Baek, Cheol-Ha; Hyun Kim, Kwang; Hyun Chung, Yong

    2011-05-01

    In recent years, there has been growing interest in developing small gamma cameras dedicated to breast imaging. We designed a new detector with a trapezoidal shape to expand the field of view (FOV) of the camera without increasing its dimensions. To find optimal parameters, images of point sources at the edge area were simulated with DETECT2000 as functions of the trapezoidal angle and the optical treatment of the crystal side surface. Our detector employs a monolithic CsI(Tl) crystal with dimensions of 48.0×48.0×6.0 mm coupled to an array of photo-sensors. The side surfaces of the crystal were treated with three different surface finishes: black absorber, metal reflector, and white reflector. The trapezoidal angle varied from 45° to 90° in steps of 15°. Gamma events were generated at 15 evenly spaced points with 1.0 mm spacing along the X-axis, starting 1.0 mm away from the side surface. Ten thousand gamma events were simulated at each location, and images were formed by applying Anger logic. The results demonstrated that all 15 points could be identified only for the crystal with a trapezoidal shape having a 45° angle and a white reflector on the side surface. In conclusion, our new detector proved to be a reliable design for expanding the FOV of a small gamma camera for breast imaging.
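
    The Anger-logic position estimate used to form the images is the signal-weighted centroid of the photosensor outputs. A minimal sketch with a hypothetical 3x3 sensor layout:

```python
import numpy as np

def anger_position(signals, xs, ys):
    """Classic Anger logic: the event position is the signal-weighted
    centroid of the photosensor array outputs."""
    s = np.asarray(signals, dtype=float)
    return (xs * s).sum() / s.sum(), (ys * s).sum() / s.sum()

# Hypothetical 3x3 sensor array (positions in mm) with scintillation light
# shared between the centre sensor and its right-hand neighbour
xs, ys = np.meshgrid([-16.0, 0.0, 16.0], [-16.0, 0.0, 16.0])
signals = np.zeros((3, 3))
signals[1, 1] = 300.0  # centre sensor
signals[1, 2] = 100.0  # right neighbour
x, y = anger_position(signals, xs, ys)
print(x, y)  # 4.0 0.0 -- pulled toward the brighter sensor
```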

  9. Initial Observations by the MRO Mars Color Imager and Context Camera

    NASA Astrophysics Data System (ADS)

    Malin, M. C.

    2006-12-01

    The Mars Color Imager (MARCI) on MRO is a copy of the wide-angle instrument flown on the unsuccessful Mars Climate Orbiter. It consists of two optical systems (visible and ultraviolet) projecting images onto a single CCD detector. The field of view of the optics is 180 degrees cross-track, sufficient to image limb-to-limb even when the MRO spacecraft is pointing off-nadir by 20 degrees. MARCI can image in two UV (260 and 320 ±15 nm) and five visible (425, 550, 600, 650, and 750 nm, ±25 nm) channels. The visible channels have a nadir scale of about 900 m and a limb scale of just under 5 km; the UV channels are summed to 7-8 km nadir scale. Daily global observations, consisting of 12 terminator-to-terminator, limb-to-limb swaths, are used to monitor meteorological conditions, clouds, dust storms, and ozone concentration (a surrogate for water). During high data rate periods, MARCI can reproduce the Mariner 9 global image mosaic every day. The Context Camera (CTX) acquires 30 km wide, 6 m/pixel images and is a new camera derived from the MCO medium-angle MARCI. Its primary purpose is to provide spatial context for MRO instruments with higher resolution or more limited fields of view. Scientifically, CTX acquires images that are nearly as high in spatial resolution as those of the Mars Orbiter Camera aboard MGS. CTX can cover about 9 percent of Mars, but stereoscopic coverage, overlap for mosaics, and re-imaging of locations to search for changes will reduce this coverage significantly.

  10. Multispectral integral imaging acquisition and processing using a monochrome camera and a liquid crystal tunable filter.

    PubMed

    Latorre-Carmona, Pedro; Sánchez-Ortiga, Emilio; Xiao, Xiao; Pla, Filiberto; Martínez-Corral, Manuel; Navarro, Héctor; Saavedra, Genaro; Javidi, Bahram

    2012-11-05

    This paper presents an acquisition system and a procedure to capture 3D scenes in different spectral bands. The acquisition system is formed by a monochrome camera and a Liquid Crystal Tunable Filter (LCTF) that allows images to be acquired at different spectral bands in the [480, 680] nm wavelength interval. The Synthetic Aperture Integral Imaging acquisition technique is used to obtain the elemental images for each wavelength. These elemental images are used to computationally reconstruct the 3D scene at different depth planes. The 3D profile of the acquired scene is also obtained by minimizing the variance of the contributions of the elemental images at each image pixel. Experimental results show the viability of recovering the 3D multispectral information of the scene. Integration of 3D and multispectral information could have important benefits in different areas, including skin cancer detection, remote sensing, and pattern recognition, among others.
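
    The variance-minimization depth estimate can be sketched in one dimension of parallax: back-project the elemental images at each candidate depth and keep, per pixel, the depth at which the views agree best. All names and the linear pixels-per-depth-step disparity model below are illustrative:

```python
import numpy as np

def depth_from_variance(elemental, shifts_per_depth):
    """Back-project elemental images by each candidate depth's parallax
    shift; per pixel, pick the depth index with minimum variance across
    the stack (i.e. where the views agree best)."""
    var_stack = []
    for shifts in shifts_per_depth:
        stack = np.stack([np.roll(img, s, axis=1)
                          for img, s in zip(elemental, shifts)])
        var_stack.append(stack.var(axis=0))
    return np.argmin(np.stack(var_stack), axis=0)

# Synthetic test: 3 views of a flat scene whose true disparity is
# 2 px per camera index; candidate depths imply 0..3 px per index
base = np.random.default_rng(4).uniform(size=(8, 16))
elemental = [np.roll(base, -2 * k, axis=1) for k in range(3)]
shifts_per_depth = [[d * k for k in range(3)] for d in range(4)]
depth_map = depth_from_variance(elemental, shifts_per_depth)
print(np.unique(depth_map))  # [2]
```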

  11. Real Time Speed Estimation of Moving Vehicles from Side View Images from an Uncalibrated Video Camera

    PubMed Central

    Doğan, Sedat; Temiz, Mahir Serhan; Külür, Sıtkı

    2010-01-01

    In order to estimate the speed of a moving vehicle from side-view camera images, velocity vectors of a sufficient number of reference points identified on the vehicle must be found using frame images. This procedure involves two main steps. In the first step, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked across at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, with displacements measured in pixel units. The magnitudes of the computed vectors in image space must then be transformed to object space to find their absolute values. This transformation requires an image-to-object-space mapping, which is achieved by means of the calibration and orientation parameters of the video frame images. This paper presents proposed solutions for the problems mentioned here in using side-view camera images. PMID:22399909
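
    In the simplest calibrated case, the image-to-object conversion described above reduces to scaling a pixel displacement by a ground-sample distance and the inter-frame time. A toy sketch (the scale factor and points are illustrative; the paper's full solution uses calibration and orientation parameters rather than a single scale):

```python
import numpy as np

def speed_kmh(p1, p2, dt, metres_per_pixel):
    """Speed from a tracked point's displacement between two frames:
    pixel displacement -> metres (hypothetical uniform scale) -> km/h."""
    disp_px = np.linalg.norm(np.subtract(p2, p1))
    return disp_px * metres_per_pixel / dt * 3.6

# A point moves 30 px between two frames at 25 fps; scale 0.02 m/px
v = speed_kmh((100, 50), (130, 50), dt=1 / 25, metres_per_pixel=0.02)
print(round(v, 1))  # 54.0
```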

  12. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    NASA Astrophysics Data System (ADS)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good quality image or video, to reduce patient discomfort and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and the limitations of the human vision system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented on a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. Control and data transfer are done by a USB 3.0 endpoint on the computer. The fully developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
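
    Of the listed pipeline stages, gamma correction illustrates the usual FPGA idiom well: precompute a lookup table so each pixel costs a single memory read. A sketch (the 8-bit depth and gamma value are illustrative, not taken from the paper):

```python
import numpy as np

# Precomputed gamma LUT: in hardware this table lives in block RAM and
# the per-pixel "computation" is just an indexed read.
gamma = 2.2
lut = np.round(
    255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)
).astype(np.uint8)

frame = np.array([[0, 64, 128, 255]], dtype=np.uint8)
corrected = lut[frame]  # pure indexing: no per-pixel arithmetic at runtime
print(corrected.shape == frame.shape)  # True
```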

  13. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe Photoshop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors, yielding visually more pleasing images than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
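
    Applying per-channel transfer curves of the kind built interactively in Photoshop amounts to piecewise-linear interpolation through a few control points. A sketch (the control points below are illustrative: lifting red shadows and compressing green highlights):

```python
import numpy as np

def apply_curves(img, curves):
    """Apply a (control_x, control_y) transfer curve to each channel
    of an 8-bit RGB image via piecewise-linear interpolation."""
    out = np.empty_like(img)
    for c, (xs, ys) in enumerate(curves):
        out[..., c] = np.interp(img[..., c], xs, ys).astype(img.dtype)
    return out

curves = [
    ([0, 64, 255], [0, 96, 255]),    # red: open up shadows
    ([0, 192, 255], [0, 160, 255]),  # green: tame highlights
    ([0, 255], [0, 255]),            # blue: identity
]
img = np.full((2, 2, 3), 64, dtype=np.uint8)
print(apply_curves(img, curves)[0, 0])  # [96 53 64]
```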

  14. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows the intrinsic and the extrinsic parameters to be strictly separated, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized to measure other solids that cannot be characterized by the identification of simple geometric primitives.
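
    As a stand-in for the paper's parametrized-curve fit (the actual coil-outline model under projection is more involved), here is a least-squares fit of the simplest relevant primitive, a circle, to outline points, using the linear (Kasa) formulation x² + y² + ax + by + c = 0:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic circle fit: solve [x y 1][a b c]^T = -(x^2 + y^2)
    in the least-squares sense, then recover centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, -(x ** 2 + y ** 2), rcond=None)[0]
    cx, cy = -a / 2, -b / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Synthetic outline points on a circle of centre (5, -2), radius 3
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x, y = 5 + 3 * np.cos(t), -2 + 3 * np.sin(t)
cx, cy, r = fit_circle(x, y)
print(round(cx, 3), round(cy, 3), round(r, 3))  # 5.0 -2.0 3.0
```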

  15. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images, and many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts, and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows, and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains a great deal of important information independently of the Fourier magnitude. In this work we therefore propose a phase-based recognition system exploiting phase-congruency features for illumination and scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
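
    The claim that the Fourier phase carries the dominant information can be checked numerically: build a hybrid from one image's phase and the other's magnitude, and see which source it resembles. The images here are synthetic:

```python
import numpy as np

def corr(u, v):
    """Pearson correlation between two images, flattened."""
    return np.corrcoef(u.ravel(), v.ravel())[0, 1]

rng = np.random.default_rng(1)
a = rng.uniform(size=(64, 64))
b = rng.uniform(size=(64, 64))
Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
# Hybrid: magnitude spectrum of b, phase spectrum of a
hybrid = np.fft.ifft2(np.abs(Fb) * np.exp(1j * np.angle(Fa))).real
print(corr(hybrid, a) > corr(hybrid, b))  # True: the phase donor dominates
```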

  16. Digital camera workflow for high dynamic range images using a model of retinal processing

    NASA Astrophysics Data System (ADS)

    Tamburrino, Daniel; Alleysson, David; Meylan, Laurence; Süsstrunk, Sabine

    2008-02-01

    We propose a complete digital camera workflow to capture and render high dynamic range (HDR) static scenes, from RAW sensor data to an output-referred encoded image. In traditional digital camera processing, demosaicing is one of the first operations done after scene analysis, followed by rendering operations such as color correction and tone mapping. In our workflow, which is based on a model of retinal processing, most of the rendering steps are performed before demosaicing. This reduces the complexity of the computation, as only one third of the pixels are processed. This is especially important because our tone mapping operator applies both local and global tone corrections, which is usually needed to render high dynamic range scenes well. Our algorithms efficiently process HDR images with different keys and different content.

  17. Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera

    NASA Astrophysics Data System (ADS)

    Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu

    2016-09-01

    We performed an achromatic imaging experiment with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under both visible and infrared light. As a result, the removal of chromatic aberration by the WFC system was successfully confirmed. Moreover, we fabricated a demonstration setup assuming the use of a night vision camera in an automobile and showed the effect of the WFC system.

  18. Whole-field thickness strain measurement using multiple camera digital image correlation system

    NASA Astrophysics Data System (ADS)

    Li, Junrui; Xie, Xin; Yang, Guobiao; Zhang, Boyang; Siebert, Thorsten; Yang, Lianxiang

    2017-03-01

    Three-dimensional digital image correlation (3D-DIC) has been widely used in industry, especially for strain measurement. A traditional 3D-DIC system can accurately obtain the whole-field 3D deformation, but only on a single surface, so information in the depth direction is missing and the strain in the thickness direction cannot be measured. In recent years, multiple-camera DIC (multi-camera DIC) systems have become a new research topic, offering far more measurement possibilities than the conventional 3D-DIC system. In this paper, a multi-camera DIC system for measuring whole-field thickness strain is introduced in detail. Four cameras are used: two are placed at the front side of the object, and the other two at the back side. Each pair of cameras constitutes a stereo-vision subsystem and measures the whole-field 3D deformation on one side of the object. A special calibration plate is used to calibrate the system, and the information from the two subsystems is linked by the calibration result. Whole-field thickness strain can be measured using the information obtained from both sides of the object. Additionally, the major and minor strains on the object surface are obtained simultaneously, and a whole-field quasi-3D strain history is acquired. The theoretical derivation for the system, the experimental process, and an application determining the thinning strain limit from the obtained whole-field thickness strain history are presented in detail.
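
    With one DIC subsystem on each side of a sheet, the out-of-plane displacements of the front and back surfaces give the thickness change directly. A minimal sketch (illustrative numbers and sign convention: each displacement is positive along that surface's outward normal):

```python
import numpy as np

def thickness_strain(w_front, w_back, t0):
    """Engineering strain in the thickness direction.
    w_front, w_back: out-of-plane displacement fields (mm), each positive
    along its surface's outward normal; t0: initial thickness (mm).
    Thinning gives negative strain."""
    return (w_front + w_back) / t0

w_f = np.full((3, 3), -0.01)  # both surfaces move inward by 10 microns
w_b = np.full((3, 3), -0.01)
eps = thickness_strain(w_f, w_b, t0=2.0)
print(eps[0, 0])  # -0.01
```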

  19. Reflection imaging in the millimeter-wave range using a video-rate terahertz camera

    NASA Astrophysics Data System (ADS)

    Marchese, Linda E.; Terroux, Marc; Doucet, Michel; Blanchard, Nathalie; Pancrati, Ovidiu; Dufour, Denis; Bergeron, Alain

    2016-05-01

    The ability of millimeter waves (1-10 mm, or 30-300 GHz) to penetrate dense materials, such as leather, wool, wood, and gyprock, and to transmit over long distances due to low atmospheric absorption makes them ideal for numerous applications, such as body scanning, building inspection, and seeing in degraded visual environments. Current drawbacks of millimeter-wave imaging systems are that they use single detectors or linear arrays that require scanning, or that their two-dimensional arrays are bulky, often consisting of rather large antenna-coupled focal plane arrays (FPAs). Previous work from INO has demonstrated the capability of its compact lightweight camera, based on a 384 x 288 microbolometer-pixel FPA with custom optics, for active video-rate imaging at wavelengths of 118 μm (2.54 THz), 432 μm (0.69 THz), 663 μm (0.45 THz), and 750 μm (0.4 THz). Most of that work focused on transmission imaging as a first step, but some preliminary demonstrations of reflection imaging at these wavelengths were also reported. In addition, previous work showed that the broadband FPA remains sensitive to wavelengths at least up to 3.2 mm (94 GHz). The work presented here demonstrates the ability of the INO terahertz camera for reflection imaging at millimeter wavelengths. Snapshots of objects taken at video rates show the excellent quality of the images. In addition, a description of the imaging system, which includes the terahertz camera and different millimeter-wave sources, is provided.

  20. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  1. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body with a CMOS sensor of 14.8 million pixels. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  2. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria, the frequent (about one per hour) explosions and continuous lava flow effusion may be observed from a vantage point at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we used cameras to measure dome surface movements over a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a long-period seismic response and a bimodal infrasound pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time-lapse imagery at one-minute intervals. These longer time scales reveal dynamic structure in the uplift and subsidence trends, effusion rate, and surface flow patterns that is related to internal conduit pressurization. In 2012 we also performed particle image velocimetry with multiple, spatially separated cameras in order to reconstruct 3-dimensional surface movements.
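
    Patch tracking in the spatial frequency domain is commonly done by phase correlation: the normalized cross-power spectrum of two patches has a sharp peak at their relative shift. A minimal integer-pixel sketch (the synthetic patches are illustrative; the study's implementation details may differ):

```python
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    """Integer-pixel displacement of patch_b relative to patch_a via
    phase correlation in the spatial-frequency domain."""
    Fa = np.fft.fft2(patch_a)
    Fb = np.fft.fft2(patch_b)
    cross = np.conj(Fa) * Fb
    corr = np.fft.ifft2(cross / np.abs(cross)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-space to negative shifts
    if dy > patch_a.shape[0] // 2:
        dy -= patch_a.shape[0]
    if dx > patch_a.shape[1] // 2:
        dx -= patch_a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
a = rng.uniform(size=(32, 32))
b = np.roll(a, (3, -5), axis=(0, 1))  # known displacement
print(phase_correlation_shift(a, b))  # (3, -5)
```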

  3. Feasibility of monitoring patient motion with opposed stereo infrared cameras during supine medical imaging

    NASA Astrophysics Data System (ADS)

    Beach, Richard D.; McNamara, Joseph E.; Terlecki, George; King, Michael A.

    2006-10-01

    Patient motion during single photon emission computed tomographic (SPECT) acquisition causes inconsistent projection data and reconstruction artifacts which can significantly affect diagnostic accuracy. We have investigated use of the Polaris stereo infrared motion-tracking system to track 6-Degrees-of-Freedom (6-DOF) motion of spherical reflectors (markers) on stretchy bands about the patient's chest and abdomen during cardiac SPECT imaging. The marker position information, obtained by opposed stereo infrared-camera systems, requires processing to correctly record tracked markers, and map Polaris co-ordinate data into the SPECT co-ordinate system. One stereo camera views the markers from the patient's head direction, and the other from the patient's foot direction. The need for opposed cameras is to overcome anatomical and geometrical limitations which sometimes prevent all markers from being seen by a single stereo camera. Both sets of marker data are required to compute rotational and translational 6-DOF motion of the patient which ultimately will be used for SPECT patient-motion corrections. The processing utilizes an algorithm involving least-squares fitting, to each other, of two 3-D point sets using singular value decomposition (SVD) resulting in the rotation matrix and translation of the rigid body centroid. We have previously demonstrated the ability to monitor multiple markers for twelve patients viewing from the foot end, and employed a neural network to separate the periodic respiratory motion component of marker motion from aperiodic body motion. We plan to initiate routine 6-DOF tracking of patient motion during SPECT imaging in the future, and are herein evaluating the feasibility of employing opposed stereo cameras.
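The SVD-based least-squares fit of two 3-D point sets described above (rotation matrix plus translation of the rigid-body centroid) can be sketched as follows; the function name and reflection guard are illustrative, not the authors' code.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid-body transform (R, t) mapping point set P onto Q.

    P, Q: (N, 3) arrays of corresponding marker positions.
    Returns rotation R (3x3) and translation t (3,) so that Q ~= P @ R.T + t.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP                              # translation of the centroid
    return R, t
```

Applied per time step to the tracked marker coordinates, R and t give the rotational and translational components of the 6-DOF body motion used for the correction.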

  4. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    NASA Astrophysics Data System (ADS)

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-François; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-07-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
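Deriving an emission rate from a calibrated SO2 column-density image comes down to summing column densities along a transect perpendicular to the transport direction and multiplying by the plume speed; a minimal sketch under assumed units (function names and numbers are illustrative, not any group's retrieval code):

```python
import numpy as np

AVOGADRO = 6.02214076e23   # molecules/mol
M_SO2 = 0.064              # kg/mol

def so2_emission_rate(cross_section, pixel_width_m, plume_speed_ms):
    """Emission rate in kg/s.

    cross_section: 1-D array of SO2 column densities (molecules/m^2)
    sampled along an image transect perpendicular to the plume axis.
    """
    molecules_per_metre = np.sum(cross_section) * pixel_width_m
    return molecules_per_metre * plume_speed_ms * M_SO2 / AVOGADRO

def kgs_to_tpd(rate_kgs):
    """Convert kg/s to metric tonnes per day (the t/d unit quoted above)."""
    return rate_kgs * 86400.0 / 1000.0
```

As the abstract notes, the dominant differences between systems enter through the plume-velocity determination and the choice of spatial integration, not this arithmetic.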

  5. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    USGS Publications Warehouse

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.

  6. A highly reliable and budget-friendly Peltier-cooled camera for biological fluorescence imaging microscopy.

    PubMed

    Jolling, Koen; Vandeven, Martin; Van den Eynden, Jimmy; Ameloot, Marcel; Van Kerkhove, Emmy

    2007-12-01

    The SAC8.5, a low-cost Peltier-cooled black and white 8-bit CCD camera for astronomy, was evaluated for its use in imaging microscopy. Two camera-microscope configurations were used: an epifluorescence microscope (Nikon Eclipse TE2000-U) and a bottom port laser scanning confocal microscope system (Zeiss LSCM 510 META). Main advantages of the CCD camera over the currently used photomultiplier detection in the scanning setup are fast image capturing, stable background, an improved signal-to-noise ratio and good linearity. Based on DAPI-labelled Chinese Hamster Ovarian cells, the signal-to-noise ratio was estimated to be 4 times higher with respect to the currently used confocal photomultiplier detector. A linear relationship between the fluorescence signal and the FITC-inulin concentrations ranging from 0.05 to 1.8 mg mL(-1) could be established. With the SAC8.5 CCD camera and using DAPI, calcein-AM and propidium iodide we could also distinguish between viable, apoptotic and necrotic cells: exposure to CdCl(2) caused necrosis in A6 cells. Additional examples include the observation of wire-like mitochondrial networks in Mito Tracker Green-loaded Madin-Darby canine kidney cells. Furthermore, it is straightforward to interface the SAC8.5 with automated shutters to prevent rapid fluorophore photobleaching via easy to use astrovideo software. In this study, we demonstrate that the SAC8.5 black and white CCD camera is an easy-to-implement and cost-conscious addition to quantitative fluorescence microfluorimetry on living tissues and is suitable for teaching laboratories.
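The linearity claim above (fluorescence signal vs. FITC-inulin concentration between 0.05 and 1.8 mg/mL) reduces to a straight-line fit and a coefficient of determination; the calibration numbers below are invented purely for illustration.

```python
import numpy as np

# Hypothetical calibration data: FITC-inulin concentration (mg/mL)
# vs. mean background-subtracted camera signal (arbitrary counts).
conc = np.array([0.05, 0.20, 0.45, 0.90, 1.35, 1.80])
signal = np.array([12.0, 48.0, 110.0, 218.0, 330.0, 440.0])

slope, intercept = np.polyfit(conc, signal, 1)   # least-squares line
pred = slope * conc + intercept
r2 = 1.0 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)
```

An r2 close to 1 over the full concentration range is what supports a claim of detector linearity.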

  7. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as areas and volumes, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which, once corrected, can yield a usable thigh volume measurement.
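The repeatability figures quoted (TEM and %TEM) come from paired repeat measurements; a minimal sketch using the standard anthropometric formula, with invented volumes (this is not the authors' script):

```python
import numpy as np

def tem(trial1, trial2):
    """Technical error of measurement for two repeated trials."""
    d = np.asarray(trial1, float) - np.asarray(trial2, float)
    return np.sqrt(np.sum(d ** 2) / (2.0 * d.size))

def pct_tem(trial1, trial2):
    """Relative TEM (%), normalized by the grand mean of both trials."""
    grand_mean = np.mean(np.concatenate([np.asarray(trial1, float),
                                         np.asarray(trial2, float)]))
    return 100.0 * tem(trial1, trial2) / grand_mean
```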

  8. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their application in diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  9. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their application in diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  10. Cardiac cameras.

    PubMed

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner, which slowly transmitted cardiac count distributions row by row onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and the development of powerful computers to analyze, display, and quantify data have been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid-state cameras, fundamentally different from the Anger camera, show promise of higher counting efficiency and resolution, leading to better image quality, more patient comfort, and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  11. Advances In The Image Sensor: The Critical Element In The Performance Of Cameras

    NASA Astrophysics Data System (ADS)

    Narabu, Tadakuni

    2011-01-01

    Digital imaging technology and digital imaging products are advancing at a rapid pace. The progress of digital cameras has been particularly impressive. Image sensors now have smaller pixel size, a greater number of pixels, higher sensitivity, lower noise and a higher frame rate. Picture resolution is a function of the number of pixels of the image sensor. The more pixels there are, the smaller each pixel becomes, but the sensitivity and charge-handling capability of each pixel can be maintained or even increased by raising the quantum efficiency and the saturation capacity of the pixel per unit area. Sony's many technologies can be successfully applied to CMOS image sensor manufacturing toward sub-2.0 um pixel pitch and beyond.

  12. A correction method of the spatial distortion in planar images from γ-Camera systems

    NASA Astrophysics Data System (ADS)

    Thanasas, D.; Georgiou, E.; Giokaris, N.; Karabarbounis, A.; Maintas, D.; Papanicolas, C. N.; Polychronopoulou, A.; Stiliaris, E.

    2009-06-01

    A methodology for correcting spatial distortions in planar images for small Field Of View (FOV) γ-Camera systems based on Position Sensitive Photomultiplier Tubes (PSPMT) and pixelated scintillation crystals is described. The process utilizes a correction matrix whose elements are derived from a prototyped planar image obtained through irradiation of the scintillation crystal by a 60Co point source and without a collimator. The method was applied to several planar images of a SPECT experiment with a simple phantom construction at different detection angles. The tomographic images are obtained using the Maximum-Likelihood Expectation-Maximization (MLEM) reconstruction technique. Corrected and uncorrected images are compared and the applied correction methodology is discussed.
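The MLEM reconstruction named above iterates a multiplicative update of the activity estimate; a bare-bones dense-matrix sketch (a real SPECT system matrix is sparse, and the distortion-corrected projections would feed in as the measured data):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-Likelihood Expectation-Maximization for y ~ Poisson(A @ x).

    A: (n_bins, n_voxels) system matrix; y: measured projection counts.
    Returns the non-negative activity estimate x.
    """
    x = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / sens          # multiplicative update
    return x
```

The multiplicative form keeps the estimate non-negative at every iteration, which is why MLEM is favored for low-count emission data.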

  13. Diffuse reflection imaging of sub-epidermal tissue haematocrit using a simple RGB camera

    NASA Astrophysics Data System (ADS)

    Leahy, Martin J.; O'Doherty, Jim; McNamara, Paul; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Sjoberg, Folke

    2007-05-01

    This paper describes the design and evaluation of a novel, easy-to-use tissue viability imaging system (TiVi). The system is based on the methods of diffuse reflectance spectroscopy and polarization spectroscopy. The technique has been developed as an alternative to current imaging technology in the area of microcirculation imaging, most notably optical coherence tomography (OCT) and laser Doppler perfusion imaging (LDPI). The system is based on standard digital camera technology, and is sensitive to red blood cells (RBCs) in the microcirculation. Lack of clinical acceptance of both OCT and LDPI fuels the need for an objective, simple, reproducible and portable imaging method that can provide accurate measurements related to stimulus vasoactivity in the microvasculature. The limitations of these technologies are discussed in this paper. Uses of the Tissue Viability system include skin care products, drug development, and assessment of spatial and temporal aspects of vasodilation (erythema) and vasoconstriction (blanching).

  14. Can We Trust the Use of Smartphone Cameras in Clinical Practice? Laypeople Assessment of Their Image Quality

    PubMed Central

    Boissin, Constance; Fleming, Julian; Wallis, Lee; Hasselberg, Marie

    2015-01-01

    Abstract Background: Smartphone cameras are rapidly being introduced in medical practice, among other devices for image-based teleconsultation. Little is known, however, about the actual quality of the images taken, which is the object of this study. Materials and Methods: A series of nonclinical objects (from three broad categories) were photographed by a professional photographer using three smartphones (iPhone® 4 [Apple, Cupertino, CA], Samsung [Suwon, Korea] Galaxy S2, and BlackBerry® 9800 [BlackBerry Ltd., Waterloo, ON, Canada]) and a digital camera (Canon [Tokyo, Japan] Mark II). In a Web survey, a convenience sample of 60 laypeople “blind” to the camera types assessed the quality of the photographs, both individually and as best overall. We then measured how each camera scored, by object category and as a whole, and whether a camera ranked best, using a Mann–Whitney U test for 2×2 comparisons. Results: There were wide variations between and within categories in the quality assessments for all four cameras. The iPhone had the highest proportion of images individually evaluated as good, and it also ranked best for more objects compared with the other cameras, including the digital one. The ratings of the Samsung and BlackBerry smartphones did not significantly differ from those of the digital camera. Conclusions: Whereas one smartphone camera ranked best more often, all three smartphones obtained results at least as good as those of the digital camera. Smartphone cameras can be a substitute for digital cameras for the purposes of medical teleconsultation. PMID:26076033
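The pairwise camera comparisons rest on the Mann–Whitney U statistic, which is straightforward to compute from pooled ranks; a pure-NumPy sketch with tie handling (for p-values one would normally reach for scipy.stats.mannwhitneyu; any rating data fed in here would be illustrative):

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic for sample x relative to sample y (average ranks for ties)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(pooled.size)
    ranks[order] = np.arange(1, pooled.size + 1)
    for v in np.unique(pooled):               # average the ranks of tied values
        tied = pooled == v
        ranks[tied] = ranks[tied].mean()
    rank_sum_x = ranks[: x.size].sum()
    return rank_sum_x - x.size * (x.size + 1) / 2.0
```

U ranges from 0 (every x below every y) to n1*n2 (every x above every y), with n1*n2/2 indicating no tendency either way.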

  15. ROSA: A High-cadence, Synchronized Multi-camera Solar Imaging System

    NASA Astrophysics Data System (ADS)

    Christian, Damian Joseph; Jess, D. B.; Mathioudakis, M.; Keenan, F. P.

    2011-05-01

    The Rapid Oscillations in the Solar Atmosphere (ROSA) instrument is a synchronized, six-camera high-cadence solar imaging instrument developed by Queen's University Belfast and recently commissioned at the Dunn Solar Telescope at the National Solar Observatory in Sunspot, New Mexico, USA, as a common-user instrument. Consisting of six 1k x 1k Peltier-cooled frame-transfer CCD cameras with very low noise (0.02 - 15 e/pixel/s), each ROSA camera is capable of full-chip readout speeds in excess of 30 Hz, and up to 200 Hz when the CCD is windowed. ROSA will allow for multi-wavelength studies of the solar atmosphere at a high temporal resolution. We will present the current instrument set-up and parameters, observing modes, and future plans, including a new high QE camera allowing 15 Hz for Halpha. Interested parties should see https://habu.pst.qub.ac.uk/groups/arcresearch/wiki/de502/ROSA.html

  16. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    SciTech Connect

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R; Smolkin, Mark E; Rehm, Patrice K; Majewski, Stan; Williams, Mark B; Slingluff Jr., Craig L

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc-99m sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe, followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which the MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. The mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing false negatives of SLNB for melanoma.

  17. Acquisition of Diagnostic Screen and Synchrotron Radiation Images Using IEEE1394 Digital Cameras

    NASA Astrophysics Data System (ADS)

    Rehm, G.

    2004-11-01

    In the LINAC, booster synchrotron and transfer lines of DIAMOND a number of screens (YAG:Ce and OTR) as well as synchrotron radiation ports will be used to acquire information about the transverse beam distribution. Digital IEEE1394 cameras have been selected for their range of sensor sizes and resolutions available, their easy triggering to single events, and their noise-free transmission of the images into the control system. Their suitability for use under influence of high-energy radiation has been verified. Images from preliminary tests at the SRS Daresbury are presented.

  18. ROPtool analysis of images acquired using a noncontact handheld fundus camera (Pictor)--a pilot study.

    PubMed

    Vickers, Laura A; Freedman, Sharon F; Wallace, David K; Prakalapakorn, S Grace

    2015-12-01

    The presence of plus disease is the primary indication for treatment of retinopathy of prematurity (ROP), but its diagnosis is subjective and prone to error. ROPtool is a semiautomated computer program that quantifies vascular tortuosity and dilation. Pictor is an FDA-approved, noncontact, handheld digital fundus camera. This pilot study evaluated ROPtool's ability to analyze high-quality Pictor images of premature infants and its accuracy in diagnosing plus disease compared to clinical examination. In our small sample of images, ROPtool could trace and identify the presence of plus disease with high accuracy.

  19. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etsuo; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  20. Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera

    NASA Astrophysics Data System (ADS)

    Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund

    2016-03-01

    We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data-acquisition hardware, is faster than electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, each several centimeters in size, with millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used to simulate a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined at each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.
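One of the quality metrics listed, the modulation transfer function, can be read off a simulated point spread function as the normalized magnitude of its Fourier transform; a NumPy sketch (grid size and sampling are placeholders, and this stands in for, rather than reproduces, the MATLAB toolbox the authors used):

```python
import numpy as np

def mtf_from_psf(psf, dx):
    """MTF of a sampled 2-D PSF.

    psf: (N, N) array centred on the grid; dx: sample spacing (e.g. mm).
    Returns the spatial-frequency axis and the normalized |OTF|.
    """
    # Shift the PSF centre to the array origin before transforming.
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf) / np.abs(otf).max()
    freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[0], d=dx))
    return freqs, mtf
```

A sanity check is that an ideal (delta-function) PSF yields a perfectly flat MTF of 1 at all spatial frequencies; any real lens rolls off from there.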

  1. Improvement of a snapshot spectroscopic retinal multi-aperture imaging camera

    NASA Astrophysics Data System (ADS)

    Lemaillet, Paul; Lompado, Art; Ramella-Roman, Jessica C.

    2009-02-01

    Measurement of oxygen saturation has proved to give important information about eye health and the onset of eye pathologies such as diabetic retinopathy. Recently, we presented a multi-aperture system enabling snapshot acquisition of human fundus images at six different wavelengths. In our setup a commercial fundus ophthalmoscope was interfaced with the multi-aperture system to acquire spectroscopically sensitive images of the retinal vessels, thus enabling assessment of the oxygen saturation in the retina. Snapshot spectroscopic acquisition is meant to minimize the effects of eye movements. Higher measurement accuracy can be achieved by increasing the number of wavelengths at which the fundus images are taken. In this study we present an improvement of our setup: another multi-aperture camera that enables us to take snapshot images of the fundus at nine different wavelengths. Careful consideration is taken to improve image transfer by measuring the optical properties of the fundus camera used in the setup and modeling the optical train in Zemax.

  2. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
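The processing described, differencing range images over time to cancel static systematic errors and then estimating the periodic deflection, can be sketched per pixel with an FFT over the frame axis (frame rate, load frequency, and array shapes here are illustrative, not the study's parameters):

```python
import numpy as np

def periodic_amplitude(stack, fps, load_freq):
    """Amplitude map (same units as the range data) of the sinusoidal
    deflection at the cyclic-loading frequency.

    stack: (T, H, W) array of range images; differencing against the first
    frame removes static systematic errors such as internal-scattering bias.
    """
    diff = stack - stack[0]
    T = diff.shape[0]
    spec = np.fft.rfft(diff, axis=0) / T          # one-sided spectrum per pixel
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    k = int(np.argmin(np.abs(freqs - load_freq)))  # bin nearest the load frequency
    return 2.0 * np.abs(spec[k])                   # single-sided amplitude
```

Recording an integer number of load cycles keeps the deflection energy in a single frequency bin, which is what makes this simple bin-picking estimate work.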

  3. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  4. 200 ps FWHM and 100 MHz repetition rate ultrafast gated camera for optical medical functional imaging

    NASA Astrophysics Data System (ADS)

    Uhring, Wilfried; Poulet, Patrick; Hanselmann, Walter; Glazenborg, René; Zint, Virginie; Nouizi, Farouk; Dubois, Benoit; Hirschi, Werner

    2012-04-01

    The paper describes the realization of a complete optical imaging device for clinical applications such as brain functional imaging by time-resolved, spectroscopic diffuse optical tomography. The entire instrument is assembled in a unique setup that includes a light source, an ultrafast time-gated intensified camera and all the electronic control units. The light source is composed of four near infrared laser diodes driven by a nanosecond electrical pulse generator working in a sequential mode at a repetition rate of 100 MHz. The resulting light pulses, at four wavelengths, are less than 80 ps FWHM. They are injected in a four-furcated optical fiber ended with a frontal light distributor to obtain a uniform illumination spot directed towards the head of the patient. Photons back-scattered by the subject are detected by the intensified CCD camera; they are resolved according to their time of flight inside the head. The very core of the intensified camera system is the image intensifier tube and its associated electrical pulse generator. The ultrafast generator produces 50 V pulses, at a repetition rate of 100 MHz and a width corresponding to the 200 ps requested gate. The photocathode and the Micro-Channel-Plate of the intensifier have been specially designed to enhance the electromagnetic wave propagation and reduce the power loss and heat that are prejudicial to the quality of the image. The whole instrumentation system is controlled by an FPGA based module. The timing of the light pulses and the photocathode gating is precisely adjustable with a step of 9 ps. All the acquisition parameters are configurable via software through a USB plug and the image data are transferred to a PC via an Ethernet link. The compactness of the system makes it well suited to bedside clinical applications.

  5. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced reference image and video quality solutions to detect and classify such faults.

  6. Single-camera microscopic stereo digital image correlation using a diffraction grating.

    PubMed

    Pan, Bing; Wang, Qiong

    2013-10-21

    A simple, cost-effective but practical microscopic 3D-DIC method using a single camera and a transmission diffraction grating is proposed for surface profile and deformation measurement of small-scale objects. By illuminating a test sample with a quasi-monochromatic source, the transmission diffraction grating placed in front of the camera projects two laterally spaced first-order diffraction views of the sample surface onto the two halves of the camera target. The single image comprising negative and positive first-order diffraction views can be used to reconstruct the profile of the test sample, while the two single images acquired before and after deformation can be employed to determine the 3D displacements and strains of the sample surface. The basic principles and implementation procedures of the proposed technique for microscopic 3D profile and deformation measurement are described in detail. The effectiveness and accuracy of the presented microscopic 3D-DIC method are verified by measuring the profile and 3D displacements of a regular cylinder surface.
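    The similarity metric at the core of any DIC method is the correlation between a reference subset and a deformed subset. A minimal sketch of that generic building block (zero-normalized cross-correlation, which is invariant to linear intensity changes), not the paper's full 3D reconstruction:

```python
import numpy as np

def zncc(ref_subset, def_subset):
    """Zero-normalized cross-correlation between two image subsets.
    Returns 1.0 for a perfect match, and is insensitive to uniform
    offsets and gains in intensity (a standard DIC building block)."""
    f = np.asarray(ref_subset, float).ravel()
    g = np.asarray(def_subset, float).ravel()
    fz = f - f.mean()
    gz = g - g.mean()
    return float(fz @ gz / np.sqrt((fz @ fz) * (gz @ gz)))
```

    In a full DIC pipeline this score is maximized over candidate subset displacements (and shape-function parameters) to track each surface point.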

  7. The computation of cloud base height from paired whole-sky imaging cameras

    SciTech Connect

    Allmen, M.C.; Kegelmeyer, W.P. Jr.

    1994-03-01

    A major goal for global change studies is to improve the accuracy of general circulation models (GCMs) capable of predicting the timing and magnitude of greenhouse gas-induced global warming. Research has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climate responses to human activity. Of particular value to reducing the uncertainties associated with cloud-radiation interactions is the measurement of cloud base height (CBH), both because it is a dominant factor in determining the infrared radiative properties of clouds with respect to the earth's surface and lower atmosphere and because CBHs are essential to measuring cloud cover fraction. We have developed a novel approach to the extraction of cloud base height from pairs of whole sky imaging (WSI) cameras. The core problem is to spatially register cloud fields from widely separated WSI cameras; once this is complete, triangulation provides the CBH measurements. The wide camera separation (necessary to cover the desired observation area) and the self-similarity of clouds defeat all standard matching algorithms when applied to static views of the sky. To address this, our approach is based on optical flow methods that exploit the fact that modern WSIs provide sequences of images. We will describe the algorithm and present its performance as evaluated both on real data validated by ceilometer measurements and on a variety of simulated cases.
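    Once a cloud feature is registered in both cameras, the triangulation step reduces to plane geometry. A minimal sketch under simplifying assumptions (the feature lies in the vertical plane through both cameras, on the same side of both, so the two zenith angles differ):

```python
import math

def cloud_base_height(baseline_m, zenith1_deg, zenith2_deg):
    """Triangulate cloud base height from the zenith angles at which two
    separated sky cameras see the same matched cloud feature.
    Horizontal offsets from each camera are x1 = h*tan(z1), x2 = h*tan(z2),
    and x1 - x2 equals the baseline, so h = baseline / (tan z1 - tan z2)."""
    t1 = math.tan(math.radians(zenith1_deg))
    t2 = math.tan(math.radians(zenith2_deg))
    return baseline_m / (t1 - t2)
```

    The real WSI geometry is fully 3-D and uses the calibrated pixel-to-direction mapping of each camera; this sketch only illustrates why registration plus known baseline suffices.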

  8. Measurement of food volume based on single 2-D image without conventional camera calibration.

    PubMed

    Yue, Yaofeng; Jia, Wenyan; Sun, Mingui

    2012-01-01

    Food portion size measurement combined with a database of calories and nutrients is important in the study of metabolic disorders such as obesity and diabetes. In this work, we present a convenient and accurate approach to the calculation of food volume by measuring several dimensions using a single 2-D image as the input. This approach does not require the conventional checkerboard based camera calibration since it is burdensome in practice. The only prior requirements of our approach are: 1) a circular container with a known size, such as a plate, a bowl or a cup, is present in the image, and 2) the picture is taken under a reasonable assumption that the camera is always held level with respect to its left and right sides and its lens is tilted down towards foods on the dining table. We show that, under these conditions, our approach provides a closed form solution to camera calibration, allowing convenient measurement of food portion size using digital pictures.
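    The key idea above is that a container of known diameter provides the metric scale for the whole image. A simplified sketch (not the paper's closed-form calibration) for a roughly cylindrical food item, assuming the plate's true diameter is preserved along the major axis of its elliptical image and that the food height has already been tilt-corrected:

```python
import math

def food_volume_cylinder(plate_diam_mm, plate_major_px,
                         food_diam_px, food_height_px):
    """Estimate the volume (mm^3) of a roughly cylindrical food item from
    a single photo containing a plate of known diameter. The plate image
    sets the mm-per-pixel scale; all pixel measurements are assumed to be
    taken in the plane where that scale is valid."""
    mm_per_px = plate_diam_mm / plate_major_px
    r = 0.5 * food_diam_px * mm_per_px
    h = food_height_px * mm_per_px
    return math.pi * r * r * h
```

    The paper's actual method additionally recovers the camera tilt from the container's elliptical projection, which is what removes the need for checkerboard calibration.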

  9. Imaging and radiometric performance simulation for a new high-performance dual-band airborne reconnaissance camera

    NASA Astrophysics Data System (ADS)

    Seong, Sehyun; Yu, Jinhee; Ryu, Dongok; Hong, Jinsuk; Yoon, Jee-Yeon; Kim, Sug-Whan; Lee, Jun-Ho; Shin, Myung-Jin

    2009-05-01

    In recent years, high performance visible and IR cameras have been used widely for tactical airborne reconnaissance. The process improvement for efficient discrimination and analysis of complex target information from active battlefields requires simultaneous multi-band measurement from airborne platforms at various altitudes. We report a new dual band airborne camera designed for simultaneous registration of both visible and IR imagery from mid-altitude ranges. The camera design uses a common front end optical telescope of around 0.3 m in entrance aperture and several relay optical sub-systems capable of delivering both high spatial resolution visible and IR images to the detectors. The camera design benefits from the use of several optical channels packaged in a compact space and the associated freedom to choose between wide (~3 degrees) and narrow (~1 degree) fields of view. In order to investigate both imaging and radiometric performances of the camera, we generated an array of target scenes with optical properties such as reflection, refraction, scattering, transmission and emission. We then combined the target scenes and the camera optical system into an integrated ray tracing simulation environment utilizing the Monte Carlo computation technique. Taking realistic atmospheric radiative transfer characteristics into account, both imaging and radiometric performances were then investigated. The simulation results demonstrate successfully that the camera design satisfies the NIIRS 7 detection criterion. The camera concept, details of the performance simulation computation, and the resulting performances are discussed together with the future development plan.

  10. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. That way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the process of creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of the fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  11. Measuring SO2 ship emissions with an ultra-violet imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.

    2013-11-01

    Over the last few years fast-sampling ultra-violet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical fluxes ~1-10 kg s-1) and natural sources (e.g. volcanoes; typical fluxes ~10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and fluxes. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and fluxes of SO2 (typical fluxes ~0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the fluxes and path concentrations can be retrieved in real-time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where emissions from cruise ships were measured, and at the port of Rotterdam, Netherlands, measuring emissions from more than 10 different container and cargo ships. In all cases SO2 path concentrations could be estimated and fluxes determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast-sampling (>10 Hz) from a single camera. Typical accuracies ranged from 10-30% in path concentration and 10-40% in flux estimation. Despite the ease of use and ability to determine SO2 fluxes from the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that currently the technology needs further development to serve as a method to monitor ship emissions for regulatory purposes.
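    The flux computation described above follows the standard UV-camera recipe: integrate the retrieved slant path concentration along a transect across the plume, then multiply by the plume speed. A minimal sketch (unit choices are assumptions, not from the paper):

```python
def so2_flux_kg_s(path_conc_g_m2, pixel_width_m, plume_speed_m_s):
    """SO2 mass flux (kg/s) from a UV-camera retrieval.
    path_conc_g_m2: slant path concentrations (g/m^2) sampled along a
    transect perpendicular to plume transport; pixel_width_m: ground
    width of one transect sample; plume_speed_m_s: plume transport speed
    (from image cross-correlation or independent wind data)."""
    cross_section_g_per_m = sum(path_conc_g_m2) * pixel_width_m
    return cross_section_g_per_m * plume_speed_m_s / 1000.0
```

    In practice the plume speed itself is the dominant error source, which is why the paper derives it both from the image sequence and from independent wind data.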

  12. Non-contact imaging of venous compliance in humans using an RGB camera

    NASA Astrophysics Data System (ADS)

    Nakano, Kazuya; Satoh, Ryota; Hoshi, Akira; Matsuda, Ryohei; Suzuki, Hiroyuki; Nishidate, Izumi

    2015-04-01

    We propose a technique for non-contact imaging of venous compliance that uses a red, green, and blue (RGB) camera. Any change in blood concentration is estimated from an RGB image of the skin, and a regression formula is calculated from that change. Venous compliance is obtained from a differential form of the regression formula. In vivo experiments with human subjects confirmed that the proposed method does differentiate the venous compliances among individuals. In addition, an image of venous compliance is obtained by performing the above procedures for each pixel. Thus, we can measure venous compliance without physical contact with sensors and, from the resulting images, observe the spatial distribution of venous compliance, which correlates with the distribution of veins.
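    The "differential form of the regression formula" step can be illustrated generically: fit blood concentration against venous pressure and evaluate the derivative of the fit (compliance is the pressure derivative of volume-related quantities). A sketch under assumed inputs; the polynomial degree and variable names are illustrative, not the paper's:

```python
import numpy as np

def compliance_from_regression(pressure, blood_conc, deg=2):
    """Fit a polynomial regression of estimated blood concentration vs.
    pressure, then return its derivative evaluated at each pressure
    point, i.e. the compliance curve implied by the regression."""
    coeffs = np.polyfit(pressure, blood_conc, deg)
    return np.polyval(np.polyder(coeffs), pressure)
```

    Applied per pixel, this yields the compliance image the abstract describes.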

  13. Design and realization of an image mosaic system on the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Wang, Peng; Zhu, Hai bin; Li, Yan; Zhang, Shao jun

    2015-08-01

    Stitching multi-route images into a panoramic image in real time has long been difficult in aerial photography, given the very large data volumes of multi-route flight framing CCD cameras and high accuracy requirements. An automatic aerial image mosaic system based on a GPU development platform is described in this paper. Parallel computing of the SIFT feature extraction and matching algorithm module is achieved by using CUDA technology for motion model parameter estimation on the platform, which makes it possible to stitch multiple CCD images in real time. Aerial tests proved that the mosaic system meets the user's requirements with 99% accuracy and a 30- to 50-fold speed improvement over a conventional mosaic system.
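    The motion-model parameter estimation step named above is, at its core, a linear least-squares fit to matched feature points. A minimal sketch for a 2-D affine motion model (feature extraction and GPU parallelization are not shown; the affine choice is an assumption, as the paper does not state its model):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares estimate of a 2-D affine motion model from matched
    feature points (e.g. SIFT matches), solving dst = A @ src + t for the
    six parameters [a11, a12, tx, a21, a22, ty]."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    n = len(src)
    # Two rows per correspondence: [x y 1 0 0 0] and [0 0 0 x y 1].
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params
```

    With the motion parameters known, each image is warped into the common mosaic frame and blended; that warp is what the CUDA kernels accelerate.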

  14. High performance gel imaging with a commercial single lens reflex camera

    NASA Astrophysics Data System (ADS)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  15. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in a gradient from 0 N to 500 N, which simulated the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with Image J, digital image processing software that can be freely downloaded from the National Institutes of Health. The procedure includes recognition of the digital marker, image inversion, sub-pixel reconstruction, image segmentation, and a center of mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and micro-angular rotation of the sacroiliac joint in the lateral view were calculated according to the marker movement. The results of digital image measurement showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
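    The sub-pixel marker-localization step above (center of mass based on a weighted average of pixel gray values) can be sketched in a few lines. A generic version for a dark-dot-on-white marker, with the patch inverted so the dot carries the weight (the inversion choice is an assumption about the paper's "image invert" step):

```python
import numpy as np

def marker_centroid(gray_patch):
    """Sub-pixel marker centre (x, y) via intensity-weighted centre of
    mass over pixel gray values. The patch is inverted so that the dark
    dot, not the white background, carries the weight."""
    g = np.asarray(gray_patch, float)
    w = g.max() - g                     # invert: dark dot -> high weight
    ys, xs = np.mgrid[0:g.shape[0], 0:g.shape[1]]
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total
```

    Because the centroid averages over many pixels, its precision can be far better than one pixel, which is how the reported 0.018-pixel precision is possible.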

  16. Background and imaging simulations for the hard X-ray camera of the MIRAX mission

    NASA Astrophysics Data System (ADS)

    Castro, M.; Braga, J.; Penacchioni, A.; D'Amico, F.; Sacahui, R.

    2016-07-01

    We report the results of detailed Monte Carlo simulations of the performance expected both at balloon altitudes and at the probable satellite orbit of a hard X-ray coded-aperture camera being developed for the Monitor e Imageador de RAios X (MIRAX) mission. Based on a thorough mass model of the instrument and detailed specifications of the spectra and angular dependence of the various relevant radiation fields in both the stratospheric and orbital environments, we have used the well-known package GEANT4 to simulate the instrumental background of the camera. We also show simulated images of source fields to be observed and calculate the detailed sensitivity of the instrument in both situations. The results reported here are especially important to researchers in this field because we provide information, not easily found in the literature, on how to prepare input files and calculate crucial instrumental parameters to perform GEANT4 simulations for high-energy astrophysics space experiments.

  17. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
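    The numerical core described above (Newton-Raphson iteration on an overdetermined system, with the Jacobian's pseudo-inverse obtained by singular value decomposition) can be sketched generically. The calibration model itself is not reproduced; `residual` and `jacobian` stand in for the user's camera model:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson (Gauss-Newton) iteration for an overdetermined
    nonlinear system solved in the least-squares sense. Each step applies
    the pseudo-inverse of the Jacobian (np.linalg.pinv uses the SVD
    internally, mirroring the paper's approach) to the residual vector."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r = residual(x)
        step = np.linalg.pinv(jacobian(x)) @ r
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```

    In the paper's setting, the initial guess `x0` comes from the direct linear transform, which is what makes the iteration converge reliably.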

  18. Quantitative Evaluation of Scintillation Camera Imaging Characteristics of Isotopes Used in Liver Radioembolization

    PubMed Central

    Elschot, Mattijs; Nijsen, Johannes Franciscus Wilhelmus; Dam, Alida Johanna; de Jong, Hugo Wilhelmus Antonius Maria

    2011-01-01

    Background Scintillation camera imaging is used for treatment planning and post-treatment dosimetry in liver radioembolization (RE). In yttrium-90 (90Y) RE, scintigraphic images of technetium-99m (99mTc) are used for treatment planning, while 90Y Bremsstrahlung images are used for post-treatment dosimetry. In holmium-166 (166Ho) RE, scintigraphic images of 166Ho can be used for both treatment planning and post-treatment dosimetry. The aim of this study is to quantitatively evaluate and compare the imaging characteristics of these three isotopes, in order that imaging protocols can be optimized and RE studies with varying isotopes can be compared. Methodology/Principal Findings Phantom experiments were performed in line with NEMA guidelines to assess the spatial resolution, sensitivity, count rate linearity, and contrast recovery of 99mTc, 90Y and 166Ho. In addition, Monte Carlo simulations were performed to obtain detailed information about the history of detected photons. The results showed that the use of a broad energy window and the high-energy collimator gave optimal combination of sensitivity, spatial resolution, and primary photon fraction for 90Y Bremsstrahlung imaging, although differences with the medium-energy collimator were small. For 166Ho, the high-energy collimator also slightly outperformed the medium-energy collimator. In comparison with 99mTc, the image quality of both 90Y and 166Ho is degraded by a lower spatial resolution, a lower sensitivity, and larger scatter and collimator penetration fractions. Conclusions/Significance The quantitative evaluation of the scintillation camera characteristics presented in this study helps to optimize acquisition parameters and supports future analysis of clinical comparisons between RE studies. PMID:22073149

  19. Megapixel imaging camera for expanded H⁻ beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H.; McCurnin, T.W.; Sanchez, P.G.

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with pixel size of 19 × 19 μm² and with four phase parallel clocking and two phase serial clocking. The serial clock rate is 2.5 × 10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5 × 10⁵ electron charges.

  20. Monte Carlo simulation of breast tumor imaging properties with compact, discrete gamma cameras

    SciTech Connect

    Gruber, G.J.; Moses, W.W.; Derenzo, S.E.

    1999-12-01

    The authors describe Monte Carlo simulation results for breast tumor imaging using a compact, discrete gamma camera. The simulations were designed to analyze and optimize camera design, particularly collimator configuration and detector pixel size. Simulated planar images of 5-15 mm diameter tumors in a phantom patient (including a breast, torso, and heart) were generated for imaging distances of 5-55 mm, pixel sizes of 2 × 2 to 4 × 4 mm², and hexagonal and square hole collimators with sensitivities from 4,000 to 16,000 counts/mCi/sec. Other factors considered included T/B (tumor-to-background tissue uptake ratio) and detector energy resolution. Image properties were quantified by computing the observed tumor fwhm (full-width at half-maximum) and S/N (sum of detected tumor events divided by the statistical noise). Results suggest that hexagonal and square hole collimators perform comparably, that higher sensitivity collimators provide higher tumor S/N with little increase in the observed tumor fwhm, that smaller pixels only slightly improve tumor fwhm and S/N, and that improved detector energy resolution has little impact on either the observed tumor fwhm or the observed tumor S/N.

  1. Retinal oximetry based on nonsimultaneous image acquisition using a conventional fundus camera.

    PubMed

    Kim, Sun Kwon; Kim, Dong Myung; Suh, Min Hee; Kim, Martha; Kim, Hee Chan

    2011-08-01

    To measure the retinal arteriole and venule oxygen saturation (SO(2)) using a conventional fundus camera, retinal oximetry based on nonsimultaneous image acquisition was developed and evaluated. Two retinal images were sequentially acquired using a conventional fundus camera with two bandpass filters (568 nm: isosbestic, 600 nm: nonisosbestic wavelength) used one after another in place of the built-in green filter. The images were registered to compensate for the differences caused by eye movements during the image acquisition. Retinal SO(2) was measured using two-wavelength oximetry. To evaluate the sensitivity of the proposed method, SO(2) in the arterioles and venules before and after inhalation of 100% O(2) was compared in 11 healthy subjects. After inhalation of 100% O(2), SO(2) increased from 96.0 ± 6.0% to 98.8 ± 7.1% in the arterioles (p=0.002) and from 54.0 ± 8.0% to 66.7 ± 7.2% in the venules (p=0.005) (paired t-test, n=11). Reproducibility of the method was 2.6% and 5.2% in the arterioles and venules, respectively (average standard deviation of five measurements, n=11).
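    Two-wavelength oximetry of the kind used above relies on the optical density ratio: the OD at the oxygen-sensitive wavelength divided by the OD at the isosbestic wavelength maps approximately linearly to SO2. A sketch with placeholder calibration constants (the paper's actual calibration values are not given here):

```python
import math

def two_wavelength_so2(i_vessel_568, i_bg_568, i_vessel_600, i_bg_600,
                       cal_a=1.17, cal_b=1.07):
    """Two-wavelength oximetry sketch. Optical density of a vessel at each
    wavelength is log10(I_background / I_vessel); SO2 is a linear function
    of the OD ratio OD(600)/OD(568). cal_a and cal_b are placeholder
    calibration constants, not values from the paper."""
    od568 = math.log10(i_bg_568 / i_vessel_568)   # isosbestic: O2-insensitive
    od600 = math.log10(i_bg_600 / i_vessel_600)   # O2-sensitive
    return cal_a - cal_b * (od600 / od568)
```

    The isosbestic measurement normalizes out vessel diameter and blood volume, which is why two wavelengths suffice for a relative saturation estimate.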

  2. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    PubMed Central

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 x 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm x 5.3 cm x 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm x 2.5 cm x 0.15 cm slab of CdZnTe connected to a 64 x 64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676

  3. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation

    PubMed Central

    Kazmi, S. M. Shams; Balial, Satyajit; Dunn, Andrew K.

    2014-01-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with comparable accuracy with the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as being no more than 10% deviant, with the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956
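    The looping leave-one-out search described above can be sketched as a greedy pruning loop: repeatedly drop the exposure whose removal keeps the flow-estimate deviation (relative to the full exposure set) smallest, stopping before the deviation exceeds the 10% threshold. This is a simplified sketch of the idea, not the paper's exact algorithm; `error_fn` stands in for the MESI re-analysis of a candidate subset:

```python
def prune_exposures(exposures, error_fn, max_deviation=0.10):
    """Greedy leave-one-out pruning of camera exposure durations.
    error_fn(subset) must return the relative deviation of flow estimates
    computed from `subset` versus the full exposure set."""
    current = list(exposures)
    while len(current) > 1:
        candidates = [current[:i] + current[i + 1:]
                      for i in range(len(current))]
        best = min(candidates, key=error_fn)
        if error_fn(best) > max_deviation:
            break                      # any further removal costs too much
        current = best
    return current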

  4. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, which includes PowerPC440 processors, we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
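The data-volume reduction rests on fitting each pixel's stream of raw samples to a small number of model parameters. A rough sketch of that idea, using a generic harmonic basis (the actual MSPI retarder-modulation basis and channel counts differ; the 64-sample, 3-parameter model here is purely illustrative):

```python
import numpy as np

# Hypothetical modulation basis: each pixel's time samples are modeled as a
# linear combination of a few known waveforms (DC plus two harmonics). The
# real MSPI basis differs; this only illustrates the on-board reduction.
n_samples = 64
t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
B = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])

rng = np.random.default_rng(0)
true_p = np.array([10.0, 1.5, -0.7])          # intensity + polarimetric terms
samples = B @ true_p + 0.01 * rng.standard_normal(n_samples)

# Reduction: 64 raw samples -> 3 fitted parameters per pixel, via least squares
p_hat, *_ = np.linalg.lstsq(B, samples, rcond=None)
print(np.round(p_hat, 2))
```

On the FPGA the same normal-equation arithmetic runs in fixed-point pipelines per pixel; the least-squares structure is what makes the reduction lossless with respect to the science parameters.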

  5. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging.

    PubMed

    Mejia, J; Galvis-Alonso, O Y; Castro, A A de; Braga, J; Leite, J P; Simões, M V

    2010-12-01

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and the hearts and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility of dynamic radiotracer studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology.
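The maximum-likelihood reconstruction mentioned above is commonly implemented as the ML-EM multiplicative update. A toy sketch with an arbitrary random system matrix (the real system matrix models the pinhole geometry; the sizes and values here are illustrative only):

```python
import numpy as np

# Toy ML-EM reconstruction of y = A x from noise-free projection data.
rng = np.random.default_rng(1)
n_pix, n_vox = 40, 16
A = rng.random((n_pix, n_vox))           # stand-in system matrix
x_true = np.zeros(n_vox)
x_true[5], x_true[11] = 4.0, 2.0         # two hot voxels
y = A @ x_true                            # measured projections

x = np.ones(n_vox)                        # uniform non-negative initial estimate
sens = A.sum(axis=0)                      # sensitivity (back-projection of ones)
for _ in range(500):
    ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted projections
    x *= (A.T @ ratio) / sens             # classic ML-EM multiplicative update

print(np.round(x, 2))                     # hot voxels recovered, rest near zero
```

The multiplicative form preserves non-negativity automatically, which is why ML-EM is the standard choice for emission tomography.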

  6. Blue camera of the Keck cosmic web imager, fabrication and testing

    NASA Astrophysics Data System (ADS)

    Rockosi, Constance; Cowley, David; Cabak, Jerry; Hilyard, David; Pfister, Terry

    2016-08-01

    The Keck Cosmic Web Imager (KCWI) is a new facility instrument being developed for the W. M. Keck Observatory and funded for construction by the Telescope System Instrumentation Program (TSIP) of the National Science Foundation (NSF). KCWI is a bench-mounted spectrograph for the Keck II right Nasmyth focal station, providing integral field spectroscopy over a seeing-limited field up to 20" x 33" in extent. Selectable Volume Phase Holographic (VPH) gratings provide high efficiency and spectral resolution in the range of 1000 to 20000. The dual-beam design of KCWI passed a Preliminary Design Review in summer 2011. The detailed design of the KCWI blue channel (350 to 700 nm) is now nearly complete, with the red channel (530 to 1050 nm) planned for a phased implementation contingent upon additional funding. KCWI builds on the experience of the Caltech team in implementing the Cosmic Web Imager (CWI), in operation since 2009 at Palomar Observatory. KCWI adds considerable flexibility to the CWI design, and will take full advantage of the excellent seeing and dark sky above Mauna Kea with a selectable nod-and-shuffle observing mode. In this paper, models of the expected KCWI sensitivity and background subtraction capability are presented, along with a detailed description of the instrument design. The KCWI team is led by Caltech (project management, design and implementation) in partnership with the University of California at Santa Cruz (camera optical and mechanical design) and the W. M. Keck Observatory (program oversight and observatory interfaces). The optical design of the blue camera for the Keck Cosmic Web Imager (KCWI), by Harland Epps of the University of California, Santa Cruz, is a lens assembly consisting of eight spherical optical elements. Half the elements are calcium fluoride and all elements are air spaced.
The design of the camera barrel is unique in that all the optics are secured in their respective cells with an RTV annulus, without additional hardware.

  7. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Comparison of image quality was made at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison to the reconstructed image quality at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051
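The FWHM-of-PSF metric used above is typically extracted by locating the half-maximum crossings of a measured profile. A small sketch on a synthetic Gaussian PSF (the profile, grid, and width are invented; real PSF data would come from a point-source acquisition):

```python
import numpy as np

def fwhm(x, y):
    """FWHM of a single-peaked profile via linear interpolation at half max."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the half-maximum crossing on each side of the peak
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return right - left

x = np.linspace(-5, 5, 201)
sigma = 0.8
psf = np.exp(-x ** 2 / (2 * sigma ** 2))    # synthetic Gaussian PSF
print(fwhm(x, psf))                          # analytic value is 2.3548 * sigma
```

Repeating this at each operating temperature gives the FWHM-versus-temperature curve the authors report.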

  8. Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera

    PubMed Central

    Kanda, Natsuki; Konishi, Kuniaki; Nemoto, Natsuki; Midorikawa, Katsumi; Kuwata-Gonokami, Makoto

    2017-01-01

    Terahertz (THz) imaging has a strong potential for applications because many molecules have fingerprint spectra in this frequency region. Spectroscopic imaging in the THz region is a promising technique to fully exploit this characteristic. However, the performance of conventional techniques is restricted by the requirement of multidimensional scanning, which implies an image data acquisition time of several minutes. In this study, we propose and demonstrate a novel broadband THz spectroscopic imaging method that enables real-time image acquisition using a high-sensitivity THz camera. By exploiting the two-dimensionality of the detector, a broadband multi-channel spectrometer near 1 THz was constructed with a reflection type diffraction grating and a high-power THz source. To demonstrate the advantages of the developed technique, we performed molecule-specific imaging and high-speed acquisition of two-dimensional (2D) images. Two different sugar molecules (lactose and D-fructose) were identified with fingerprint spectra, and their distributions in one-dimensional space were obtained at a fast video rate (15 frames per second). Combined with the one-dimensional (1D) mechanical scanning of the sample, two-dimensional molecule-specific images can be obtained only in a few seconds. Our method can be applied in various important fields such as security and biomedicine. PMID:28198395

  9. Estimate of DTM Degradation due to Image Compression for the Stereo Camera of the Bepicolombo Mission

    NASA Astrophysics Data System (ADS)

    Re, C.; Simioni, E.; Cremonese, G.; Roncella, R.; Forlani, G.; Langevin, Y.; Da Deppo, V.; Naletto, G.; Salemi, G.

    2016-06-01

    The great amount of data that will be produced during the imaging of Mercury by the stereo camera (STC) of the BepiColombo mission must be reconciled with the restrictions imposed by the downlink bandwidth, which could drastically reduce the duration and frequency of the observations. The implementation of an on-board real-time data compression strategy preserving as much information as possible is therefore mandatory. The degradation that image compression might cause to the DTM accuracy is worth investigating. During the stereo-validation procedure of the innovative STC imaging system, several image pairs of an anorthosite sample and a modelled piece of concrete were acquired under different illumination angles. This set of images was used to test the effects of the compression algorithm (Langevin and Forni, 2000) on the accuracy of the DTM produced by dense image matching. Different configurations, taking into account both the illumination of the surface and the compression ratio, were considered. The accuracy of the DTMs is evaluated by comparison with a high-resolution laser-scan acquisition of the same targets. The error assessment also included an analysis on the image plane, indicating the influence of the compression procedure on the image measurements.

  10. Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera

    NASA Astrophysics Data System (ADS)

    Kanda, Natsuki; Konishi, Kuniaki; Nemoto, Natsuki; Midorikawa, Katsumi; Kuwata-Gonokami, Makoto

    2017-02-01

    Terahertz (THz) imaging has a strong potential for applications because many molecules have fingerprint spectra in this frequency region. Spectroscopic imaging in the THz region is a promising technique to fully exploit this characteristic. However, the performance of conventional techniques is restricted by the requirement of multidimensional scanning, which implies an image data acquisition time of several minutes. In this study, we propose and demonstrate a novel broadband THz spectroscopic imaging method that enables real-time image acquisition using a high-sensitivity THz camera. By exploiting the two-dimensionality of the detector, a broadband multi-channel spectrometer near 1 THz was constructed with a reflection type diffraction grating and a high-power THz source. To demonstrate the advantages of the developed technique, we performed molecule-specific imaging and high-speed acquisition of two-dimensional (2D) images. Two different sugar molecules (lactose and D-fructose) were identified with fingerprint spectra, and their distributions in one-dimensional space were obtained at a fast video rate (15 frames per second). Combined with the one-dimensional (1D) mechanical scanning of the sample, two-dimensional molecule-specific images can be obtained only in a few seconds. Our method can be applied in various important fields such as security and biomedicine.

  11. Development of proton CT imaging system using plastic scintillator and CCD camera

    NASA Astrophysics Data System (ADS)

    Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru

    2016-06-01

    A proton computed tomography (pCT) imaging system was constructed to evaluate the error of the x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is captured with the CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table made beforehand, and a pCT image is reconstructed. A demonstration experiment of the pCT system was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with the spatial resolution of pCT images at the same level as that of xCT images. The pCT images of various substances were reconstructed to evaluate the pixel values of pCT images. The image quality was investigated with regard to deterioration caused by effects including multiple Coulomb scattering.
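The light-to-range conversion table amounts to a calibrated lookup with interpolation between measured points. A minimal sketch, with entirely hypothetical calibration values (the real table depends on the scintillator response and camera):

```python
import numpy as np

# Hypothetical light-to-range calibration points (measured beforehand);
# the actual values depend on the scintillator and CCD camera response.
light_cal = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 1.00])   # integrated light
range_cal = np.array([40.0, 35.0, 28.0, 20.0, 12.0, 6.0])    # residual range, mm

def light_to_range(light):
    # Table lookup with linear interpolation; the table is indexed by the
    # monotonically increasing light axis
    return np.interp(light, light_cal, range_cal)

print(light_to_range(0.575))   # midway between the 0.45 and 0.70 entries
```

Each camera pixel's integrated light value is pushed through this lookup to produce the WEL data from which the pCT image is reconstructed.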

  12. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    NASA Astrophysics Data System (ADS)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and the increasing functionality and stability of ready-to-fly multi-copter systems have led to their proliferation among non-hobbyists. As such, they became a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs from excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft.
As such, the

  13. Toward Real-time quantum imaging with a single pixel camera

    SciTech Connect

    Lawrie, Benjamin J; Pooser, Raphael C

    2013-01-01

    We present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively transmit macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. In low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  14. Plasma image edge detection based on the visible camera in the EAST device.

    PubMed

    Shu, Shuangbao; Xu, Chongyang; Chen, Meiwen; Yang, Zhendong

    2016-01-01

    Control of plasma shape and position is essential to the success of a Tokamak discharge. A real-time image acquisition system was designed to obtain plasma radiation images during discharge processes in the Experimental Advanced Superconducting Tokamak (EAST) device. The hardware structure and software design of this visible camera system are introduced in detail. According to the general structure of EAST and the layout of the observation window, the spatial location of the discharging plasma in the image was measured. An improved Sobel edge detection algorithm using an iterative threshold was proposed to detect the plasma boundary. EAST discharge results show that the proposed method acquires plasma position and boundary with high accuracy, which is of great significance for better plasma control.
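The Sobel-plus-iterative-threshold combination described above can be sketched as follows. The threshold iteration here is the standard isodata-style scheme (mean of the two class means); the authors' improved variant may differ, and the disc "plasma" image is synthetic:

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude via 3x3 Sobel kernels (edge-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, float)
    gy = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def iterative_threshold(g, tol=0.5):
    # Isodata-style iteration: next threshold = mean of the two class means
    t = g.mean()
    while True:
        t_new = 0.5 * (g[g < t].mean() + g[g >= t].mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Synthetic "plasma": bright disc on a dark background
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2) * 200.0

g = sobel_mag(img)
edges = g >= iterative_threshold(g)
print(edges.sum())   # edge pixels form a ring around the disc boundary
```

On real discharge frames the same pipeline runs per frame, and the detected boundary is then mapped to machine coordinates via the observation-window geometry.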

  15. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross

    Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity are darkened signify an absence of fluorescence signal, indicating a breakdown in tissue structure brought on by precancerous or cancerous conditions. With these data the patient can seek further testing and diagnosis as needed. Proliferation of this device will give communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  16. Imaging performance of the mini-mosaic camera at the WIYN telescope

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Armandroff, Taft; Sawyer, David G.; Corson, Charles

    2000-08-01

    The goal of many recent engineering improvements at the 3.5-m WIYN telescope has been to improve imaging performance to exploit the good intrinsic seeing at Kitt Peak. This direction complements the efforts of high-order adaptive optics by maximizing the usable field. The new 'mini-mosaic' camera, a mosaic of two 4K by 2K SITe CCDs, is in the final stages of commissioning. With its 0.14 arcsec per pixel scale at the Nasmyth f/6.3 focus, it is capable of adequately sampling the best delivered images from the telescope, while maintaining a relatively large field of view. We present some early performance results from this new instrument, and demonstrate the excellent image quality over the entire 9.6-arcminute field.

  17. Synchroscan streak camera imaging at a 15-MeV photoinjector with emittance exchange

    NASA Astrophysics Data System (ADS)

    Lumpkin, A. H.; Ruan, J.; Thurman-Keup, R.

    2012-09-01

    At the Fermilab A0 photoinjector facility, bunch-length measurements of the laser micropulse and the e-beam micropulse have been done in the past with a fast single-sweep module of the Hamamatsu C5680 streak camera with an intrinsic shot-to-shot trigger jitter of 10-20 ps. We have upgraded the camera system with the synchroscan module tuned to 81.25 MHz to provide synchronous summing capability with less than 1.5 ps FWHM trigger jitter and a phase-locked delay box to provide phase stability of ˜1 ps over tens of minutes. These steps allowed us to measure both the UV laser pulse train at 263 nm and the e-beam via optical transition radiation (OTR). Due to the low electron beam energies and OTR signals, we typically summed over 50 micropulses with 0.25-1 nC per micropulse. The phase-locked delay box allowed us to assess chromatic temporal effects and instigated another upgrade to an all-mirror input optics barrel. In addition, we added a slow sweep horizontal deflection plug-in unit to provide dual-sweep capability for the streak camera. We report on a series of measurements made during the commissioning of these upgrades including bunch-length and phase effects using the emittance exchange beamline and simultaneous imaging of a UV drive laser component, OTR, and the 800 nm diagnostics laser.

  18. Static laser speckle contrast analysis for noninvasive burn diagnosis using a camera-phone imager.

    PubMed

    Ragol, Sigal; Remer, Itay; Shoham, Yaron; Hazan, Sivan; Willenz, Udi; Sinelnikov, Igor; Dronov, Vladimir; Rosenberg, Lior; Bilenca, Alberto

    2015-08-01

    Laser speckle contrast analysis (LASCA) is an established optical technique for accurate widefield visualization of relative blood perfusion when no or minimal scattering from static tissue elements is present, as demonstrated, for example, in LASCA imaging of the exposed cortex. However, when LASCA is applied to diagnosis of burn wounds, light is backscattered from both moving blood and static burn scatterers, and thus the spatial speckle contrast includes both perfusion and nonperfusion components and cannot be straightforwardly associated with blood flow. We extract from speckle contrast images of burn wounds the nonperfusion (static) component and discover that it conveys useful information on the ratio of static-to-dynamic scattering composition of the wound, enabling identification of burns of different depth in a porcine model in vivo within the first 48 h postburn. Our findings suggest that relative changes in the static-to-dynamic scattering composition of burns can dominate relative changes in blood flow for burns of different severity. Unlike conventional LASCA systems that employ scientific or industrial-grade cameras, our LASCA system is realized here using a camera phone, showing the potential to enable LASCA-based burn diagnosis with a simple imager.
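The spatial speckle contrast underlying LASCA is simply K = σ/⟨I⟩ computed over a small sliding window. A minimal sketch on synthetic data (the window size, image size, and the DC-offset stand-in for blurred speckle are assumptions for illustration):

```python
import numpy as np

def speckle_contrast(img, w=7):
    """Spatial speckle contrast K = sigma/mean over a sliding w x w window."""
    img = img.astype(float)
    H, W = img.shape
    out = np.zeros((H - w + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = img[i:i + w, j:j + w]
            out[i, j] = win.std() / win.mean()
    return out

# Fully developed static speckle (exponential intensity statistics) has K near 1;
# blurring by flow during the exposure drives K toward 0. Adding a DC offset is
# a crude stand-in for that washed-out, low-contrast case.
rng = np.random.default_rng(2)
static = rng.exponential(1.0, (64, 64))
flowing = static + 10.0
K_static = speckle_contrast(static).mean()
K_flow = speckle_contrast(flowing).mean()
print(K_static, K_flow)
```

In burn imaging the measured K mixes both effects, which is why the authors separate out the static (nonperfusion) component before interpreting it.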

  19. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle-avoidance stages are essential for CE-3's safe soft landing, so precise spacecraft trajectories for these stages are of great significance for verifying the orbital control strategy, optimizing orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last just 25 s, it is difficult to capture the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3 because of its technical advantages, such as independence from models of the lunar gravity field and spacecraft kinetics, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that CE-3's subtle movements during the hovering stage are revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was 1.4 m, while the vertical accuracy was 0.76 m. The results can be used for orbital control strategy analysis and other applications.

  20. The development of a subsea holographic camera for the imaging and analysis of marine organisms

    NASA Astrophysics Data System (ADS)

    Watson, John

    2004-06-01

    In this overview of the entire HoloMar system, we describe the design, development and field deployment of the fully-functioning, prototype, underwater holographic camera (HoloCam) followed by the dedicated replay facility (HoloScan) and the associated image processing and extraction of data from the holograms. The HoloCam consists of a laser and power supply, holographic recording optics and holographic plate holders, a water-tight housing and a support frame. It incorporates two basic holographic geometries, in-line and off-axis, such that a wide range of species, sizes and concentrations can be recorded. After the holograms have been recorded and processed, they are reconstructed in full three-dimensional detail in air in a dedicated replay facility. A computer-controlled microscope, using video cameras to record the image at a given depth, is used to digitize the scene. Specially developed software extracts a binarized image of an object in its true focal plane, which is then classified using a neural network. The HoloCam was deployed on separate cruises in a Scottish sea loch (Loch Etive) to a depth of 100 m, and over 300 holograms were recorded.

  1. Light sources and cameras for standard in vitro membrane potential and high-speed ion imaging.

    PubMed

    Davies, R; Graham, J; Canepari, M

    2013-07-01

    Membrane potential and fast ion imaging are now standard optical techniques routinely used to record dynamic physiological signals in several preparations in vitro. Although detailed resolution of optical signals can be improved by confocal or two-photon microscopy, high spatial and temporal resolution can be obtained using conventional microscopy and affordable light sources and cameras. Thus, standard wide-field imaging methods are still the most common in research laboratories and can often produce measurements with a signal-to-noise ratio that is superior to other optical approaches. This paper seeks to review the most important instrumentation used in these experiments, with particular reference to recent technological advances. We analyse in detail the optical constraints dictating the type of signals that are obtained with voltage and ion imaging and we discuss how to use this information to choose the optimal apparatus. Then, we discuss the available light sources with specific attention to light emitting diodes and solid state lasers. We then address the current state-of-the-art of available charge coupled device, electron multiplying charge coupled device and complementary metal oxide semiconductor cameras and we analyse the characteristics that need to be taken into account for the choice of optimal detector. Finally, we conclude by discussing prospective future developments that are likely to further improve the quality of the signals expanding the capability of the techniques and opening the gate to novel applications.

  2. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way of measuring depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter for a model that estimates camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information. PMID:22778608
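The calibrate-then-invert workflow can be sketched with an assumed power-law model, sigma = a * t_exp / d**b. This functional form, the constants, and the calibration distances are all hypothetical; the paper derives its own empirical expression relating sigma to exposure time, radiant intensity, and distance.

```python
import numpy as np

# Hypothetical model (assumed for illustration): ROI grey-level standard
# deviation falls off as a power of distance and scales with exposure time.
rng = np.random.default_rng(3)
a_true, b_true = 5.0e4, 2.0
d = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 4.0])      # calibration distances, m
t_exp = 1e-3
sigma = a_true * t_exp / d ** b_true * (1 + 0.01 * rng.standard_normal(d.size))

# Calibrate by linear least squares in log space:
#   log(sigma) = log(a * t_exp) - b * log(d)
A = np.column_stack([np.ones_like(d), -np.log(d)])
coef, *_ = np.linalg.lstsq(A, np.log(sigma), rcond=None)
log_at, b_hat = coef

def estimate_distance(sigma_meas):
    # Invert the calibrated model for distance
    return np.exp((log_at - np.log(sigma_meas)) / b_hat)

# Invert a new measurement taken at 2.2 m
sigma_new = a_true * t_exp / 2.2 ** b_true
print(float(estimate_distance(sigma_new)))
```

The fit-in-log-space step mirrors the paper's calibration over images taken under varied conditions; the inversion step mirrors its differential distance estimation.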

  3. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.

  4. Plume Imaging Using an IR Camera to Estimate Sulphur Dioxide Flux on Volcanoes of Northern Chile

    NASA Astrophysics Data System (ADS)

    Rosas Sotomayor, F.; Amigo, A.

    2014-12-01

    Remote sensing is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when the access to the vent is difficult or during volcanic crisis. In recent years, a ground-based infrared camera (NicAir) has been developed by Nicarnica Aviation, which quantifies SO2 and ash on volcanic plumes, based on the infrared radiance at specific wavelengths through the application of filters. NicAir cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. This contribution focuses on series of measurements done in December 2013 in volcanoes of northern Chile, in particular Láscar, Irruputuncu and Ollagüe, which are characterized by persistent quiescent degassing. During fieldwork, plumes from all three volcanoes showed regular behavior and the atmospheric conditions were very favorable (cloud-free and dry air). Four, two and one sets of measurements, up to 100 images each, were taken for Láscar, Irruputuncu and Ollagüe volcano, respectively. Matlab software was used for image visualizing and processing of the raw data. For instance, data visualization is performed through Matlab IPT functions imshow() and imcontrast(), and one algorithm was created for extracting necessary metadata. Image processing considers radiation at 8.6 and 10 μm wavelengths, due to differential SO2 and water vapor absorption. Calibration was performed in the laboratory through a detector correlation between digital numbers (raw data in image pixel values) and spectral radiance, and also in the field considering the camera self-emissions of infrared radiation. A gradient between the plume core and plume rim is expected, due to quick reaction of sulphur dioxide with water vapor, therefore a flux underestimate is also expected. Results will be compared with other SO2 remote sensing instruments such as DOAS and UV-camera. The implementation of this novel technique in Chilean volcanoes will be a major advance in our
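The two-band retrieval at 8.6 and 10 μm rests on differential absorption: SO2 absorbs strongly near 8.6 μm while both bands see the water-vapor continuum, so the band difference of apparent absorbances isolates the SO2 signal. A toy sketch (the radiance numbers are invented, and the real processing calibrates digital numbers to spectral radiance first):

```python
import numpy as np

# Toy two-band retrieval (assumed simplification): apparent absorbance of the
# plume against the background sky in each band, then the band difference.
L_bg = {"8.6um": 7.20, "10um": 8.10}    # background radiance (arbitrary units)
L_pl = {"8.6um": 6.10, "10um": 7.85}    # on-plume radiance

AA = {band: -np.log(L_pl[band] / L_bg[band]) for band in L_bg}
dAA = AA["8.6um"] - AA["10um"]          # differential apparent absorbance
print(dAA)
```

A laboratory calibration coefficient would then convert dAA per pixel to an SO2 column density, and the flux would follow from the column map and the plume transport speed.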

  5. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of such an instrument can yield cloud occurrence and, for a paired system, cloud base heights. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon Powershot A2300) enclosed in weatherproof containers - was developed at Manila Observatory for the purpose of determining cloud base heights in the Manila Observatory area. One of the cameras is placed on the rooftop of Manila Observatory and the other on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to simultaneously gather pictures every 5 min. These cameras have operated continuously since the end of May 2014, although data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of a merit function that determines the area of overlap of the two pictures. When two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions where the pictures have the best overlap. In this study, pictures of overcast sky proved difficult to process for cloud base height and were excluded. The figure below shows the initial results of the hourly average of cloud base heights from data collected from November 2013 to July 2014. Measured cloud base heights ranged from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection at these times. The increase of convection in the atmosphere can, however, be deduced from the higher cloud base heights in the afternoon. The decrease of cloud base heights after 15:00 follows the trend of decreasing solar
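    The overlap search and height retrieval described above can be sketched as follows. The merit function follows Kassianov et al. (2005) in spirit only, and the angular pixel scale is an assumed value, not a property of the actual cameras.

```python
import numpy as np

RAD_PER_PIXEL = 0.003   # assumed angular scale of the sky camera (rad/pixel)
BASELINE_M = 489.0      # separation between the two cameras (from the text)

def best_column_shift(img_a, img_b, max_shift=80):
    """Slide one image over the other along the column direction and return
    the shift with the smallest mean-squared mismatch in the overlap region
    (a simplified merit function in the spirit of Kassianov et al., 2005)."""
    merits = []
    for s in range(1, max_shift):
        a = img_a[:, s:]
        b = img_b[:, :-s]
        merits.append(np.mean((a - b) ** 2))
    return 1 + int(np.argmin(merits))

def cloud_base_height(disparity_px):
    """Near-zenith parallax: height ~ baseline / (disparity * pixel scale)."""
    return BASELINE_M / (disparity_px * RAD_PER_PIXEL)
```

With this assumed pixel scale, a 163-pixel disparity corresponds to a cloud base near 1 km, inside the 250 m to 1.5 km range reported above.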

  6. High-quality virus images obtained by transmission electron microscopy and charge coupled device digital camera technology.

    PubMed

    Tiekotter, Kenneth L; Ackermann, Hans-W

    2009-07-01

    The introduction of digital cameras has led to the publication of numerous virus electron micrographs of low magnification, poor contrast, and low resolution. Described herein is the methodology for obtaining highly contrasted virus images in the magnification range of approximately 250,000-300,000x. Based on recent advances in charge-coupled device (CCD) digital camera technology, optimal imaging parameters are described for CCD cameras in side- and bottom-mounted positions of electron microscopes, along with recommendations for higher accelerating voltages, larger objective apertures, and a small spot size. The authors are concerned with the principles of image formation and modulation, advocate a better use of imaging software to improve image quality, and recommend either pre- or post-acquisition adjustment to distribute the pixel intensities of compressed histograms over the entire range of tonal values.
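    The recommended post-acquisition adjustment, distributing a compressed histogram over the full tonal range, amounts to a percentile-based contrast stretch. A minimal sketch (the percentile cut-offs are assumptions, not the authors' values):

```python
import numpy as np

def stretch_histogram(img, low_pct=1.0, high_pct=99.0):
    """Remap the compressed range of pixel intensities onto the full
    8-bit tonal range, clipping the extreme tails at the given percentiles."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```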

  7. A novel three-dimensional imaging method by means of coded cameras array recording and computational reconstruction

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2007-04-01

    In this paper, we propose a novel three-dimensional imaging method in which the object is captured by a coded cameras array (CCA) and computationally reconstructed as a series of longitudinal layered surface images of the object. The distribution of cameras in the array, called the code pattern, is crucial to the fidelity of the reconstructed images when correlation decoding is used. We use the DIRECT global optimization algorithm to design code patterns that possess the proper imaging property. We have conducted preliminary experiments to verify and test the performance of the proposed method with a simple discontinuous object and a small-scale CCA of nine cameras. After procedures such as capturing, photograph integration, computational reconstruction and filtering, we obtain reconstructed longitudinal layered surface images of the object with a high signal-to-noise ratio. The experimental results show that the proposed method is feasible. It is a promising method for use in fields such as remote sensing, machine vision, etc.

  8. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous-wave source through an optical homogenizer made up of one or more light pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
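    The correction order described in the patent (geometric correction first, then flat-field division) can be sketched as follows, assuming both frames have already been distortion-corrected:

```python
import numpy as np

def flat_field_correct(signal, flat, eps=1e-6):
    """Divide a geometrically corrected signal frame by the normalized
    flat-field frame, removing pixel-to-pixel sensitivity variations."""
    flat_norm = flat / flat.mean()
    return signal / np.maximum(flat_norm, eps)
```

When the signal frame is itself a pure sensitivity pattern, the corrected output is uniform, which is the point of the division.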

  9. Image disparity in cross-spectral face recognition: mitigating camera and atmospheric effects

    NASA Astrophysics Data System (ADS)

    Cao, Zhicheng; Schmid, Natalia A.; Li, Xin

    2016-05-01

    Matching facial images acquired in different electromagnetic spectral bands remains a challenge. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images. When combined with cross-distance, this problem becomes even more challenging due to the deteriorated quality of the IR data. As an example, we consider a scenario where visible light images are acquired at a short standoff distance while the IR images are long-range data. To address the difference in image quality due to atmospheric and camera effects, the typical degrading factors observed in long-range data, we propose two approaches that bring the image quality of visible and IR face images into parity. The first approach applies Gaussian-based smoothing functions to the images acquired at a short distance (visible light images in the case we analyze). The second approach applies denoising and enhancement to the low-quality IR face images. A quality measure tool called the Adaptive Sharpness Measure, an improvement on the well-known Tenengrad method, is utilized as guidance for the quality parity process. For the recognition algorithm, a composite operator combining Gabor filters, Local Binary Patterns (LBP), generalized LBP and the Weber Local Descriptor (WLD) is used. The composite operator encodes both the magnitude and phase responses of the Gabor filters. The combination of LBP and WLD utilizes both the orientation and intensity information of edges. Different IR bands, short-wave infrared (SWIR) and near-infrared (NIR), and different long standoff distances are considered. The experimental results show that in all cases the proposed technique of image quality parity (both approaches) benefits the final recognition performance.
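    The Tenengrad measure that the paper's Adaptive Sharpness Measure improves on can be sketched as the mean squared Sobel gradient magnitude. This is the classic baseline, not the paper's adapted variant:

```python
import numpy as np

def tenengrad_sharpness(img):
    """Classic Tenengrad focus measure: mean squared magnitude of the
    3x3 Sobel gradient, computed here with shifted slices (no deps)."""
    img = img.astype(float)
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    return float(np.mean(gx ** 2 + gy ** 2))
```

A sharply textured image scores far higher than a smooth gradient, which is what makes the measure usable as guidance for smoothing one image toward the quality of another.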

  10. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the Devops Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates the accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff by way of weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. Having scientists prototype their work has proven to be very efficient

  11. Image Analysis of OSIRIS-REx Touch-And-Go Camera System (TAGCAMS) Thermal Vacuum Test Images

    NASA Astrophysics Data System (ADS)

    Everett Gordon, Kenneth; Bos, Brent J.

    2017-01-01

    The objective of NASA’s OSIRIS-REx Asteroid Sample Return Mission, which launched in September 2016, is to travel to the near-Earth asteroid 101955 Bennu, survey and map the asteroid, and return a scientifically interesting sample to Earth in 2023. As a part of its suite of integrated sensors, the OSIRIS-REx spacecraft includes a Touch-And-Go Camera System (TAGCAMS). The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, acquisition of the asteroid sample, and confirmation of the asteroid sample stowage in the spacecraft’s Sample Return Capsule (SRC). After first being calibrated at Malin Space Science Systems (MSSS) at the instrument level, the TAGCAMS were transferred to Lockheed Martin (LM), where they were put through a progressive series of spacecraft-level environmental tests. These tests culminated in a several-week-long, spacecraft-level thermal vacuum (TVAC) test during which hundreds of images were recorded. To analyze the images, custom code was developed in MATLAB R2016a. For analyses of the TAGCAMS dark images, the code tracked the dark current level of each image as a function of the camera-head temperature. The results confirm that the detector dark current noise has not increased and follows trends similar to the results measured at the instrument level by MSSS. This indicates that the electrical performance of the camera system is stable, even after integration with the spacecraft, and will provide imagery with the required signal-to-noise ratio during spaceflight operations. During the TVAC testing, the TAGCAMS were positioned to view optical dot targets suspended in the chamber. Results for the TAGCAMS light images, using a centroid analysis on the positions of the optical target holes, indicate that the boresight pointing of the two navigation cameras depends on spacecraft temperature, but will not change by more than ten pixels (approximately 2
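    The centroid analysis used on the optical dot targets is, at its core, an intensity-weighted center of mass. A minimal sketch (the actual TAGCAMS analysis likely adds background subtraction and subpixel refinement):

```python
import numpy as np

def intensity_centroid(img):
    """Center-of-mass centroid of a bright target in a dark frame,
    the kind of measurement used to track boresight shift with temperature."""
    img = img.astype(float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (float((ys * img).sum() / total), float((xs * img).sum() / total))
```

Comparing centroids of the same dot target across TVAC temperature plateaus gives the boresight drift in pixels directly.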

  12. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  13. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera

    PubMed Central

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in cases of a low temperature difference between humans and the background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined by fuzzy systems, using information derived from the background image and the difference values between the background and input images. With our method, human areas can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564
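    The paper's fuzzy system adapts the binarization thresholds from image statistics; the sketch below substitutes a much simpler mean-plus-k-sigma threshold on the difference image, with k an assumed sensitivity parameter, purely to illustrate the pipeline:

```python
import numpy as np

def detect_human_mask(frame, background, k=3.0):
    """Simplified stand-in for the paper's fuzzy thresholding: binarize
    the |frame - background| difference image with a threshold adapted
    to the difference statistics (k is an assumed sensitivity parameter)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    threshold = diff.mean() + k * diff.std()
    return diff > threshold
```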

  14. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  15. Position-sensitive detection of ultracold neutrons with an imaging camera and its implications to spectroscopy

    NASA Astrophysics Data System (ADS)

    Wei, Wanchun; Broussard, L. J.; Hoffbauer, M. A.; Makela, M.; Morris, C. L.; Tang, Z.; Adamek, E. R.; Callahan, N. B.; Clayton, S. M.; Cude-Woods, C.; Currie, S.; Dees, E. B.; Ding, X.; Geltenbort, P.; Hickerson, K. P.; Holley, A. T.; Ito, T. M.; Leung, K. K.; Liu, C.-Y.; Morley, D. J.; Ortiz, Jose D.; Pattie, R. W.; Ramsey, J. C.; Saunders, A.; Seestrom, S. J.; Sharapov, E. I.; Sjue, S. K.; Wexler, J.; Womack, T. L.; Young, A. R.; Zeck, B. A.; Wang, Zhehui

    2016-09-01

    Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m0 g δx. Here, the symbols δE, δx, m0 and g are the energy resolution, the spatial resolution, the neutron rest mass and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. This method allows different types of UCN spectroscopy and other applications.
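    The quoted relation δE = m0 g δx can be checked numerically; with the neutron rest mass, 15 μm of spatial resolution indeed corresponds to an energy resolution below 2 peV:

```python
# Energy resolution implied by the spatial resolution via
# delta_E = m0 * g * delta_x (CODATA values for the neutron).
M0 = 1.674927e-27      # neutron rest mass, kg
G = 9.80665            # standard gravity, m/s^2
EV = 1.602177e-19      # joules per electron-volt

def ucn_energy_resolution_peV(delta_x_m):
    """Return the UCN energy resolution in pico-electron-volts."""
    return M0 * G * delta_x_m / EV / 1e-12
```

Evaluating at 15 μm gives roughly 1.5 peV, consistent with the "below 2 pico-electron-volts" stated in the abstract.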

  16. Real time plume and laser spot recognition in IR camera images

    SciTech Connect

    Moore, K.R.; Caffrey, M.P.; Nemzek, R.J.; Salazar, A.A.; Jeffs, J.; Andes, D.K.; Witham, J.C.

    1997-08-01

    It is desirable to automatically guide the laser spot onto the effluent plume for maximum IR DIAL system sensitivity. This requires the use of a 2D focal plane array. The authors have demonstrated that a wavelength-filtered IR camera is capable of 2D imaging of both the plume and the laser spot. In order to identify the centers of the plume and the laser spot, it is first necessary to segment these features from the background. They report a demonstration of real time plume segmentation based on velocity estimation. They also present results of laser spot segmentation using simple thresholding. Finally, they describe current research on both advanced segmentation and recognition algorithms and on reconfigurable real time image processing hardware based on field programmable gate array technology.
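    The laser spot segmentation by simple thresholding can be sketched as follows; the relative threshold of half the peak value is an assumption for illustration, not the demonstrated system's setting:

```python
import numpy as np

def segment_laser_spot(frame, rel_threshold=0.5):
    """Segment the laser spot by simple thresholding relative to the
    frame maximum, and return the mask plus the spot's centroid."""
    mask = frame > rel_threshold * frame.max()
    ys, xs = np.nonzero(mask)
    return mask, (float(ys.mean()), float(xs.mean()))
```

The centroid returned here is the quantity the DIAL pointing loop would need in order to steer the laser spot onto the plume center.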

  17. Charon's Color: A View from the New Horizons Ralph/Multispectral Visible Imaging Camera

    NASA Astrophysics Data System (ADS)

    Olkin, C.; Howett, C.; Grundy, W. M.; Parker, A. H.; Ennico Smith, K.; Stern, S. A.; Binzel, R. P.; Cook, J. C.; Cruikshank, D. P.; Dalle Ore, C.; Earle, A. M.; Jennings, D. E.; Linscott, I.; Lunsford, A.; Parker, J. W.; Protopapa, S.; Reuter, D.; Singer, K. N.; Spencer, J. R.; Tsang, C.; Verbiscer, A.; Weaver, H. A., Jr.; Young, L. A.

    2015-12-01

    The Multispectral Visible Imaging Camera (MVIC; Reuter et al., 2008) is part of Ralph, an instrument on NASA's New Horizons spacecraft. MVIC is the color 'eyes' of New Horizons, observing objects in five bands from blue to infrared wavelengths. MVIC's images of Charon show it to be an intriguing place, a far cry from the grey, heavily cratered world once postulated. Rather, Charon is observed to have large surface areas free of craters, and a northern polar region that is much redder than its surroundings. This talk will describe these initial results in more detail, along with Charon's global geological color variations, to put these results into their wider context. Finally, possible surface coloration mechanisms due to global processes and/or seasonal cycles will be discussed.

  18. Satellite Detection in Advanced Camera for Surveys/Wide Field Channel Images

    NASA Astrophysics Data System (ADS)

    Borncamp, D.; Lim, Pey Lian

    2016-01-01

    This document explains the process by which satellite trails can be found within individual chips of an Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) image. Since satellites are transient and sporadic events, the Hubble Frontier Fields (HFF) dataset, which is manually checked for satellite trails, has been used as a truth set to verify that the method in this document does a complete job without a high false positive rate. This document also details the process of producing a mask that updates the data quality information to inform users where a trail traverses the image and to properly account for the affected pixels. Along with this document, the Python source code used to detect and mask satellite trails will be released to users as a stand-alone product within the STSDAS acstools package.

  19. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.

  20. High-resolution Ceres High Altitude Mapping Orbit atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2016-09-01

    The Dawn spacecraft Framing Camera (FC) acquired over 2400 clear-filter images of Ceres with a resolution of about 140 m/pixel during the six cycles of the High Altitude Mapping Orbit (HAMO) phase between August 18 and October 21, 2015. We ortho-rectified the images from the first cycle and produced a global, high-resolution, controlled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 15 tiles mapped at a scale of 1:750,000. The nomenclature used in this atlas was proposed by the Dawn team and approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page.

  1. Position-sensitive detection of ultracold neutrons with an imaging camera and its implications to spectroscopy

    SciTech Connect

    Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; Makela, Mark F.; Morris, Christopher L.; Tang, Zhaowen; Adamek, Evan Robert; Callahan, Nathen Brannan; Clayton, Steven M.; Cude-Woods, Chris B.; Currie, Scott Allister; Dees, E. B.; Ding, Xinjian; Geltenbort, Peter W.; Hickerson, Kevin Peter; Holley, Adam Tarte; Ito, Takeyasu M.; Leung, Kent Kwan Ho; Liu, Chen -Yu; Morley, Deborah Jean; Ortiz, Jose D.; Pattie, Jr., Robert Wayne; Ramsey, John Clinton; Saunders, Alexander; Seestrom, Susan Joyce; Sharapov, E. I.; Sjue, Sky K.; Wexler, Jonathan William; Womack, Todd Lane; Young, Albert Raymond; Zeck, Bryan Alexander; Wang, Zhehui

    2016-05-16

    Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m0 g δx. Here, the symbols δE, δx, m0 and g are the energy resolution, the spatial resolution, the neutron rest mass and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.

  2. Position-sensitive detection of ultracold neutrons with an imaging camera and its implications to spectroscopy

    DOE PAGES

    Wei, Wanchun; Broussard, Leah J.; Hoffbauer, Mark Arles; ...

    2016-05-16

    Position-sensitive detection of ultracold neutrons (UCNs) is demonstrated using an imaging charge-coupled device (CCD) camera. A spatial resolution of less than 15 μm has been achieved, which is equivalent to a UCN energy resolution below 2 pico-electron-volts through the relation δE = m0 g δx. Here, the symbols δE, δx, m0 and g are the energy resolution, the spatial resolution, the neutron rest mass and the gravitational acceleration, respectively. A multilayer surface convertor described previously is used to capture UCNs and then emits visible light for CCD imaging. Particle identification and noise rejection are discussed through the use of light intensity profile analysis. As a result, this method allows different types of UCN spectroscopy and other applications.

  3. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80 K to above 160 K creates new applications for high-performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in the handheld, weapon-mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton, UK. Firefly represents a fundamental redesign of the infrared signal chain, reducing power consumption and providing compatibility with low-cost, low-power Commercial Off-The-Shelf (COTS) computing technology. This paper describes the key innovations in this signal chain: a ROIC purpose-built to minimize power consumption in the proximity electronics, GPU-based image processing of infrared video, and a software-customisable infrared core which can communicate wirelessly with other Battlespace systems.

  4. Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images

    PubMed Central

    Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.

    2010-01-01

    Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of human and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent a larger source of enterococci than humans and birds. PMID:20381094

  5. A JPEG-like algorithm for compression of single-sensor camera image

    NASA Astrophysics Data System (ADS)

    Benahmed Daho, Omar; Larabi, Mohamed-Chaker; Mukhopadhyay, Jayanta

    2011-01-01

    This paper presents a JPEG-like coder for the compression of single-sensor camera images using a Bayer Color Filter Array (CFA). The originality of the method is a joint compression-demosaicking scheme in the DCT domain. In this method, the captured CFA raw data is first separated into four distinct components and then converted to YCbCr. A JPEG compression scheme is then applied. At the decoding level, the bitstream is decompressed until the DCT coefficients are reached; these coefficients are used for the interpolation stage. The results obtained are better than those of conventional JPEG in terms of CPSNR, ΔE2000 and SSIM. The JPEG-like scheme is also less complex.
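    The coder's first stage, separating the Bayer raw data into four components and converting to YCbCr, can be sketched as below. The RGGB layout, the averaging of the two green planes, and the BT.601 full-range matrix are all assumptions for illustration; the paper does not fix them here.

```python
import numpy as np

def cfa_to_ycbcr(raw):
    """Split an (assumed RGGB) Bayer mosaic into four quarter-resolution
    planes, then convert to YCbCr with the JPEG (BT.601 full-range) matrix."""
    r  = raw[0::2, 0::2].astype(float)
    g1 = raw[0::2, 1::2].astype(float)
    g2 = raw[1::2, 0::2].astype(float)
    b  = raw[1::2, 1::2].astype(float)
    g = 0.5 * (g1 + g2)  # assumption: merge the two green planes
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

Each returned plane would then go through the standard JPEG chain (8x8 DCT, quantization, entropy coding), with demosaicking deferred to the DCT domain at the decoder as the abstract describes.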

  6. Automatic camera-based identification and 3-D reconstruction of electrode positions in electrocardiographic imaging.

    PubMed

    Schulze, Walther H W; Mackens, Patrick; Potyagaylo, Danila; Rhode, Kawal; Tülümen, Erol; Schimpf, Rainer; Papavassiliu, Theano; Borggrefe, Martin; Dössel, Olaf

    2014-12-01

    Electrocardiographic imaging (ECG imaging) is a method to depict electrophysiological processes in the heart. It is an emerging technology with the potential of making the therapy of cardiac arrhythmia less invasive, less expensive, and more precise. A major challenge for integrating the method into the clinical workflow is the seamless and correct identification and localization of electrodes on the thorax and their assignment to recorded channels. This work proposes a camera-based system which can localize all electrode positions at once and to an accuracy of approximately 1 ± 1 mm. A system for the automatic identification of individual electrodes is implemented that overcomes the need for manual annotation. For this purpose, a system of markers is suggested which facilitates precise localization to subpixel accuracy and robust identification using an error-correcting code. The accuracy of the presented system in identifying and localizing electrodes is validated in a phantom study. Its overall capability is demonstrated in a clinical scenario.

  7. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  8. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw; Umeno, Marc M.

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees and operating in a range of 140-159 keV and at a rate of up to 500kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is implemented pixel-by-corresponding pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first pass studies, bloodpool studies including planar, gated and non-gated EKG studies, planar EKG perfusion studies, and planar hot spot imaging.
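The pixel-by-corresponding-pixel fusion described above can be sketched as follows. This is an illustrative toy, not the patented method: it simply sums corresponding pixels of two co-registered, equally sized views for the same time bin, which increases signal for any cardiac region seen from both opposed heads. The function name and summation rule are assumptions for illustration.

```python
# Illustrative sketch: fuse two co-registered detector-head images
# pixel by corresponding pixel for the same time bin. The plain sum
# used here is a hypothetical stand-in for the normalization/fusion
# technique described in the patent abstract.
def fuse_views(head_a, head_b):
    """Fuse two equally sized images (lists of rows) pixel by pixel."""
    fused = []
    for row_a, row_b in zip(head_a, head_b):
        # Summing corresponding pixels increases counts for regions
        # visible from both opposed heads.
        fused.append([a + b for a, b in zip(row_a, row_b)])
    return fused

view1 = [[1, 2], [3, 4]]
view2 = [[4, 3], [2, 1]]
print(fuse_views(view1, view2))  # [[5, 5], [5, 5]]
```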

  9. A Compton scatter camera for spectral imaging of 0.5 to 3.0 MeV gamma rays

    SciTech Connect

    Martin, Jeffrey Basil

    1994-01-01

    A prototype Compton scatter camera for imaging gamma rays has been built and tested. This camera addresses unique aspects of gamma-ray imaging at nuclear industrial sites, including gamma-ray energies in the 0.5 to 3.0 MeV range and polychromatic fields. Analytic models of camera efficiency, resolution, and contaminating events are developed. The response of the camera bears strong similarity to emission computed tomography devices used in nuclear medicine. A direct Fourier-based algorithm is developed to reconstruct two-dimensional images of measured gamma-ray fields. Iterative ART and MLE algorithms are also investigated. The point response of the camera to gamma rays of energies from 0.5 to 2.8 MeV is measured and compared to the analytic models. The direct reconstruction algorithm is at least ten times more efficient than the iterative algorithms and produces images that are, in general, of the same quality. Measured images of several phantoms are shown. Important results include angular resolutions as low as 4.4°, reproduction of phantom size and position within 7%, and contrast recovery of 84% or better. Spectral imaging is demonstrated with independent images from a multi-energy phantom consisting of two sources imaged simultaneously.
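Compton cameras of this kind constrain each event to a cone whose opening angle follows from the Compton scattering kinematics: cos θ = 1 − mₑc²(1/E₂ − 1/(E₁+E₂)), where E₁ is the energy deposited in the scatter detector and E₂ the energy absorbed in the second detector (full absorption assumed). A minimal sketch of that standard relation:

```python
import math

M_E_C2 = 0.511  # electron rest energy, MeV

def compton_cone_angle(e_scatter, e_absorb):
    """Opening angle (radians) of the Compton cone for a photon that
    deposits e_scatter MeV in the first detector and is fully absorbed
    with e_absorb MeV in the second detector."""
    e_total = e_scatter + e_absorb
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_theta)
```

For example, a 1.0 MeV photon splitting its energy equally between the two detectors gives cos θ = 1 − 0.511·(2 − 1) = 0.489.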

  10. CMOS detector arrays in a virtual 10-kilopixel camera for coherent terahertz real-time imaging.

    PubMed

    Boppel, Sebastian; Lisauskas, Alvydas; Max, Alexander; Krozer, Viktor; Roskos, Hartmut G

    2012-02-15

    We demonstrate the principle applicability of antenna-coupled complementary metal oxide semiconductor (CMOS) field-effect transistor arrays as cameras for real-time coherent imaging at 591.4 GHz. By scanning a few detectors across the image plane, we synthesize a focal-plane array of 100×100 pixels with an active area of 20×20 mm2, which is applied to imaging in transmission and reflection geometries. Individual detector pixels exhibit a voltage conversion loss of 24 dB and a noise figure of 41 dB for 16 μW of the local oscillator (LO) drive. For object illumination, we use a radio-frequency (RF) source with 432 μW at 590 GHz. Coherent detection is realized by quasioptical superposition of the image and the LO beam with 247 μW. At an effective frame rate of 17 Hz, we achieve a maximum dynamic range of 30 dB in the center of the image and more than 20 dB within a disk of 18 mm diameter. The system has been used for surface reconstruction resolving a height difference in the μm range.

  11. Adaptive Nonlocal Sparse Representation for Dual-Camera Compressive Hyperspectral Imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Shi, Guangming; Wu, Feng; Zeng, Wenjun

    2016-10-25

    Leveraging the compressive sensing (CS) theory, coded aperture snapshot spectral imaging (CASSI) provides an efficient solution to recover 3D hyperspectral data from a 2D measurement. The dual-camera design of CASSI, by adding an uncoded panchromatic measurement, enhances the reconstruction fidelity while maintaining the snapshot advantage. In this paper, we propose an adaptive nonlocal sparse representation (ANSR) model to boost the performance of dual-camera compressive hyperspectral imaging (DCCHI). Specifically, the CS reconstruction problem is formulated as a 3D cube based sparse representation to make full use of the nonlocal similarity in both the spatial and spectral domains. Our key observation is that the panchromatic image, besides playing the role of direct measurement, can be further exploited to help the nonlocal similarity estimation. Therefore, we design a joint similarity metric by adaptively combining the internal similarity within the reconstructed hyperspectral image and the external similarity within the panchromatic image. In this way, the fidelity of CS reconstruction is greatly enhanced. Both simulation and hardware experimental results show significant improvement of the proposed method over the state-of-the-art.

  12. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.
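The "supersky" flat-field idea above can be sketched in a few lines: take the per-pixel median over many relatively object-free exposures (the median rejects faint sources that land on different pixels in different frames), then normalize to unit mean so science frames can be divided by it. This is a generic illustration of the technique, not the Medium-Deep Survey pipeline itself.

```python
# Illustrative supersky flat: per-pixel median over many object-free
# frames, normalized to unit mean. Frames are lists of rows of floats.
from statistics import median

def supersky_flat(frames):
    rows, cols = len(frames[0]), len(frames[0][0])
    # Median across frames rejects faint objects present in single frames.
    flat = [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]
    mean = sum(sum(row) for row in flat) / (rows * cols)
    # Normalize so dividing a science frame by the flat preserves flux.
    return [[v / mean for v in row] for row in flat]
```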

  13. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.

  14. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  16. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data-acquisition chips can be used to collect more detailed and comprehensive light field information. By recording light field data with multiple single-aperture, high-resolution sensors and processing the data in real time, we can obtain a wide field-of-view (FOV), high-resolution image. Wide-FOV, high-resolution imaging has promising applications in navigation, surveillance, and robotics. Quality-enhanced 3D rendering, very high resolution depth-map estimation, high dynamic range, and other applications can be obtained by post-processing these large light field data. FOV and resolution are conflicting requirements in a traditional single-aperture optical imaging system and cannot be reconciled well. We have designed a multi-camera light field data-acquisition system and optimized each sensor's spatial location and relations; it can be used for wide-FOV, high-resolution real-time imaging. The system uses 5-megapixel CMOS sensors and a field-programmable gate array (FPGA) to acquire light field data, process it in parallel, and transmit it to a PC. A common clock signal is distributed to all of the cameras, and synchronization among the cameras is achieved to a precision of 40 ns. An initial system built with 9 CMOS sensors produced a high-resolution image with a 360°×60° FOV. The system is intended to be flexible, modular, and scalable, with much visibility and control over the cameras. We used the dedicated high-speed camera interface Camera Link for system data transfer. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  17. Lunar Reconnaissance Orbiter Camera Narrow Angle Cameras: Laboratory and Initial Flight Calibration

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Denevi, B. W.; Lawrence, S.; Mahanti, P.; Tran, T. N.; Thomas, P. C.; Eliason, E.; Robinson, M. S.

    2009-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) has two identical Narrow Angle Cameras (NACs). Each NAC is a monochrome pushbroom scanner, providing images with a pixel scale of 50 cm from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of scientific and resource merit, trafficability, and hazards. The North and South poles will be mapped at 1-meter-scale poleward of 85.5 degrees latitude. Stereo coverage is achieved by pointing the NACs off-nadir, which requires planning in advance. Read noise is 91 and 93 e- and the full well capacity is 334,000 and 352,000 e- for NAC-L and NAC-R respectively. Signal-to-noise ranges from 42 for low-reflectance material with 70 degree illumination to 230 for high-reflectance material with 0 degree illumination. Longer exposure times and 2x binning are available to further increase signal-to-noise with loss of spatial resolution. Lossy data compression from 12 bits to 8 bits uses a companding table selected from a set optimized for different signal levels. A model of focal plane temperatures based on flight data is used to command dark levels for individual images, optimizing the performance of the companding tables and providing good matching of the NAC-L and NAC-R images even before calibration. The preliminary NAC calibration pipeline includes a correction for nonlinearity at low signal levels with an offset applied for DN>600 and a logistic function for DN<600. Flight images taken on the limb of the Moon provide a measure of stray light performance. Averages over many lines of images provide a measure of flat field performance in flight. These are comparable with laboratory data taken with a diffusely reflecting uniform panel.
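The piecewise nonlinearity correction mentioned above (an offset for DN>600, a logistic function for DN<600) might be sketched as below. The offset and logistic parameter values here are hypothetical placeholders for illustration only; the actual LROC calibration coefficients are not given in the abstract.

```python
import math

# Hedged sketch of a piecewise low-signal nonlinearity correction:
# constant offset above DN 600, logistic roll-off below. OFFSET, A, K,
# and DN0 are hypothetical placeholders, not LROC calibration values.
OFFSET = 1.0
A, K, DN0 = 1.0, 0.02, 300.0

def linearize(dn):
    if dn > 600:
        return dn + OFFSET
    # Logistic correction tapers smoothly toward zero signal.
    return dn + A / (1.0 + math.exp(-K * (dn - DN0)))
```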

  18. Lymphoscintigraphic imaging study for quantitative evaluation of a small field of view (SFOV) gamma camera

    NASA Astrophysics Data System (ADS)

    Alqahtani, M. S.; Lees, J. E.; Bugby, S. L.; Jambi, L. K.; Perkins, A. C.

    2015-07-01

    The Hybrid Compact Gamma Camera (HCGC) is a portable optical-gamma hybrid imager designed for intraoperative medical imaging, particularly for sentinel lymph node biopsy procedures. To investigate the capability of the HCGC in lymphatic system imaging, two lymphoscintigraphic phantoms have been designed and constructed. These phantoms allowed quantitative assessment and evaluation of the HCGC for lymphatic vessel (LV) and sentinel lymph node (SLN) detection. Fused optical and gamma images showed good alignment of the two modalities allowing localisation of activity within the LV and the SLN. At an imaging distance of 10 cm, the spatial resolution of the HCGC during the detection process of the simulated LV was not degraded at a separation of more than 1.5 cm (variation <5%) from the injection site (IS). Even in the presence of the IS the targeted LV was detectable with an acquisition time of less than 2 minutes. The HCGC could detect SLNs containing different radioactivity concentrations (ranging between 1:20 to 1:100 SLN to IS activity ratios) and under various scattering thicknesses (ranging between 5 mm to 30 mm) with high contrast-to-noise ratio (CNR) values (ranging between 11.6 and 110.8). The HCGC can detect the simulated SLNs at various IS to SLN distances, different IS to SLN activity ratios and through varied scattering medium thicknesses. The HCGC provided an accurate physical localisation of radiopharmaceutical uptake in the simulated SLN. These characteristics of the HCGC reflect its suitability for utilisation in lymphatic vessel drainage imaging and SLN imaging in patients in different critical clinical situations such as interventional and surgical procedures.
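The contrast-to-noise ratio figures quoted above follow the usual definition for planar gamma images: the difference between the mean counts in the node region and the mean background counts, divided by the standard deviation of the background. A minimal sketch (the exact ROI conventions of the HCGC study are not specified in the abstract):

```python
from statistics import mean, pstdev

# Contrast-to-noise ratio as commonly defined for planar nuclear
# medicine images: (mean ROI counts - mean background counts) / sigma_bg.
def cnr(node_roi, background_roi):
    return (mean(node_roi) - mean(background_roi)) / pstdev(background_roi)
```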

  19. Imaging early demineralization on tooth occlusional surfaces with a high definition InGaAs camera

    NASA Astrophysics Data System (ADS)

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

    In vivo and in vitro studies have shown that high contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with those in the visible, the near-IR at 1300-nm, and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1- and 2-day exposure to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at three wavelengths, 1310, 1460, and 1600-nm, using a high definition InGaAs camera. Visible light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm, were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than it was for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions.
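Lesion contrast in reflectance studies of this kind is typically computed from mean ROI intensities as (I_lesion − I_sound)/I_lesion, since demineralized enamel scatters (and thus reflects) more strongly than sound enamel in the NIR. A minimal sketch of that conventional metric (the study's exact definition is not stated in the abstract):

```python
# Conventional reflectance lesion contrast: (I_lesion - I_sound) / I_lesion,
# where inputs are mean ROI intensities and the lesion is the brighter
# region. Values range from 0 (no contrast) toward 1.
def lesion_contrast(i_lesion, i_sound):
    return (i_lesion - i_sound) / i_lesion
```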

  20. Image mosaic based on the camera self-calibration of combining two vanishing points and pure rotational motion

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Wang, Junqiao; Liang, Erjun; Liu, Xiaomin; Zhao, Shujun

    2016-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as in other situations such as zooming. In this paper, two self-calibration methods, based respectively on two vanishing points and on homography, are studied, finally realizing an image mosaic based on self-calibration of a camera rotating purely around its optical center. The geometric characteristic of vanishing points formed by two groups of orthogonal parallel lines is applied to the self-calibration based on two vanishing points. Using the orthogonality of the vectors connecting the optical center and the vanishing points, constraint equations on the camera's intrinsic parameters are established. By this method, four internal parameters of the camera can be solved from only four images taken from different viewpoints in a scene. Compared with the self-calibration based on homography, the method based on two vanishing points has a more convenient calibration process and a simpler algorithm. To check the quality of the self-calibration, we create a spherical mosaic of the images that were used for the self-calibration based on homography. Compared with the experimental results of two methods based respectively on a calibration plate and on the self-calibration method in the machine vision software Halcon, the practicability and effectiveness of the self-calibration methods based on two vanishing points and on homography are verified.
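The orthogonality constraint used by vanishing-point self-calibration can be made concrete in the simplest case. Assuming square pixels, zero skew, and a known principal point c, the rays through two vanishing points v1, v2 of orthogonal direction sets satisfy (v1 − c)·(v2 − c) + f² = 0, which gives the focal length f directly. This one-constraint special case is illustrative; the paper solves four intrinsic parameters from four images.

```python
import math

# Sketch of the single-constraint special case of vanishing-point
# self-calibration: square pixels, zero skew, known principal point c.
# (v1 - c) . (v2 - c) + f^2 = 0  =>  f = sqrt(-(v1 - c) . (v2 - c))
def focal_from_vanishing_points(v1, v2, c):
    dot = (v1[0] - c[0]) * (v2[0] - c[0]) + (v1[1] - c[1]) * (v2[1] - c[1])
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)
```

For a camera with f = 800 px and principal point at the origin, the directions (1, 0, 1) and (−1, 0, 1) are orthogonal and project to vanishing points (800, 0) and (−800, 0), recovering f = 800.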

  1. Research on characteristic of radiometric imaging quality for space-borne camera with super-wide field of view

    NASA Astrophysics Data System (ADS)

    Yin, Huan; Xu, Qingshan; Wu, Xiaoqing; Zhu, Lin; Bai, Zhaoguang; Zhu, Jun; Lu, Chunling; Di, Guodong; Wu, Bin

    2016-10-01

    In order to ensure the radiometric imaging quality of a space-borne camera with a super-wide field of view, we put forward a new method with which the apparent spectral radiance for any field of view can be precisely calculated. First, we build the imaging model for the space-borne camera, including the parameters of the orbit and attitude of the satellite, the look direction for the given field of view, the characteristics of different ground objects, the state of the atmosphere, etc. Second, we calculate the geometrical observation parameters for the given look direction of the space-borne camera. Finally, we use a radiative transfer model to calculate the value of the apparent spectral radiance. We then use this method to calculate the apparent spectral radiance for a space-borne camera with a FOV of 75° based on a freeform mirror on a small remote sensing satellite. The result shows that the relative non-consistency of the apparent spectral radiance can reach 25.3% when the satellite images the North Atlantic area. The characteristics of the radiometric imaging quality of a space-borne camera with a super-wide field of view should therefore be carefully considered.

  2. Imaging in laser spectroscopy by a single-pixel camera based on speckle patterns

    NASA Astrophysics Data System (ADS)

    Žídek, K.; Václavík, J.

    2016-11-01

    Compressed sensing (CS) is a branch of computational optics able to reconstruct an image (or any other information) from a reduced number of measurements - thus significantly saving measurement time. It relies on encoding the detected information by a random pattern and subsequent mathematical reconstruction. CS can be the enabling step to carry out imaging in many time-consuming measurements. The critical step in CS experiments is the method used to invoke encoding by a random mask. Complex devices and relay optics are commonly used for the purpose. We present a new approach of creating the random mask by using laser speckles from coherent laser light passing through a diffuser. This concept is especially powerful in laser spectroscopy, where it does not require any complicated modification of the current techniques. The main advantage consists in the unmatched simplicity of the random pattern generation and the versatility of the pattern resolution. Unlike in the case of commonly used random masks, here the pattern fineness can be adjusted by changing the size of the laser spot being diffused. We demonstrate the pattern tuning together with the connected changes in the pattern statistics. In particular, the issue of pattern orthogonality, which is important for CS applications, is discussed. Finally, we demonstrate on a set of 200 acquired speckle patterns that the concept can be successfully employed for single-pixel camera imaging. We discuss requirements on detector noise for the image reconstruction.
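The single-pixel principle can be demonstrated with the simplest (non-CS) reconstruction rule: correlate the bucket-detector measurements with the random masks, ⟨(y − ⟨y⟩)·mask⟩ per pixel, as in ghost imaging. This toy uses uniform random masks rather than measured speckle patterns, and averaging rather than the compressed-sensing solver the paper relies on; names and sizes are illustrative.

```python
import random

# Toy single-pixel reconstruction: correlate bucket measurements with
# the random masks, <(y - <y>) * mask_i> per pixel. Illustrative only;
# a real CS experiment would use a sparse-recovery solver instead.
def single_pixel_reconstruct(masks, measurements):
    n = len(masks[0])
    y_mean = sum(measurements) / len(measurements)
    image = [0.0] * n
    for mask, y in zip(masks, measurements):
        for i in range(n):
            image[i] += (y - y_mean) * mask[i]
    return [v / len(masks) for v in image]

random.seed(0)
scene = [0, 1, 0, 1]  # flattened 2x2 "image", two bright pixels
masks = [[random.random() for _ in scene] for _ in range(5000)]
bucket = [sum(m * s for m, s in zip(mask, scene)) for mask in masks]
recon = single_pixel_reconstruct(masks, bucket)
# Bright pixels correlate positively with the bucket signal.
```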

  3. Geometrical Calibration of X-Ray Imaging With RGB Cameras for 3D Reconstruction.

    PubMed

    Albiol, Francisco; Corbi, Alberto; Albiol, Alberto

    2016-08-01

    We present a methodology to recover the geometrical calibration of conventional X-ray settings with the help of an ordinary video camera and visible fiducials that are present in the scene. After calibration, equivalent points of interest can be easily identified with the help of the epipolar geometry. The same procedure also allows the measurement of real anatomic lengths and angles and obtains accurate 3D locations from image points. Our approach completely eliminates the need for X-ray-opaque reference marks (and necessary supporting frames), which can sometimes be invasive for the patient, occlude the radiographic picture, and end up projected outside the imaging sensor area in oblique protocols. Two possible frameworks are envisioned: a spatially shifting X-ray anode around the patient/object and a moving patient that moves/rotates while the imaging system remains fixed. As a proof of concept, experiments with a device under test (DUT), an anthropomorphic phantom, and a real brachytherapy session have been carried out. The results show that it is possible to identify common points with a proper level of accuracy and retrieve three-dimensional locations, lengths, and shapes with a millimetric level of precision. The presented approach is simple and compatible with both current and legacy widespread diagnostic X-ray imaging deployments, and it can represent a good and inexpensive alternative to other radiological modalities like CT.

  4. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture includes the design of charge coupled device (CCD) drive circuits, image processing circuits, and power supply circuits. The CCD drive circuits transfer the TTL (Transistor-Transistor-Logic) level timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals to digital signals which are processed subsequently; the TTL timing, auto-exposure control, CFA demosaicing, and gamma correction are accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality. Power supply noise affects image quality directly, and we reduce it in hardware, which is very effective. In this system, the CCD is a KAI-0340, which can output 210 full-resolution frames per second, and our camera works outstandingly in this mode. Traditional auto-exposure control algorithms reach a proper exposure level so slowly that it is necessary to develop a fast auto-exposure control method; we present a new auto-exposure algorithm suited to high-speed cameras. Color demosaicing is critical for digital cameras, because it converts a Bayer sensor mosaic output to a full color image, which determines the output image quality of the camera. Complex algorithms can achieve high quality but cannot be implemented in hardware. A low-complexity demosaicing method is presented that can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of this paper.
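A fast auto-exposure update of the kind motivated above can be sketched as a one-step proportional rule: scale the exposure by the ratio of a target mean brightness to the current frame mean, clamped to the sensor's exposure limits. This is an illustrative rule with hypothetical limits, not the paper's exact algorithm.

```python
# Hedged sketch of a fast auto-exposure update: one multiplicative step
# toward a target mean brightness, clamped for stability. target_mean,
# min_exp, and max_exp are hypothetical placeholders.
def update_exposure(exposure, frame_mean, target_mean=128.0,
                    min_exp=1e-5, max_exp=0.05):
    ratio = target_mean / max(frame_mean, 1e-6)  # avoid divide-by-zero
    new_exposure = exposure * ratio
    return min(max(new_exposure, min_exp), max_exp)
```

A dark frame (mean 64 against a target of 128) doubles the exposure in a single step, which is what makes this style of control fast compared with fixed-increment search.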

  5. Improved Digitization of Lunar Mare Ridges with LROC Derived Products

    NASA Astrophysics Data System (ADS)

    Crowell, J. M.; Robinson, M. S.; Watters, T. R.; Bowman-Cisneros, E.; Enns, A. C.; Lawrence, S.

    2011-12-01

    Lunar wrinkle ridges (mare ridges) are positive-relief structures formed from compressional stress in basin-filling flood basalt deposits [1]. Previous workers have measured wrinkle ridge orientations and lengths to investigate their spatial distribution and infer basin-localized stress fields [2,3]. Although these plots include the most prominent mare ridges and their general trends, they may not have fully captured all of the ridges, particularly the smaller-scale ridges. Using Lunar Reconnaissance Orbiter Wide Angle Camera (WAC) global mosaics and derived topography (100m pixel scale) [4], we systematically remapped wrinkle ridges in Mare Serenitatis. By comparing two WAC mosaics with different lighting geometry, and shaded relief maps made from a WAC digital elevation model (DEM) [5], we observed that some ridge segments and some smaller ridges are not visible in previous structure maps [2,3]. In the past, mapping efforts were limited by a fixed Sun direction [6,7]. For systematic mapping we created three shaded relief maps from the WAC DEM with solar azimuth angles of 0°, 45°, and 90°, and a fourth map was created by combining the three shaded reliefs into one, using a simple averaging scheme. Along with the original WAC mosaic and the WAC DEM, these four datasets were imported into ArcGIS, and the mare ridges of Imbrium, Serenitatis, and Tranquillitatis were digitized from each of the six maps. Since the mare ridges are often divided into many ridge segments [8], each major component was digitized separately, as opposed to the ridge as a whole. This strategy enhanced our ability to analyze the lengths, orientations, and abundances of these ridges. After the initial mapping was completed, the six products were viewed together to identify and resolve discrepancies in order to produce a final wrinkle ridge map. Comparing this new mare ridge map with past lunar tectonic maps, we found that many mare ridges were not recorded in the previous works. It was noted
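The multi-azimuth shaded relief maps described above come from the standard hillshade formula, which renders a DEM for a chosen solar azimuth and elevation from per-pixel slope and aspect; averaging shades computed at several azimuths (here 0°, 45°, and 90°) reveals ridges of all orientations. A sketch of that standard formula (per-pixel slope/aspect computation from the DEM is assumed to be done separately):

```python
import math

# Standard hillshade: illumination of a surface element with the given
# slope and aspect (degrees) by a sun at the given azimuth/elevation.
def hillshade(slope_deg, aspect_deg, sun_azimuth_deg, sun_elev_deg):
    zenith = math.radians(90.0 - sun_elev_deg)
    slope = math.radians(slope_deg)
    azimuth = math.radians(sun_azimuth_deg)
    aspect = math.radians(aspect_deg)
    shade = (math.cos(zenith) * math.cos(slope)
             + math.sin(zenith) * math.sin(slope) * math.cos(azimuth - aspect))
    return max(shade, 0.0)  # clamp self-shadowed facets to 0
```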

  6. Development of ultra-fast 2D ion Doppler tomography using image intensified CMOS fast camera

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; Kuwahata, Akihiro; Yamanaka, Haruki; Inomoto, Michiaki; Ono, Yasushi; TS-group Team

    2015-11-01

    The world's fastest time-resolved 2D ion Doppler tomography diagnostic has been developed using a fast camera with a high-speed gated image intensifier (frame rate: 200 kfps; phosphor decay time: ~1 μs). The time evolution of line-integrated spectra is dispersed by an f = 1 m, F/8.3, g = 2400 L/mm Czerny-Turner polychromator, whose output is intensified and recorded by a high-speed camera with a spectral resolution of ~0.005 nm/pixel. The system can accommodate up to 36 (9 × 4) spatial points recorded at 5 μs time resolution. Tomographic reconstruction is applied to the line-integrated spectra, and time-resolved (5 μs/frame) local 2D ion temperature measurement has been achieved without any assumption of shot repeatability. Ion heating during the intermittent reconnection events that tend to occur in high guide field merging tokamak experiments was measured around the diffusion region in UTST. The measured 2D profile shows ion heating inside the acceleration channel of the reconnection outflow jet, at the stagnation point, and in the downstream region where the reconnected field forms a thick closed flux surface, as in MAST. The achieved maximum ion temperature increases as a function of Brec² and fits well with the MAST experiment, demonstrating a promising CS-less startup scenario for spherical tokamaks. This work is supported by JSPS KAKENHI Grant Numbers 15H05750 and 15K20921.

  7. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function from ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models used to predict the night sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We performed the experiments at two localities in Slovakia and Mexico by means of two professional digital single-lens reflex cameras operating with different lenses that limit the system's field of view to either 180° or 167°. The purpose of using two cameras was to identify variances between two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance to horizontal irradiance. Subsequently, the information on the fraction of the light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  8. AMICA: The First camera for Near- and Mid-Infrared Astronomical Imaging at Dome C

    NASA Astrophysics Data System (ADS)

    Straniero, O.; Dolci, M.; Valentini, A.; Valentini, G.; di Rico, G.; Ragni, M.; Giuliani, C.; di Cianno, A.; di Varano, I.; Corcione, L.; Bortoletto, F.; D'Alessandro, M.; Magrin, D.; Bonoli, C.; Giro, E.; Fantinel, D.; Zerbi, F. M.; Riva, A.; de Caprio, V.; Molinari, E.; Conconi, P.; Busso, M.; Tosti, G.; Abia, C. A.

    AMICA (Antarctic Multiband Infrared CAmera) is an instrument designed to perform astronomical imaging in the near- (1-5 μm) and mid- (5-27 μm) infrared wavelength regions. Equipped with two detectors, an InSb 256² and a Si:As 128² IBC, cooled to 35 and 7 K respectively, it will be the first instrument to investigate the potential of the Italian-French base Concordia for IR astronomy. The main technical challenge is represented by the extreme conditions of Dome C (T ≈ -90 °C, p ≈ 640 mbar). An environmental control system ensures the correct start-up, shut-down, and housekeeping of the various components of the camera. AMICA will be mounted on the IRAIT telescope and will perform survey-mode observations of the Southern sky. Its first task is to provide important site-quality data. Substantial contributions to the solution of fundamental astrophysical questions, such as those related to the late phases of stellar evolution and to star formation processes, are also expected.

  9. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles, and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  10. Benchmarking of depth of field for large out-of-plane deformations with single camera digital image correlation

    NASA Astrophysics Data System (ADS)

    Van Mieghem, Bart; Ivens, Jan; Van Bael, Albert

    2017-04-01

    A problem that arises when performing stereo digital image correlation in applications with large out-of-plane displacements is that the images may become unfocused. This unfocusing can result in correlation instabilities or inaccuracies. When performing DIC measurements and expecting large out-of-plane displacements, researchers either rely on their experience or use the equations from photography to estimate the parameters affecting the depth of field (DOF) of the camera. A limitation of the latter approach is that the definition of sharpness is a human-defined parameter and does not reflect the performance of the digital image correlation system. To get a more representative DOF value for DIC applications, a standardised testing method is presented here, making use of real camera and lens combinations as well as actual image correlation results. The method is based on experimental single-camera DIC measurements of a backwards-moving target. Correlation results from focused and unfocused images are compared, and a threshold value defines whether or not the correlation results are acceptable even if the images are (slightly) unfocused. By following the proposed approach, the complete DOF of a specific camera/lens combination as a function of the aperture setting and the distance from the camera to the target can be defined. The comparison between the theoretical and the experimental DOF results shows that the achievable DOF for DIC applications is larger than what theoretical calculations predict. Practically, this means that the cameras can be positioned closer to the target than expected from the theoretical approach. This leads to a gain in resolution and measurement accuracy.
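The photographic DOF equations the abstract refers to can be sketched as follows. The thin-lens form, the 0.03 mm circle of confusion, and the example numbers are conventional assumptions, not values taken from the paper.

```python
def depth_of_field(f_mm, N, s_mm, c_mm=0.03):
    """Near/far limits of acceptable focus from the classic photographic
    thin-lens equations (focal length f, f-number N, focus distance s,
    circle of confusion c; all lengths in mm)."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                    # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)     # near limit
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# e.g. a 50 mm lens at f/8 focused at 2 m
near, far = depth_of_field(f_mm=50.0, N=8.0, s_mm=2000.0)
dof = far - near   # total depth of field in mm (near ~1.68 m, far ~2.46 m)
```

The paper's point is that the experimentally acceptable DOF for DIC exceeds what this calculation predicts, because "sharp enough to correlate" is a looser criterion than the human-defined sharpness behind c.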

  11. Development of a soft x-ray plasma camera with a Fresnel zone plate to image laser produced plasmas

    NASA Astrophysics Data System (ADS)

    Kado, M.; Mori, M.; Nishiuchi, M.; Ishino, M.; Kawachi, T.

    2009-09-01

    A soft x-ray plasma camera operating at 3.35 nm in the water-window x-ray region has been developed and demonstrated by imaging gas jet plasmas of several species produced with a 10 TW Ti:sapphire laser. The plasma camera consists of a 300 nm thick Ag/Ti/Si3N4 x-ray band-pass filter with a bandwidth of 1.43 nm, to cut visible light and also to reduce the chromatic aberration of the Fresnel zone plate; a Fresnel zone plate with a diameter of 1 mm and an outermost zone width of 300 nm; and a soft x-ray CCD camera. The magnification of the plasma camera is 10. The soft x-ray plasma camera powered by a Fresnel zone plate is a very powerful tool for observing laser-produced plasmas, since it is 1000 times brighter and has 5 times higher spatial resolution than an ordinary x-ray pinhole camera. Soft x-ray images of helium, nitrogen, argon, krypton, and xenon gas jet plasmas were obtained while changing the gas pressure from 0.01 MPa to 1 MPa.

  12. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round-looking camera, owing to its suitability for automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. Both the traditional mechanical calibration method and the electronic adjustment mode of manually inputting offsets suffer from reliance on human eyes, inefficiency, and a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The camera produces a ring-shaped image bounded by two concentric circles: the inner boundary is a smaller circle and the outer boundary a bigger circle. The technique exploits exactly these characteristics. By recognizing the two circles with the Hough transform and calculating the center position, we obtain the accurate center of the image, that is, the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
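A minimal sketch of the center-finding step, using a Hough-style vote for circle centers at a known radius. This pure-NumPy toy (synthetic ring, fixed radius, integer accumulator) is an assumed simplification of the paper's Hough transform stage, which must also recover the radii of both circles.

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape, n_theta=360):
    """Each edge point votes for all centers lying `radius` away from it;
    the accumulator maximum is the most likely circle center."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), shape)

# Synthetic ring boundary: a circle of radius 20 centred at (row 50, col 40).
shape = (100, 100)
theta = np.linspace(0.0, 2.0 * np.pi, 720)
ys = np.round(50 + 20 * np.sin(theta)).astype(int)
xs = np.round(40 + 20 * np.cos(theta)).astype(int)
edges = sorted(set(zip(ys.tolist(), xs.tolist())))

center = hough_circle_center(edges, 20, shape)  # close to the true center (50, 40)
```

In the real system the same estimate, made for both concentric circles, yields the offset between the optical axis and the sensor center that is then written to the sensor over I2C.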

  13. Space-bandwidth extension in parallel phase-shifting digital holography using a four-channel polarization-imaging camera.

    PubMed

    Tahara, Tatsuki; Ito, Yasunori; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-07-15

    We propose a method for extending the space bandwidth (SBW) available for recording an object wave in parallel phase-shifting digital holography using a four-channel polarization-imaging camera. A linear spatial carrier of the reference wave is introduced to an optical setup of parallel four-step phase-shifting interferometry using a commercially available polarization-imaging camera that has four polarization-detection channels. Then a hologram required for parallel two-step phase shifting, which is a technique capable of recording the widest SBW in parallel phase shifting, can be obtained. The effectiveness of the proposed method was numerically and experimentally verified.

  14. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
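The cone-surface test at the heart of the back-projection can be sketched as a thresholded angular distance from each voxel to the event cone. The grid size, tolerance, and event geometry below are illustrative assumptions; a real implementation would work slice-by-slice with a solution matrix per image plane, as the abstract describes.

```python
import numpy as np

def backproject_cone(apex, axis, half_angle, grid, tol=0.02):
    """Flag voxels whose direction from the cone apex lies within `tol`
    (in cosine units) of the Compton cone surface."""
    v = grid - apex                       # vectors from apex to each voxel
    norm = np.linalg.norm(v, axis=-1)
    norm[norm == 0] = 1e-12               # guard against a voxel at the apex
    cos_t = (v @ axis) / norm             # cosine of angle to the cone axis
    return np.abs(cos_t - np.cos(half_angle)) < tol

# Toy image space: a 32^3 voxel grid spanning [0, 1]^3.
n = 32
ax = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)

apex = np.array([0.5, 0.5, -0.2])        # detector interaction position
axis_dir = np.array([0.0, 0.0, 1.0])     # scattered-photon axis (unit vector)
mask = backproject_cone(apex, axis_dir, np.deg2rad(30.0), grid)
```

Summing such masks over many detector events gives the three-dimensional back-projection image; the paper's contribution is a distance-proportional solution matrix and threshold that generate each slice's intersection curve quickly.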

  15. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx), and the Mars Reconnaissance Orbiter (MRO), showed that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological, and morphological processes, including eolian processes [9,10], mass wasting processes [11], changes of the polar caps [12], and impact cratering processes [13]. In addition, comparisons between Mariner, Viking, and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can have effects on the global heat balance and the circulation of winds, which can result in further surface changes [14,15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore suited to detect the frequency, extent, and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with a focus on the surface morphology, the geology and mineralogy, the role of liquid water on the surface and in the atmosphere, volcanism, and the proposed climate change throughout Martian history, and has improved our understanding of the evolution of Mars significantly [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data. However, while previews of the images are available, there is no possibility to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is

  16. An upgraded camera-based imaging system for mapping venous blood oxygenation in human skin tissue

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhang, Xiao; Qiu, Lina; Leotta, Daniel F.

    2016-07-01

    A camera-based imaging system was previously developed for mapping venous blood oxygenation in human skin. However, several limitations were realized in later applications, which could lead to either significant bias in the estimated oxygen saturation value or poor spatial resolution in the map of the oxygen saturation. To overcome these issues, an upgraded system was developed using improved modeling and image processing algorithms. In the modeling, Monte Carlo (MC) simulation was used to verify the effectiveness of the ratio-to-ratio method for semi-infinite and two-layer skin models, and then the relationship between the venous oxygen saturation and the ratio-to-ratio was determined. The improved image processing algorithms included surface curvature correction and motion compensation. The curvature correction is necessary when the imaged skin surface is uneven. The motion compensation is critical for the imaging system because surface motion is inevitable when the venous volume alteration is induced by cuff inflation. In addition to the modeling and image processing algorithms in the upgraded system, a ring light guide was used to achieve perpendicular and uniform incidence of light. Cross-polarization detection was also adopted to suppress surface specular reflection. The upgraded system was applied to mapping of venous oxygen saturation in the palm, opisthenar and forearm of human subjects. The spatial resolution of the oxygenation map achieved is much better than that of the original system. In addition, the mean values of the venous oxygen saturation for the three locations were verified with a commercial near-infrared spectroscopy system and were consistent with previously published data.

  17. Formation of the color image based on the vidicon TV camera

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Maltseva, Nadezhda K.; Dunaev, Vadim I.

    2016-09-01

    The main goal of nuclear safety is protection against the radiation arising at a nuclear power plant (NPP) during normal operation of nuclear installations or as a result of accidents on them. The most important task in any activity aimed at maintaining an NPP is the constant maintenance of the desired level of safety and reliability. Periodic non-destructive testing during operation provides the most relevant criteria for the integrity of the primary circuit pressure components. The objective of this study is to develop a system for forming a color image with a vidicon-based television camera, which is used to conduct non-destructive testing under the increased radiation conditions at NPPs.

  18. Results of shuttle EMU thermal vacuum tests incorporating an infrared imaging camera data acquisition system

    NASA Technical Reports Server (NTRS)

    Anderson, James E.; Tepper, Edward H.; Trevino, Louis A.

    1991-01-01

    Manned tests in Chamber B at NASA JSC were conducted in May and June of 1990 to better quantify the Space Shuttle Extravehicular Mobility Unit's (EMU) thermal performance in the cold environmental extremes of space. Use of an infrared imaging camera with real-time video monitoring of the output significantly added to the scope, quality and interpretation of the test conduct and data acquisition. Results of this test program have been effective in the thermal certification of a new insulation configuration and the '5000 Series' glove. In addition, the acceptable thermal performance of flight garments with visually deteriorated insulation was successfully demonstrated, thereby saving significant inspection and garment replacement cost. This test program also established a new method for collecting data vital to improving crew thermal comfort in a cold environment.

  19. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256 × 256 square pixels arranged with a 55 μm pitch (sensitive area 14.08 × 14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5 × 10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3 × 10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. 
Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter

  20. Star Observations by Asteroid Multiband Imaging Camera (AMICA) on Hayabusa (MUSES-C) Cruising Phase

    NASA Astrophysics Data System (ADS)

    Saito, J.; Hashimoto, T.; Kubota, T.; Hayabusa AMICA Team

    MUSES-C is the first Japanese asteroid mission and also a technology demonstration mission to the S-type asteroid 25143 Itokawa (1998 SF36). It was launched on May 9, 2003, and renamed Hayabusa after the spacecraft was confirmed to be on its interplanetary orbit. The spacecraft's trajectory includes an Earth swingby for a gravity assist on the way to Itokawa in May 2004. Arrival at Itokawa is scheduled for summer 2005. During the visit to Itokawa, remote-sensing observations with AMICA, NIRS (Near Infrared Spectrometer), XRS (X-ray Fluorescence Spectrometer), and LIDAR will be performed, and the spacecraft will descend and collect surface samples at touchdown. The captured asteroid sample will be returned to the Earth in the middle of 2007. The telescopic optical navigation camera (ONC-T), with seven bandpass filters (and one wide-band filter) and polarizers, is called AMICA (Asteroid Multiband Imaging CAmera) when ONC-T is used for scientific observations. AMICA's seven bandpass filters are nearly equivalent to the seven filters of the ECAS (Eight Color Asteroid Survey) system. Obtained spectroscopic data will be compared with previously obtained ECAS observations. AMICA also has four polarizers, which are located on one edge of the CCD chip (covering 1.1 × 1.1 degrees each). Using the polarizers of AMICA, we can obtain polarimetric information on the target asteroid's surface. Since last November, we have planned test observations of some stars and planets by AMICA and have successfully obtained these images. Here, we briefly report these observations and their calibration against ground-based observational data. In addition, we also present the current status of AMICA.

  1. Human detection based on the generation of a background image by using a far-infrared light camera.

    PubMed

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-03-19

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance, and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows, and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as low image resolution, low contrast, and the high noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering, and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation, and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area; this procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and the ratio of the height to width of the candidate regions, considering the camera viewing direction and perspective
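The background-generation and adaptive-threshold ideas in the first two aspects can be sketched as below. This is a loose reconstruction, not the paper's exact pipeline: the median-based background, the hot-pixel replacement standing in for "region erasing", and the brightness-adaptive threshold form are all assumptions.

```python
import numpy as np

def generate_background(frames, max_level=200.0):
    """Median across frames suppresses transient (moving) objects; pixels
    still above `max_level` (assumed human-hot) are replaced by the mean
    of the cooler pixels, a rough stand-in for region erasing."""
    bg = np.median(frames, axis=0)
    hot = bg > max_level
    bg[hot] = bg[~hot].mean()
    return bg

def detect_candidates(frame, bg):
    """Adaptive threshold on |frame - bg|, scaled by background brightness
    (the scaling form here is an assumption)."""
    diff = np.abs(frame.astype(float) - bg)
    thresh = 20.0 + 0.1 * bg.mean()
    return diff > thresh

# Toy sequence: a static warm background with a hot "human" blob moving across it.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 2.0, size=(10, 64, 64))
for i in range(10):
    frames[i, 20:40, 5 + 5 * i:15 + 5 * i] += 80.0

bg = generate_background(frames)
mask = detect_candidates(frames[5], bg)   # True where the blob sits in frame 5
```

Because the blob occupies any given pixel in only a couple of frames, the temporal median recovers the background there, and the difference image isolates the moving region.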

  2. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    PubMed Central

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance, and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows, and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as low image resolution, low contrast, and the high noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering, and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation, and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area; this procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and the ratio of the height to width of the candidate regions, considering the camera viewing direction and perspective

  3. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    PubMed Central

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which provide only a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667

  4. Influence of the Viewing Geometry Within Hyperspectral Images Retrieved from Uav Snapshot Cameras

    NASA Astrophysics Data System (ADS)

    Aasen, Helge

    2016-06-01

    Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects might appear differently depending on their position within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates the influence of the different viewing geometries within an image on the apparent hyperspectral reflectance retrieved by these sensors. Additionally, it evaluates how hyperspectral vegetation indices like the NDVI are affected by the angular effects within a single image and whether the viewing geometry influences the apparent heterogeneity within an area of interest. The study is carried out for a barley canopy at booting stage. The results show significant influences of the position of the area of interest within the image. The red region of the spectrum is more influenced by the position than the near infrared. The ability of the NDVI to compensate for these effects was limited to capturing positions close to nadir. The apparent heterogeneity of the area of interest is highest close to nadir.
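The NDVI mentioned above is a simple normalized band ratio; a minimal per-pixel implementation (with illustrative reflectance values, not data from the study) is:

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalised Difference Vegetation Index, computed per pixel."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)   # eps guards against 0/0

# Toy reflectance bands: a vegetated pixel (high NIR) and a bare-soil pixel.
red = np.array([[0.05, 0.20]])
nir = np.array([[0.45, 0.25]])
values = ndvi(red, nir)   # vegetation pixel ~0.8, bare-soil pixel ~0.11
```

Because NDVI is a ratio of bands measured through the same geometry, some angular effects partially cancel, which is why the study tests how far the index compensates for off-nadir viewing positions.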

  5. Versatile illumination platform and fast optical switch to give standard observation camera gated active imaging capacity

    NASA Astrophysics Data System (ADS)

    Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie

    2015-10-01

    CILAS, a subsidiary of Airbus Defense and Space, develops, manufactures, and sells laser-based optronics equipment for defense and homeland security applications. Part of its activity is related to active systems for threat detection, recognition, and identification. Active surveillance and active imaging systems are often required to achieve identification capacity for long-range observation in adverse conditions. To ease the deployment of active imaging systems, which are often complex and expensive, CILAS proposes a new concept. It consists of two apparatus working together. On one side, a patented versatile laser platform enables high-peak-power laser illumination for long-range observation. On the other side, a small camera add-on works as a fast optical switch to select only photons with a specific time of flight. Together, the versatile illumination platform and the fast optical switch form an independent unit, the so-called "flash module", giving virtually any passive observation system gated active imaging capacity in the NIR and SWIR.

  6. A survey of Martian dust devil activity using Mars Global Surveyor Mars Orbiter Camera images

    NASA Astrophysics Data System (ADS)

    Fisher, Jenny A.; Richardson, Mark I.; Newman, Claire E.; Szwast, Mark A.; Graf, Chelsea; Basu, Shabari; Ewald, Shawn P.; Toigo, Anthony D.; Wilson, R. John

    2005-03-01

    A survey of dust devils using the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide- and narrow-angle (WA and NA) images has been undertaken. The survey comprises two parts: (1) sampling of nine broad regions from September 1997 to July 2001 and (2) a focused seasonal monitoring of variability in the Amazonis region, an active dust devil site, from March 2001 to April 2004. For part 1, dust devils were identified in NA and WA images, and dust devil tracks were identified in NA images. Great spatial variability in dust devil occurrence is highlighted, with Amazonis Planitia being the most active region examined. Other active regions included Cimmerium, Sinai, and Solis. Numerous dust devil tracks, but very few dust devils, were observed in Casius. This may suggest dust devils here occur at local times other than that of the MGS orbit (~2 pm). Alternatively, variations in surface properties may affect the ability of dust devils to leave visible tracks. The seasonal campaign within Amazonis shows a relatively smooth variation of dust devil activity with season, peaking in mid northern summer and falling to zero in southern spring and summer. This pattern of activity correlates well with the boundary layer maximum depth and hence the vigor of convection. Global maps of boundary layer depth and surface temperature do not predict that Amazonis should be especially active, potentially suggesting a role for mesoscale circulations. Measurement of observed dust devils yields heights of up to 8 km and widths in excess of 0.5 km.

  7. Digital chemiluminescence imaging of DNA sequencing blots using a charge-coupled device camera.

    PubMed Central

    Karger, A E; Weiss, R; Gesteland, R F

    1992-01-01

    Digital chemiluminescence imaging with a cryogenically cooled charge-coupled device (CCD) camera is used to visualize DNA sequencing fragments covalently bound to a blotting membrane. The detection is based on DNA hybridization with an alkaline phosphatase (AP)-labeled oligodeoxyribonucleotide probe and AP-triggered chemiluminescence of the substrate 3-(2'-spiro-adamantane)-4-methoxy-4-(3"-phosphoryloxy)phenyl-1,2-dioxetane (AMPPD). The detection using a direct AP-oligonucleotide conjugate is compared to the secondary detection of biotinylated oligonucleotides with respect to their sensitivity and nonspecific binding to the nylon membrane by quantitative imaging. Using the direct oligonucleotide-AP conjugate as a hybridization probe, sub-attomol (0.5 pg of 2.7 kb pUC plasmid DNA) quantities of membrane-bound DNA are detectable with 30 min CCD exposures. Detection using the biotinylated probe in combination with streptavidin-AP was found to be background limited by nonspecific binding of streptavidin-AP and the oligo(biotin-11-dUTP) label in equal proportions. In contrast, the nonspecific background of AP-labeled oligonucleotide is indistinguishable from that seen with 5'-32P-label, in that respect making AP an ideal enzymatic label. The effects of hybridization time, probe concentration, and the presence of luminescence enhancers on the detection of plasmid DNA were investigated. PMID:1480487

  8. Measurement of effective temperature range of fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements of TIC as used by the fire service. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods, based on the harsh operating environment and limitations of use particular to the fire service, has been proposed for inclusion in the standard. The Effective Temperature Range (ETR) measures the range of temperatures that a TIC can view while still providing useful information to the user. Specifically, extreme heat in the field of view tends to inhibit a TIC's ability to discern surfaces having intermediate temperatures, such as victims and fire fighters. The ETR measures the contrast of a target having alternating 25 °C and 30 °C bars while an increasing temperature range is imposed on other surfaces in the field of view. The ETR also indicates the thermal conditions that trigger a shift in integration time common to TIC employing microbolometer sensors. The reported values for this imaging performance metric are the hot surface temperature range within which the TIC provides adequate bar contrast, and the hot surface temperature at which the TIC shifts integration time.
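    One plausible way to quantify the bar contrast underlying the ETR metric is the Michelson definition; a sketch under that assumption (the standard's actual formula is not given in the abstract and may differ):

```python
def michelson_contrast(hot, cold):
    """Michelson contrast between the apparent signals of two bar levels."""
    return (hot - cold) / (hot + cold)

# Hypothetical apparent signals for the 30 degC and 25 degC bars; as hotter
# surfaces enter the field of view and compress the displayed dynamic range,
# this contrast collapsing toward zero would mark the end of the ETR.
print(round(michelson_contrast(30.0, 25.0), 4))   # → 0.0909
```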

  9. Examination of the semi-automatic calculation technique of vegetation cover rate by digital camera images.

    NASA Astrophysics Data System (ADS)

    Takemine, S.; Rikimaru, A.; Takahashi, K.

    Rice is one of the staple foods in the world. High-quality rice production requires periodically collecting rice growth data to control the growth of rice. The height of the plant, the number of stems, and the color of the leaves are well-known parameters that indicate rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a laborsaving method for rice growth diagnosis was proposed which is based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, a vegetation cover rate calculation that depends only on automatic binarization may underestimate the vegetation cover rate as the rice grows. In this paper, a calculation method of vegetation cover rate is proposed which is based on automatic binarization and also refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method. The vegetation cover rate of both methods was then compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed
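    The automatic binarization step could be realized with Otsu's method; a minimal sketch on synthetic pixel values (illustrative only — the paper does not specify which binarization algorithm it uses):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's automatic binarization: choose the threshold that maximizes
    the between-class variance of the grayscale histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for thr in range(1, 256):
        w0, w1 = p[:thr].sum(), p[thr:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(thr) * p[:thr]).sum() / w0
        mu1 = (np.arange(thr, 256) * p[thr:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = thr, var
    return best_t

# Synthetic pixels: dark soil (~40) and bright rice plants (~200).
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(40, 5, 500),
                         rng.normal(200, 5, 500)]).clip(0, 255)
t = otsu_threshold(pixels)
cover_rate = (pixels > t).mean()   # fraction of pixels classified as plant
```

    The failure mode the paper addresses is visible in this framing: as the canopy closes and the histogram becomes less bimodal, a purely automatic threshold drifts, which is why the proposed method consults growth hysteresis information.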

  10. HERSCHEL/SCORE, imaging the solar corona in visible and EUV light: CCD camera characterization.

    PubMed

    Pancrazzi, M; Focardi, M; Landini, F; Romoli, M; Fineschi, S; Gherardi, A; Pace, E; Massone, G; Antonucci, E; Moses, D; Newmark, J; Wang, D; Rossi, G

    2010-07-01

    The HERSCHEL (helium resonant scattering in the corona and heliosphere) experiment is a rocket mission that was successfully launched last September from White Sands Missile Range, New Mexico, USA. HERSCHEL was conceived to investigate the solar corona in the extreme UV (EUV) and in the visible broadband polarized brightness and provided, for the first time, a global map of helium in the solar environment. The HERSCHEL payload consisted of a telescope, the HERSCHEL EUV Imaging Telescope (HEIT), and two coronagraphs, HECOR (helium coronagraph) and SCORE (sounding coronagraph experiment). The SCORE instrument was designed and developed mainly by Italian research institutes; it is an imaging coronagraph to observe the solar corona from 1.4 to 4 solar radii. SCORE has two detectors for the EUV lines at 121.6 nm (HI) and 30.4 nm (HeII) and the visible broadband polarized brightness. The SCORE UV detector is an intensified CCD with a microchannel plate coupled to a CCD through a fiber-optic bundle. The SCORE visible light detector is a frame-transfer CCD coupled to a polarimeter based on a liquid crystal variable retarder plate. The SCORE coronagraph is described together with the performance of the cameras for imaging the solar corona.

  11. Astrophysical Research Consortium Telescope Imaging Camera (ARCTIC) facility optical imager for the Apache Point Observatory 3.5m telescope

    NASA Astrophysics Data System (ADS)

    Huehnerhoff, Joseph; Ketzeback, William; Bradley, Alaina; Dembicky, Jack; Doughty, Caitlin; Hawley, Suzanne; Johnson, Courtney; Klaene, Mark; Leon, Ed; McMillan, Russet; Owen, Russell; Sayres, Conor; Sheen, Tyler; Shugart, Alysha

    2016-08-01

    The Astrophysical Research Consortium Telescope Imaging Camera, ARCTIC, is a new optical imaging camera now in use at the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory (APO). As a facility instrument, the design criteria broadly encompassed many current and future science opportunities, and the components were built for quick repair or replacement, to minimize down-time. Examples include a quick-change shutter, filter drive components accessible from the exterior, and redundant amplifiers on the detector. The detector is a Semiconductor Technology Associates (STA) device with several key properties (e.g. high quantum efficiency, low read noise, quick readout, minimal fringing, operational bandpass 350-950 nm). Focal reducing optics (f/10.3 to f/8.0) were built to control aberrations over a 7.8'x7.8' field, with a plate scale of 0.11" per 15 micron pixel. The instrument body and dewar were designed to be simple and robust, with only two components to the structure forward of the dewar, which in turn has minimal feedthroughs and permeation areas and holds a vacuum of <10^-8 Torr. A custom shutter was also designed, using pneumatics as the driving force. This device provides exceptional performance and reduces heat near the optical path. Measured performance is repeatable at the 2 ms level and offers field uniformity to the same level of precision. The ARCTIC facility imager will provide excellent science capability with robust operation and minimal maintenance for the next decade or more at APO.
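    The quoted plate scale follows directly from the focal ratio; a quick arithmetic check (the 3.5 m aperture and f/8 reduced beam are from the abstract, while the 15 μm pixel pitch is an assumption consistent with the stated 0.11"/pixel):

```python
ARCSEC_PER_RAD = 206265.0   # arcseconds in one radian

aperture_m = 3.5
f_ratio = 8.0               # after the focal reducer
pixel_um = 15.0             # assumed pixel pitch

focal_length_m = aperture_m * f_ratio                       # 28 m
scale = ARCSEC_PER_RAD * pixel_um * 1e-6 / focal_length_m   # arcsec per pixel
print(round(scale, 2))   # → 0.11
```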

  12. In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2010-02-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310 nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge-enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with an InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310 nm, and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR), to calculate lesion severity. Lesions were then classified into 4 categories according to lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge-enhanced CMOS camera, equipped with the larger array and smaller pixels, yielded higher contrast values than the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  13. Determining patient 6-degrees-of-freedom motion from stereo infrared cameras during supine medical imaging

    NASA Astrophysics Data System (ADS)

    Beach, Richard D.; Feng, Bing; Shazeeb, Mohammed S.; King, Michael A.

    2006-03-01

    Patient motion during SPECT acquisition causes inconsistent projection data and reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT. The tracking of motion by infrared monitoring spherical reflectors (markers) on the patient's surface can provide 6-Degrees-of-Freedom (6-DOF) motion information capable of providing clinically robust correction. Object rigid-body motion can be described by 3 translational DOF and 3 rotational DOF. Polaris marker position information obtained by stereo infrared cameras requires algorithmic processing to correctly record the tracked markers, and to calibrate and map Polaris co-ordinate data into the SPECT co-ordinate system. Marker data then requires processing to determine the rotational and translational 6-DOF motion to ultimately be used for SPECT image corrections. This processing utilizes an algorithm involving least-squares fitting, to each other, of two 3-D point sets using singular value decomposition (SVD) resulting in the rotation matrix and translation of the rigid body centroid. We have demonstrated the ability to monitor 12 clinical patients as well as 7 markers on 2 elastic belts worn by a volunteer while intentionally moving, and determined the 3 axis Euclidian rotation angles and centroid translations. An anthropomorphic phantom with Tc-99m added to the heart, liver, and body was simultaneously SPECT imaged and motion tracked using 4 rigidly mounted markers. The determined rotation matrix and translation information was used to correct the image resulting in virtually identical "no motion" and "corrected" images. We plan to initiate routine 6-DOF tracking of patient motion during SPECT imaging in the future.
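    The SVD-based least-squares fit of two 3-D point sets described in this abstract is commonly known as the Kabsch algorithm; a minimal sketch with made-up marker coordinates (not clinical data):

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid-body fit (Kabsch/SVD): find rotation R and
    translation t such that Q ~ R @ P + t for 3xN point sets P and Q."""
    cp = P.mean(axis=1, keepdims=True)      # centroid of P
    cq = Q.mean(axis=1, keepdims=True)      # centroid of Q
    H = (P - cp) @ (Q - cq).T               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical marker positions (3 x N), rotated 10 deg about z and shifted.
P = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.],
                   [np.sin(a),  np.cos(a), 0.],
                   [0., 0., 1.]])
Q = R_true @ P + np.array([[1.0], [2.0], [3.0]])
R, t = rigid_fit(P, Q)   # recovers R_true and the (1, 2, 3) translation
```

    The sign guard keeps the result a proper rotation rather than a reflection, which matters when marker positions are noisy.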

  14. Speckle Imaging of Binary Stars with the RYTSI Speckle Camera and the Kitt Peak Mini-Mosaic Imager

    NASA Astrophysics Data System (ADS)

    Horch, E. P.; Riedel, H.; van Altena, W. F.; Meyer, R. D.; Corson, C.

    2004-05-01

    The RYTSI speckle camera, completed in 2001 and currently located at the WIYN Observatory at Kitt Peak, is a device that allows speckle images to be recorded on any large-format CCD device. It uses a two-axis galvanometric scanning mirror system to move the location of the target in a serpentine step-and-stop pattern, thus building up a sequence of speckle images that covers the CCD chip prior to full-frame readout. In February of 2004, the RYTSI device was combined with the Kitt Peak Mini-Mosaic imager for speckle observations at the WIYN 3.5 m Telescope. The mechanics and successes of that observing run are reported, including some diffraction-limited image reconstructions of binary stars. Estimates of the capabilities of the RYTSI/Mini-Mosaic combination based on this experience are also discussed. This work is funded by NSF Grant AST-0307450. The WIYN Telescope is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories.

  15. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite will have covered a distance of about 14 km on the ground and the earth will have rotated through an angle of 30". Yaw steering is done to compensate for the earth rotation effects, thus ensuring a first-level registration between the bands. But this will not achieve perfect co-registration because of attitude fluctuations, satellite movement, terrain topography, PSM steering, and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to do an automatic band-to-band registration of the Liss-4 MX image of Resourcesat-2 at Level 1A. The algorithm uses the principles of photogrammetric collinearity equations. The model employs an orbit trajectory and attitude fitting with polynomials, followed by direct geo-referencing with a global DEM, with which every pixel in the middle band is mapped to a particular position on the surface of the earth for the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform a relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. Accurate calculation of the exterior orientation parameters with

  16. Two Years of Digital Terrain Model Production Using the Lunar Reconnaissance Orbiter Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Burns, K.; Robinson, M. S.; Speyerer, E.; LROC Science Team

    2011-12-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to gather stereo observations with the Narrow Angle Camera (NAC). These stereo observations are used to generate digital terrain models (DTMs). The NAC has a pixel scale of 0.5 to 2.0 meters but was not designed for stereo observations and thus requires the spacecraft to roll off-nadir to acquire these images. Slews interfere with the data collection of the other instruments, so opportunities are currently limited to four per day. Arizona State University has produced DTMs from 95 stereo pairs for 11 Constellation Project (CxP) sites (Aristarchus, Copernicus crater, Gruithuisen domes, Hortensius domes, Ina D-caldera, Lichtenberg crater, Mare Ingenii, Marius hills, Reiner Gamma, South Pole-Aitken Rim, Sulpicius Gallus) as well as 30 other regions of scientific interest (including: Bhabha crater, highest and lowest elevation points, Highland Ponds, Kugler Anuchin, Linne Crater, Planck Crater, Slipher crater, Sears Crater, Mandel'shtam Crater, Virtanen Graben, Compton/Belkovich, Rumker Domes, King Crater, Luna 16/20/23/24 landing sites, Ranger 6 landing site, Wiener F Crater, Apollo 11/14/15/17, fresh craters, impact melt flows, Larmor Q crater, Mare Tranquillitatis pit, Hansteen Alpha, Moore F Crater, and Lassell Massif). To generate DTMs, the USGS ISIS software and SOCET SET® from BAE Systems are used. To increase the absolute accuracy of the DTMs, data obtained from the Lunar Orbiter Laser Altimeter (LOLA) is used to coregister the NAC images and define the geodetic reference frame. NAC DTMs have been used in examination of several sites, e.g. Compton-Belkovich, Marius Hills and Ina D-caldera [1-3]. LROC will continue to acquire high-resolution stereo images throughout the science phase of the mission and any extended mission opportunities, thus providing a vital dataset for scientific research as well as future human and robotic exploration. [1] B.L. Jolliff (2011) Nature

  17. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, and to give some indication of present weather. Similarly, during springtime, the camera images show the changes in the ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  18. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    PubMed

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-08-02

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software.

  19. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e-, and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood, it can be greatly reduced during ground
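    The quoted read-noise and full-well numbers can be related to the SNR figures with the standard CCD noise model SNR = S / sqrt(S + RN^2), combining Poisson shot noise with Gaussian read noise; this model is an assumption for illustration, not stated in the abstract:

```python
import math

def snr(signal_e, read_noise_e):
    """CCD signal-to-noise ratio: Poisson shot noise plus Gaussian read noise,
    ignoring dark current and fixed-pattern terms."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

# WAC full well is 47,200 e- with 72 e- read noise; near full well this
# simple model gives an SNR of roughly 206, the same order of magnitude
# as the quoted figures (lower signal levels give lower SNR, e.g. ~140).
print(round(snr(47200, 72)))   # → 206
```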

  20. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.

    PubMed

    Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong

    2016-04-01

    Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system: multi-channel sensation, just noticeable blur, and the contrast sensitivity function, used to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human visual system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
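    The ROC analysis reported above ultimately ranks quality scores against binary labels; a minimal AUC computation via the Mann-Whitney rank identity, with made-up scores rather than the paper's data or classifier:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(score of a random positive > score of a random negative)."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties count half.
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Hypothetical quality scores for good (1) vs. distorted (0) fundus images.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(y, s)   # 8 of 9 positive/negative pairs are correctly ordered
```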

  1. Resolved Spectrophotometric Properties of the Ceres Surface from Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

    Schroeder, Stefan; Mottola, Stefano; Carsenty, Uri; Jaumann, Ralf; Li, Jian-Yang; Palmer, Eric; Pieters, Carle; Preusker, Frank; Stephan, Katrin; Raymond, Carol; Russell, Christopher T.

    2016-10-01

    We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected images acquired on approach to Ceres were assembled into global maps of albedo and color. The albedo map is dominated by a large, circular, bright basin-like feature, known from HST images, and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta. The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters appear to be physically smooth rather than rough. Albedo, blue color, and physical smoothness all appear to be indicators of youth. The physical smoothness of some blue terrains is consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We propose that the color of blue ejecta derives from an originally ice-rich condition, which implies the presence of sub-surface deposits of water ice. Space weathering may be responsible for the fading of the blue color over time. The large positive albedo feature of which the Dantu crater forms the northern part may be an ancient impact basin, or palimpsest, whose relief has mostly disappeared. Its visible color and phase curve are similar to those of Ceres average. Occator crater has several bright spots. The center spot is the brightest with a visual normal albedo of six times Ceres average and has a red color. Its scattering properties are consistent with those of a particulate surface deposit of high albedo. The less

  2. Performance Evaluations and Quality Validation System for Optical Gas Imaging Cameras That Visualize Fugitive Hydrocarbon Gas Emissions

    EPA Science Inventory

    Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...

  3. The implementation of a thermal imaging camera for testing the temperature of the cutting zone in turning

    NASA Astrophysics Data System (ADS)

    Ślusarczyk, Ł.

    2016-09-01

    The paper presents research on the temperature distribution in the cutting zone when turning AMS 5643 steel, measured with a thermal imaging camera. The experimental studies were used to verify the material model employed in simulation studies for the optimization of cutting data.

  4. Radioisotope guided surgery with imaging probe, a hand-held high-resolution gamma camera

    NASA Astrophysics Data System (ADS)

    Soluri, A.; Trotta, C.; Scopinaro, F.; Tofani, A.; D'Alessandria, C.; Pasta, V.; Stella, S.; Massari, R.

    2007-12-01

    Since 1997, our physics group, together with nuclear physicians, has studied imaging probes (IP): hand-held, high-resolution gamma cameras for radio-guided surgery (RGS). The present work aims to verify the usefulness of two updated IPs in different surgical operations. Forty patients scheduled for breast cancer sentinel node (SN) biopsy, five patients with nodal recurrence of thyroid cancer, seven patients with parathyro