Science.gov

Sample records for camera lroc images

  1. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel in the visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future human lunar exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and sunlit polar regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  2. Retrieving lunar topography from multispectral LROC images

    NASA Astrophysics Data System (ADS)

    Korokhin, Viktor V.; Velikodsky, Yuri I.; Shalygin, Eugene V.; Shkuratov, Yuriy G.; Kaydash, Vadym G.; Videen, Gorden

    2014-03-01

A technique for retrieving information about the lunar topography from any individual multispectral LROC Wide Angle Camera (WAC) image has been developed. This is possible because images acquired at different wavelengths correspond to different viewing angles, while the influence of color differences between the images on the parallax estimates is small. The method provides Digital Elevation Models (DEMs) with precision comparable to the global lunar 100 m raster DTM derived from the LROC WAC stereo model (GLD100), and potentially allows one to obtain elevation maps with better horizontal resolution than those of the GLD100. An empirical distortion model for the LROC WAC has been developed and used for correction of the initial WAC images. In contrast to the standard pre-flight model, our model compensates almost fully for radial distortion, decentering of the optics, and tilt of the CCD array. The DEMs obtained using our approach in some cases exhibit real morphological details that are invisible in GLD100 maps. Thus, our method provides additional independent information about the lunar topography. Because our elevation maps have the same projection as the initial images, they allow valid corrections of these images for topographic effects (i.e., orthorectification), in contrast to the GLD100, whose coordinate referencing may differ slightly from that of individual WAC images.
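The band-to-band parallax geometry described above can be illustrated with a minimal sketch: a disparity measured between two co-registered WAC band images, which look at the surface along slightly different angles, maps to a relative height. The function and the angle/scale values below are illustrative assumptions, not the authors' pipeline.

```python
import math

def height_from_band_parallax(disparity_px, pixel_scale_m, e1_deg, e2_deg):
    """Relative height from the disparity of a feature between two WAC
    band images.

    disparity_px  : apparent offset of the feature between the bands (pixels)
    pixel_scale_m : ground sample distance (~100 m for the visible WAC bands)
    e1_deg, e2_deg: emission (look) angles of the two band lines

    Assumes a flat reference surface and rigid co-registration; a sketch
    of the triangulation geometry only.
    """
    dt = math.tan(math.radians(e1_deg)) - math.tan(math.radians(e2_deg))
    if abs(dt) < 1e-9:
        raise ValueError("band look angles are too similar to triangulate")
    return disparity_px * pixel_scale_m / dt
```

For example, a half-pixel disparity at 100 m/pixel between bands viewed at 20° and 10° emission corresponds to a relative height of roughly 270 m, which shows why sub-pixel matching is needed for useful DEM precision.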

  3. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2015-09-01

Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb; these are important for imaging shadowed craters, studying ~1 meter size objects, and photometry, respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000, but a signal-dependent additive correction is required and applied for DN < 600. A predictive model of detector temperature and dark level was developed to command the dark level offset. This avoids images with a cutoff at DN = 0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken by the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
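The correction order described above (background/dark subtraction, a signal-dependent additive correction below the linear range, flatfielding, then absolute scaling to radiance factor) can be sketched as follows. The function name and the nonlinearity and gain constants are illustrative placeholders, not the actual flight calibration coefficients.

```python
import numpy as np

def calibrate_nac(raw_dn, dark_level, flatfield, iof_gain,
                  lin_knee=600.0, lin_coeff=2.0e-5):
    """Sketch of an NAC-style calibration sequence.

    raw_dn     : 2-D array of raw companded-then-decompanded DN
    dark_level : background/dark level to subtract
    flatfield  : pixel-to-pixel relative response (same shape as raw_dn)
    iof_gain   : absolute scale from corrected DN to I/F

    lin_coeff and iof_gain are hypothetical values for illustration.
    """
    dn = raw_dn.astype(float) - dark_level           # background subtraction
    # signal-dependent additive correction below the linear range
    low = dn < lin_knee
    dn[low] += lin_coeff * (lin_knee - dn[low]) ** 2
    dn /= flatfield                                   # flatfield correction
    return dn * iof_gain                              # DN -> radiance factor
```

The additive (rather than multiplicative) low-signal term mirrors the abstract's statement that the detector is linear above DN = 600 but needs a signal-dependent additive correction below it.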

  4. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide Angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  5. Preliminary Mapping of Permanently Shadowed and Sunlit Regions Using the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Koeber, S.; Robinson, M. S.

    2010-12-01

    The spin axis of the Moon is tilted by only 1.5° (compared with the Earth's 23.5°), leaving some areas near the poles in permanent shadow while other nearby regions remain sunlit for a majority of the year. Theory, radar data, neutron measurements, and Lunar CRater Observation and Sensing Satellite (LCROSS) observations suggest that volatiles may be present in the cold traps created inside these permanently shadowed regions. Meanwhile, areas of near-permanent illumination are prime locations for future lunar outposts due to benign thermal conditions and near-constant solar power. The Lunar Reconnaissance Orbiter (LRO) has two imaging systems that provide medium- and high-resolution views of the poles. During almost every orbit the LROC Wide Angle Camera (WAC) acquires images at 100 m/pixel of the polar region (80° to 90° north and south latitude). In addition, the LROC Narrow Angle Camera (NAC) targets selected regions of interest at 0.7 to 1.5 m/pixel [Robinson et al., 2010]. During the first 11 months of the nominal mission, LROC acquired almost 6,000 WAC images and over 7,300 NAC images of the polar regions (within 2° of each pole). By analyzing this time series of WAC and NAC images, regions of permanent shadow and permanent or near-permanent illumination can be quantified. The LROC Team is producing several reduced data products that graphically illustrate the illumination conditions of the polar regions. Illumination movie sequences are being produced that show how the lighting conditions change over a calendar year. Each frame of the movie sequence is a polar stereographic projected WAC image showing the lighting conditions at that moment. With the WAC's wide field of view (~100 km at an altitude of 50 km), each frame has repeat coverage between 88° and 90° at each pole. The same WAC images are also being used to develop multi-temporal illumination maps that show the percentage of time each 100 m × 100 m area is illuminated.
These maps are derived by stacking all the WAC frames, selecting a threshold to determine if the surface is illuminated, and summing the resulting binary images. In addition, mosaics of NAC images are being produced for regions of interest at a scale of 0.7 to 1.5 m/pixel. The mosaics produced so far have revealed small illuminated surfaces at the tens-of-meters scale that were previously thought to be shadowed during that time. The LROC dataset of the polar regions complements previous illumination analyses of Clementine images [Bussey et al., 1999], Kaguya topography [Bussey et al., 2010], and the current efforts underway by the Lunar Orbiter Laser Altimeter (LOLA) Team [Mazarico et al., 2010], and provides an important new dataset for science and exploration. References: Bussey et al. (1999), Illumination conditions at the lunar south pole, Geophysical Research Letters, 26(9), 1187-1190. Bussey et al. (2010), Illumination conditions of the south pole of the Moon derived from Kaguya topography, Icarus, 208, 558-564. Mazarico et al. (2010), Illumination of the lunar poles from the Lunar Orbiter Laser Altimeter (LOLA) Topography Data, paper presented at 41st LPSC, Houston, TX. Robinson et al. (2010), Lunar Reconnaissance Orbiter Camera (LROC) Instrument Overview, Space Sci Rev, 150, 81-124.
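The stack-threshold-sum procedure for the multi-temporal illumination maps can be sketched as follows; a minimal illustration of the method as described, with the threshold value left as an input rather than the actual one used by the LROC Team.

```python
import numpy as np

def illumination_map(frames, threshold):
    """Percentage of time each map pixel is illuminated.

    frames    : iterable of co-registered, map-projected images (2-D arrays)
    threshold : value above which a pixel counts as lit

    Mirrors the procedure described above: threshold each frame into a
    binary lit/shadow image, sum the stack, and normalize to percent.
    """
    lit = None
    n = 0
    for f in frames:
        b = np.asarray(f) > threshold                    # binary lit/shadow
        lit = b.astype(np.int32) if lit is None else lit + b
        n += 1
    return 100.0 * lit / n                               # percent illuminated
```

Pixels at 100% are candidate near-permanently illuminated sites; pixels at 0% over a full year are candidate permanently shadowed regions.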

  6. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three-month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquisition of multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near-permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high-resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100 m/pixel basemap with incidence angles (60°-80°) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document 
the number of small impacts since 1971-1972. LROC allows us to determine the recent impact rate of bolides in the size range of 0.5 to 10 meters, which is currently not well known. Determining the impact rate at these sizes enables engineering remediation measures for future surface operations and interplanetary travel. The WAC has imaged nearly the entire Moon in seven wavelengths. A preliminary global WAC stereo-based topographic model is in preparation [1] and global color processing is underway [2]. As the mission progresses, repeat global coverage will be obtained as lighting conditions change, providing a robust photometric dataset. The NACs are revealing a wealth of morphologic features at the meter scale, providing the engineering and science constraints needed to support future lunar exploration. All of the Apollo landing sites have been imaged, as well as the majority of robotic landing and impact sites. Through the use of off-nadir slews a collection of stereo pairs is being acquired that enables 5-m scale topographic mapping [3-7]. Impact morphologies (terraces, impact melt, rays, etc.) are preserved in exquisite detail at all Copernican craters and are enabling new studies of impact mechanics and crater size-frequency distribution measurements [8-12]. Other topical studies, including, for example, lunar pyroclastics, domes, and tectonics, are underway [e.g., 10-17]. The first PDS data release of LROC data will be in March 2010, and will include all images from the commissioning phase and the first 3 months of the mapping phase. [1] Scholten et al. (2010) 41st LPSC, #2111; [2] Denevi et al. (2010a) 41st LPSC, #2263; [3] Beyer et al. (2010) 41st LPSC, #2678; [4] Archinal et al. (2010) 41st LPSC, #2609; [5] Mattson et al. (2010) 41st LPSC, #1871; [6] Tran et al. (2010) 41st LPSC, #2515; [7] Oberst et al. (2010) 41st LPSC, #2051; [8] Bray et al. (2010) 41st LPSC, #2371; [9] Denevi et al. (2010b) 41st LPSC, #2582; [10] Hiesinger et al. 
(2010a) 41st LPSC, #2278; [11] Hiesinger et al. (2010b) 41st LPSC, #2304; [12] van der Bogert et al. (2010) 41st LPSC, #2165;

  7. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Luna-Glob Russian mission. The western ellipse for the spacecraft landing in the crater Boguslawsky in the southern polar region of the Moon was investigated. The percentage of shaded area was measured in images acquired with the LROC NAC at a resolution of ~0.5 m. Because of the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images, so the method described here has been suggested. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and these values were converted into slope-occurrence values (in this case, at the 3.5-m baseline) using a calibration based on the surface characteristics of the Lunokhod-1 study area, for which a digital terrain model with ~0.5-m resolution and 13 LROC NAC images obtained at different Sun elevations are available. From the results of measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at the 3.5-m baseline is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required resolution exists for a region of interest, but high-resolution images taken at different Sun elevations are available.
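The two steps described above can be sketched in a few lines: measure the shaded-area fraction of an image, then convert it to slope occurrence through a calibration curve built from a reference site with a known DTM. The shadow threshold and the calibration points below are hypothetical placeholders, not the actual Lunokhod-1 calibration.

```python
import numpy as np

def shaded_fraction(image, shadow_dn):
    """Fraction of pixels at or below a shadow threshold DN."""
    img = np.asarray(image)
    return np.count_nonzero(img <= shadow_dn) / img.size

def slope_occurrence(frac_shaded, calib_shade, calib_slope_pct):
    """Convert a shaded-area fraction to slope occurrence (percent of
    terrain steeper than the chosen limit) by interpolating a
    calibration curve. calib_shade/calib_slope_pct are sample points
    from a reference area with a DTM (hypothetical values here)."""
    return float(np.interp(frac_shaded, calib_shade, calib_slope_pct))
```

In practice one such calibration curve would be built per Sun elevation, since the same slope distribution casts more shadow at lower Sun angles.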

  8. Photometric parameter maps of the Moon derived from LROC WAC images

    NASA Astrophysics Data System (ADS)

    Sato, H.; Robinson, M. S.; Hapke, B. W.; Denevi, B. W.; Boyd, A. K.

    2013-12-01

    Spatially resolved photometric parameter maps were computed from 21 months of Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) images. Due to its 60° field-of-view (FOV), the WAC achieves nearly global coverage of the Moon each month with more than 50% overlap from orbit to orbit. From the repeat observations at various viewing and illumination geometries, we calculated Hapke bidirectional reflectance model parameters [1] for 1°×1° "tiles" from 70°N to 70°S and 0°E to 360°E. About 66,000 WAC images acquired from February 2010 to October 2011 were converted from DN to radiance factor (I/F) through radiometric calibration, partitioned into gridded tiles, and stacked in a time series (tile-by-tile method [2]). Lighting geometries (phase, incidence, emission) were computed using the WAC digital terrain model (100 m/pixel) [3]. The Hapke parameters were obtained by model fitting against I/F within each tile. Among the 9 parameters of the Hapke model, we calculated 3 free parameters (w, b, and hs) by setting constant values for 4 parameters (Bc0=0, hc=1, θ̄, φ=0) and interpolating 2 parameters (c, Bs0). In this simplification, we ignored the Coherent Backscatter Opposition Effect (CBOE) to avoid competition between CBOE and the Shadow Hiding Opposition Effect (SHOE). We also assumed that surface regolith porosity is uniform across the Moon. The roughness parameter (θ̄) was set to an averaged value from the equator (±3°N). The Henyey-Greenstein double lobe function (H-G2) parameter (c) was given by the 'hockey stick' relation [4] (negative correlation) between b and c based on laboratory measurements. The amplitude of SHOE (Bs0) was given by the correlation between w and Bs0 at the equator (±3°N). Single scattering albedo (w) is strongly correlated with the photometrically normalized I/F, as expected. The c shows an inverse trend relative to b due to the 'hockey stick' relation. 
The parameter c is typically low for the maria (0.08±0.06) relative to the highlands (0.47±0.16). Since c controls the fraction of backward/forward scattering in H-G2, lower c for the maria indicates more forward scattering relative to the highlands. This trend is opposite to what was expected, because darker particles are usually more backscattering. However, the lower albedo of the maria is due to the higher abundance of ilmenite, an opaque mineral that scatters light by specular reflection from its surface. If their surface facets are relatively smooth, the ilmenite particles will be forward scattering. Other factors besides mineralogy (e.g., grain shape, grain size, porosity, maturity) might also affect c. The angular width of SHOE (hs) typically shows lower values (0.047±0.02) for the maria relative to the highlands (0.074±0.025). An increase in hs for the maria theoretically suggests lower porosity or a narrower grain size distribution [1], but the link between actual materials and hs is not well constrained. Further experiments using both laboratory and spacecraft observations will help to unravel the photometric properties of the surface materials of the Moon. [1] Hapke, B.: Cambridge Univ. Press, 2012. [2] Sato, H. et al.: 42nd LPSC, abstract #1974, 2011. [3] Scholten, F. et al.: JGR, 117, E00H17, 2012. [4] Hapke, B.: Icarus, 221(2), 1079-1083, 2012.
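The tile-by-tile fitting described above can be illustrated with a strongly simplified Hapke model: an H-G2 phase function plus a shadow-hiding opposition surge, two-stream H-functions, no CBOE and no roughness correction, solving for the three free parameters (w, b, hs). This is a sketch of the technique, not the actual processing: c and Bs0 are fixed constants here rather than interpolated, and the equations are a reduced form of the full model.

```python
import numpy as np
from scipy.optimize import least_squares

def hapke_iof(w, b, hs, g, inc, emi, c=0.3, bs0=1.0):
    """Simplified Hapke radiance factor (I/F) for phase angle g,
    incidence inc, and emission emi (all in degrees)."""
    mu0 = np.cos(np.radians(inc))
    mu = np.cos(np.radians(emi))
    cg = np.cos(np.radians(g))
    # double Henyey-Greenstein phase function (backward + forward lobes)
    p = (1 + c) / 2 * (1 - b**2) / (1 - 2*b*cg + b**2) ** 1.5 \
        + (1 - c) / 2 * (1 - b**2) / (1 + 2*b*cg + b**2) ** 1.5
    # shadow-hiding opposition surge with angular width hs
    bg = bs0 / (1 + np.tan(np.radians(g) / 2) / hs)
    # two-stream approximation of the H-function
    gamma = np.sqrt(1 - w)
    def H(x):
        return (1 + 2*x) / (1 + 2*x*gamma)
    return w / 4 * mu0 / (mu0 + mu) * ((1 + bg) * p + H(mu0) * H(mu) - 1)

def fit_tile(iof_obs, g, inc, emi, x0=(0.3, 0.25, 0.05)):
    """Fit the three free parameters (w, b, hs) to the stacked
    observations of one tile by bounded least squares."""
    resid = lambda x: hapke_iof(x[0], x[1], x[2], g, inc, emi) - iof_obs
    res = least_squares(resid, x0,
                        bounds=([0.01, 0.01, 0.001], [0.99, 0.99, 1.0]))
    return res.x
```

Because hs only shapes the curve near opposition, a tile needs observations at small phase angles for hs to be well constrained, which is why the WAC's repeat coverage at varied geometry matters.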

  9. Secondary Craters and the Size-Velocity Distribution of Ejected Fragments around Lunar Craters Measured Using LROC Images

    NASA Astrophysics Data System (ADS)

    Singer, K. N.; Jolliff, B. L.; McKinnon, W. B.

    2013-12-01

    We report results from analyzing the size-velocity distribution (SVD) of secondary-crater-forming fragments from the 93 km diameter Copernicus impact. We measured the diameters of secondary craters and their distances from Copernicus using LROC Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) image data. We then estimated the velocity and size of the ejecta fragment that formed each secondary crater from the range equation for a ballistic trajectory on a sphere and Schmidt-Holsapple scaling relations. Size scaling was carried out in the gravity regime for both non-porous and porous target material properties. We focus on the largest ejecta fragments (dfmax) at a given ejection velocity (vej) and fit the upper envelope of the SVD using quantile regression to an equation of the form dfmax = A·vej^(−β). The velocity exponent, β, describes how quickly fragment sizes fall off with increasing ejection velocity during crater excavation. For Copernicus, we measured 5800 secondary craters at distances of up to 700 km (15 crater radii), corresponding to an ejecta fragment velocity of approximately 950 m/s. This mapping only includes secondary craters that are part of a radial chain or cluster. The two largest craters in chains near Copernicus that are likely to be secondaries are 6.4 and 5.2 km in diameter. We obtained a velocity exponent, β, of 2.2 ± 0.1 for a non-porous surface. This result is similar to Vickery's [1987, GRL 14] determination of β = 1.9 ± 0.2 for Copernicus using Lunar Orbiter IV data. 
The availability of WAC 100 m/pix global mosaics with illumination geometry optimized for morphology allows us to update and extend the work of Vickery [1986, Icarus 67, and 1987], who compared secondary crater SVDs for craters on the Moon, Mercury, and Mars. Additionally, meter-scale NAC images enable characterization of secondary crater morphologies and fields around much smaller primary craters than were previously investigated. Combined results from all previous studies of ejecta fragment SVDs from secondary crater fields show that β ranges between approximately 1 and 3. First-order spallation theory predicts a β of 1 [Melosh 1989, Impact Cratering, Oxford Univ. Press]. Results in Vickery [1987] for the Moon exhibit a generally decreasing β with increasing primary crater size (5 secondary fields mapped). In the same paper, however, this trend is flat for Mercury (3 fields mapped) and opposite for Mars (4 fields mapped). SVDs for craters on large icy satellites (Ganymede and Europa), with gravities not too dissimilar to lunar gravity, show generally low velocity exponents (β between 1 and 1.5), except for the very largest impactor measured: the 585-km-diameter Gilgamesh basin on Ganymede (β = 2.6 ± 0.4) [Singer et al., 2013, Icarus 226]. The present work, focusing initially on lunar craters using LROC data, will attempt to confirm or clarify these trends and expand the number of examples under a variety of impact conditions and surface materials to evaluate possible causes of variation.
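The two computational steps in this abstract can be sketched as follows: inverting the ballistic range equation on a sphere for ejection velocity (a 45° launch angle is assumed here), and fitting dmax = A·v^(−β) to the upper envelope in log-log space. The binned-quantile line fit below is a simple stand-in for the quantile regression actually used.

```python
import numpy as np

G_MOON = 1.62      # lunar surface gravity, m/s^2
R_MOON = 1.737e6   # lunar radius, m

def ejection_velocity(range_m, theta_deg=45.0):
    """Ejection velocity that carries a fragment a given ground range,
    from the ballistic range equation on a non-rotating sphere."""
    phi = range_m / (2.0 * R_MOON)             # half the angular range
    th = np.radians(theta_deg)
    x = np.tan(phi) / (np.sin(th) * np.cos(th) + np.tan(phi) * np.cos(th) ** 2)
    return np.sqrt(G_MOON * R_MOON * x)        # v = sqrt(g * R * x)

def fit_envelope(v_ej, d_frag, q=0.95, nbins=8):
    """Fit dmax = A * v**-beta to the upper envelope of the SVD:
    bin in log(v), take the q-quantile of log(d) per bin, then fit a
    straight line in log-log space. Returns (A, beta)."""
    lv, ld = np.log10(v_ej), np.log10(d_frag)
    edges = np.linspace(lv.min(), lv.max() + 1e-9, nbins)
    idx = np.digitize(lv, edges) - 1
    xs, ys = [], []
    for i in range(nbins - 1):
        m = idx == i
        if np.count_nonzero(m) >= 3:
            xs.append(lv[m].mean())
            ys.append(np.quantile(ld[m], q))
    slope, intercept = np.polyfit(xs, ys, 1)
    return 10.0 ** intercept, -slope
```

For a 700 km range this inversion gives a velocity near the ~950 m/s quoted above, and a steeper (more negative) envelope slope corresponds to a larger β, i.e., fragment sizes falling off faster with ejection velocity.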

  10. LROC Advances in Lunar Science

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.

    2012-12-01

    Since entering orbit in 2009 the Lunar Reconnaissance Orbiter Camera (LROC) has acquired over 700,000 Wide Angle Camera (WAC) and Narrow Angle Camera (NAC) images of the Moon. This new image collection is fueling research into the origin and evolution of the Moon. NAC images revealed a volcanic complex 35 × 25 km (60°N, 100°E), between Compton and Belkovich craters (CB). The CB terrain hosts volcanic domes and irregular depressed areas (caldera-like collapses). The volcanic complex corresponds to an area of high silica content (Diviner) and high Th (Lunar Prospector). A low density of impact craters on the CB complex indicates a relatively young age. The LROC team mapped over 150 volcanic domes and 90 volcanic cones in the Marius Hills (MH), many of which were not previously identified. Morphology and compositional estimates (Diviner) indicate that MH domes are silica-poor, and are products of low-effusion mare lavas. Impact melt deposits are observed with Copernican impact craters (>10 km) on exterior ejecta, the rim, inner wall, and crater floors. Preserved impact melt flow deposits are observed around small craters (25 km diam.), and estimated melt volumes exceed predictions. At these diameters the amount of melt predicted is small, and melt that is produced is expected to be ejected from the crater. However, we observe well-defined impact melt deposits on the floors of highland craters down to 200 m diameter. A globally distributed population of previously undetected contractional structures was discovered. Their crisp appearance and associated impact crater populations show that they are young landforms (<1 Ga). NAC images also revealed small extensional troughs. Crosscutting relations with small-diameter craters and depths as shallow as 1 m indicate ages <50 Ma. These features place bounds on the amount of global radial contraction and the level of compressional stress in the crust. 
WAC temporal coverage of the poles allowed quantification of highly illuminated regions, including one site that remains lit for 94% of a year (longest eclipse period of 43 hours). Targeted NAC images provide higher-resolution characterization of key sites with permanent shadow and extended illumination. Repeat WAC coverage provides an unparalleled photometric dataset, allowing spatially resolved solutions (currently 1 degree) to Hapke's photometric equation, data invaluable for photometric normalization and for interpreting physical properties of the regolith. The WAC color data also provide the means to solve for titanium and to distinguish subtle age differences within Copernican-aged materials. The longevity of the LRO mission allows follow-up NAC and WAC observations of previously known and newly discovered targets over a range of illumination and viewing geometries. Of particular merit is the acquisition of NAC stereo pairs and oblique sequences. With the extended SMD phase, the LROC team is working toward imaging the whole Moon at pixel scales of 50 to 200 cm.

  11. Using LROC analysis to evaluate detection accuracy of microcalcification clusters imaged with flat-panel CT mammography

    NASA Astrophysics Data System (ADS)

    Gong, Xing; Glick, Stephen J.; Vedula, Aruna A.

    2004-05-01

    The purpose of this study is to investigate the detectability of microcalcification clusters (MCCs) using CT mammography with a flat-panel detector. Compared with conventional mammography, CT mammography can provide improved discrimination between malignant and benign cases because it gives the radiologist more accurate morphological information on MCCs. In this study, two aspects of MCC detection with flat-panel CT mammography were examined: (1) the minimal size of MCCs detectable at the mean glandular dose (MGD) used in conventional mammography; and (2) the effect of detector pixel size on the detectability of MCCs. A realistic computer simulation modeling x-ray transport through the breast, as well as both signal and noise propagation through the flat-panel imager, was developed to investigate these questions. Microcalcifications were simulated as calcium carbonate spheres with diameters of 125, 150, and 175 μm. Each cluster consisted of 10 spheres spread randomly in a 6×6 mm² region of interest (ROI), and the detector pixel size was set to 100×100, 200×200, or 300×300 μm². After reconstructing 100 projection sets for each case (half with signal present) with the cone-beam Feldkamp (FDK) algorithm, a localization receiver operating characteristic (LROC) study was conducted to evaluate the detectability of MCCs. Five observers chose the locations of cluster centers with corresponding confidence ratings. The average area under the LROC curve suggested that the 175 μm MCCs can be detected at a high level of confidence. Results also indicate that flat-panel detectors with a pixel size of 200×200 μm² are appropriate for detecting small targets such as MCCs.
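An empirical LROC area can be computed from exactly the observer data described above: a confidence rating plus a correct-localization flag for each signal-present case, and a confidence rating for each signal-absent case. The sketch below is the standard construction of the curve, not the specific analysis software used in the study.

```python
import numpy as np

def lroc_area(sig_conf, sig_correct, noise_conf):
    """Empirical area under the LROC curve.

    sig_conf   : confidence ratings for signal-present cases
    sig_correct: bool per case, was the chosen location correct
    noise_conf : confidence ratings for signal-absent cases

    At each threshold the ordinate is the fraction of signal cases both
    rated at or above threshold AND correctly localized; the abscissa
    is the ordinary false-positive fraction.
    """
    sig_conf = np.asarray(sig_conf, float)
    sig_correct = np.asarray(sig_correct, bool)
    noise_conf = np.asarray(noise_conf, float)
    thr = np.unique(np.concatenate([sig_conf, noise_conf]))[::-1]
    tpf = [(sig_correct & (sig_conf >= t)).mean() for t in thr]
    fpf = [(noise_conf >= t).mean() for t in thr]
    # close the curve at (0, 0) and extend to FPF = 1
    x = np.concatenate([[0.0], fpf, [1.0]])
    y = np.concatenate([[0.0], tpf, [tpf[-1]]])
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))  # trapezoid
```

Unlike ordinary ROC area, the LROC curve need not reach (1, 1): its right-hand endpoint is capped by the fraction of lesions the observer localized correctly.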

  12. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse fields: computing, mechanics, thermal science, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technologies. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of dry imaging cameras is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  13. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  14. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively across the Full Electro-Magnetic (FEM) spectrum? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that create a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of irreversible thermodynamics and Boltzmann's molecular entropy. It enables imaging in the appropriate FEM spectra for sensing through the VDE, and display in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Source Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting Planck spectra at each pixel and time.

  15. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data are fitted to the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  16. LROC Observations of Geologic Features in the Marius Hills

    NASA Astrophysics Data System (ADS)

    Lawrence, S.; Stopar, J. D.; Hawke, B. R.; Denevi, B. W.; Robinson, M. S.; Giguere, T.; Jolliff, B. L.

    2009-12-01

    Lunar volcanic cones, domes, and their associated geologic features are important objects of study for the LROC science team because they represent possible volcanic endmembers that may yield important insights into the history of lunar volcanism and are potential sources of lunar resources. Several hundred domes, cones, and associated volcanic features are currently targeted for high-resolution LROC Narrow Angle Camera [NAC] imagery [1]. The Marius Hills, located in Oceanus Procellarum (centered at ~13.4°N, 55.4°W), represent the largest concentration of these volcanic features on the Moon, including sinuous rilles, volcanic cones, domes, and depressions [e.g., 2-7]. The Marius region is thus a high priority for future human lunar exploration, as signified by its inclusion in the Project Constellation list of notional future human lunar exploration sites [8], and will be an intense focus of interest for LROC science investigations. Previous studies of the Marius Hills have utilized telescopic, Lunar Orbiter, Apollo, and Clementine imagery to study the morphology and composition of the volcanic features in the region. Complementary LROC studies of the Marius region will focus on high-resolution NAC images of specific features for studies of morphology (including flow fronts, dome/cone structure, and possible layering) and topography (using stereo imagery). Preliminary studies of the new high-resolution images of the Marius Hills region reveal small-scale features in the sinuous rilles including possible outcrops of bedrock and lobate lava flows from the domes. The observed Marius Hills are characterized by rough surface textures, including the presence of large boulders at the summits (~3-5 m diameter), which is consistent with the radar-derived conclusions of [9]. 
Future investigations will involve analysis of LROC stereo photoclinometric products and coordinating NAC images with the multispectral images collected by the LROC WAC, especially the ultraviolet data, to enable measurements of color variations within and amongst deposits and provide possible compositional insights, including the location of possibly related pyroclastic deposits. References: [1] J. D. Stopar et al. (2009), LRO Science Targeting Meeting, Abs. 6039 [2] Greeley R (1971) Moon, 3, 289-314 [3] Guest J. E. (1971) Geol. and Phys. of the Moon, p. 41-53. [4] McCauley J. F. (1967) USGS Geologic Atlas of the Moon, Sheet I-491 [5] Weitz C. M. and Head J. W. (1999) JGR, 104, 18933-18956 [6] Heather D. J. et al. (2003) JGR, doi:10.1029/2002JE001938 [7] Whitford-Stark, J. L., and J. W. Head (1977) Proc. LSC 8th, 2705-2724 [8] Gruener J. and Joosten B. K. (2009) LRO Science Targeting Meeting, Abs. 6036 [9] Campbell B. A. et al. (2009) JGR, doi:10.1029/2008JE003253.

  17. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, En > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.

  18. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  19. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident direction of fast neutrons, E_N > 0.5 MeV, is reconstructed from the momenta and energies of the proton and triton fragments resulting from ³He(n,p)³H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.

  20. Insights into Pyroclastic Volcanism on the Moon with LROC Data

    NASA Astrophysics Data System (ADS)

    Gaddis, L. R.; Robinson, M. S.; Hawke, B. R.; Giguere, T.; Gustafson, O.; Keszthelyi, L. P.; Lawrence, S.; Stopar, J.; Jolliff, B. L.; Bell, J. F.; Garry, W. B.

    2009-12-01

    Lunar pyroclastic deposits are high-priority targets for the Lunar Reconnaissance Orbiter Camera. Images from the Narrow Angle Camera (NAC; 0.5 m/pixel) and Wide Angle Camera (WAC; 7 bands, 100 m/p visible, 400 m/p ultraviolet) are being acquired. Studies of pyroclastic deposits with LRO data have the potential to resolve major questions concerning their distribution, composition, volume, eruptive styles, and role in early lunar volcanism. Analyses of LROC Commissioning and early Exploration Phase data focus on preliminary assessment of morphology and compositional variation among lunar pyroclastic deposits. At sites such as Rima Bode, Sulpicius Gallus, Aristarchus plateau, and Humorum, Alphonsus and Oppenheimer craters, LROC data are being used to search for evidence that may allow us to identify separate eruptive episodes from the same vent, pulses of magma intrusions and/or crustal dikes, and possible changes in composition and volatility of source materials with time. Preliminary observations of NAC data for possible pyroclastic vents reveal typically smooth, dark surfaces with variations in surface texture, roughness, and apparent albedo that may be related to differences in eruption mechanism and/or duration. Evidence of layering at some sites suggests low-volume eruptions or multiple events. Further analyses of LROC data will allow identification of intra-deposit compositional variations, possible juvenile components, and evaluation of the distributions and relative amounts of juvenile vs. host-rock components. Combined NAC and WAC data also will enable us to characterize spatial extents, distributions, and compositions of pyroclastic deposits and relate them to other sampled glass types and possibly to their associated basalts. 
WAC color data will be used to characterize titanium contents of pyroclastic deposits, to map the diversity of effusive and pyroclastic units with variable titanium contents that are currently not recognized, and to identify which pyroclastic deposits are the best sources of titanium and associated volatile elements. Using NAC stereo data, meter-scale topographic models of the surface will allow us to better constrain emplacement and distribution of possible juvenile materials, the geometry of small pyroclastic eruptions, and models of their eruption.

  1. LROC WAC Ultraviolet Reflectance of the Moon

    NASA Astrophysics Data System (ADS)

    Robinson, M. S.; Denevi, B. W.; Sato, H.; Hapke, B. W.; Hawke, B. R.

    2011-10-01

    Earth-based color filter photography, first acquired in the 1960s, showed color differences related to morphologic boundaries on the Moon [1]. These color units were interpreted to indicate compositional differences, thought to be the result of variations in titanium content [1]. Later it was shown that iron abundance (FeO) also plays a dominant role in controlling color in lunar soils [2]. Equally important is the maturity of a lunar soil in terms of its reflectance properties (albedo and color) [3]. Maturity is a measure of the state of alteration of surface materials due to sputtering and high velocity micrometeorite impacts over time [3]. The Clementine (CL) spacecraft provided the first global and digital visible through infrared observations of the Moon [4]. This pioneering dataset allowed significant advances in our understanding of compositional (FeO and TiO2) and maturation differences across the Moon [5,6]. Later, the Lunar Prospector (LP) gamma ray and neutron experiments provided the first global, albeit low resolution, elemental maps [7]. Newly acquired Moon Mineralogy Mapper hyperspectral measurements are now providing the means to better characterize mineralogic variations on a global scale [8]. Our knowledge of ultraviolet color differences between geologic units is limited to low resolution (km scale) nearside telescopic observations, high resolution Hubble Space Telescope images of three small areas [9], and laboratory analyses of lunar materials [10,11]. These previous studies detailed color differences in the UV (100 to 400 nm) related to composition and physical state. HST UV (250 nm) and visible (502 nm) color differences were found to correlate with TiO2, and were relatively insensitive to maturity effects seen in visible ratios (CL) [9]. These two results led to the conclusion that improvements in TiO2 estimation accuracy over existing methods may be possible through a simple UV/visible ratio [9]. 
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) provides the first global lunar ultraviolet through visible (321 nm to 689 nm) multispectral observations [12]. The WAC is a seven-color push-frame imager with nominal resolutions of 400 m (321, 360 nm) and 100 m (415, 566, 604, 643, 689 nm). Due to its wide field-of-view (60° in color mode) the phase angle within a single line varies ±30°, thus requiring the derivation of a precise photometric characterization [13] before any interpretations of lunar reflectance properties can be made. The current WAC photometric correction relies on multiple WAC observations of the same area over a broad range of phase angles and typically results in relative corrections good to a few percent [13].
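The phase-angle normalization the abstract describes can be illustrated with a toy correction. Both the exponential functional form and the coefficients below are invented for illustration only; the actual WAC correction is empirically derived from repeat coverage, as the abstract notes [13].

```python
import math

def phase_function(g_deg, A=-0.019, B=0.0001):
    """Hypothetical empirical phase function f(g) = exp(A*g + B*g^2).
    Coefficients A and B are illustrative, not the published WAC values."""
    return math.exp(A * g_deg + B * g_deg ** 2)

def normalize(reflectance, g_deg, g_ref=30.0):
    """Normalize an observed reflectance at phase angle g_deg to the
    standard geometry g_ref by the ratio of phase-function values."""
    return reflectance * phase_function(g_ref) / phase_function(g_deg)

# A pixel observed at 55 deg phase, normalized to a 30 deg standard geometry.
print(f"{normalize(0.042, 55.0):.4f}")
```

Because the WAC's 60° color-mode field of view spans ±30° of phase in a single line, a correction of this kind must be applied per pixel before reflectances from different parts of the frame can be compared.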

  2. Height-to-diameter ratios of moon rocks from analysis of Lunokhod-1 and -2 and Apollo 11-17 panoramas and LROC NAC images

    NASA Astrophysics Data System (ADS)

    Demidov, N. E.; Basilevsky, A. T.

    2014-09-01

    An analysis is performed of 91 panoramic photographs taken by Lunokhod-1 and -2, 17 panoramic images composed of photographs taken by Apollo 11-15 astronauts, and six LROC NAC photographs. The results are used to measure the height-to-visible-diameter (h/d) and height-to-maximum-diameter (h/D) ratios for lunar rocks at three highland and three mare sites on the Moon. The average h/d and h/D for the six sites are found to be indistinguishable at a significance level of 95%. Therefore, our estimates for the average h/d = 0.6 ± 0.03 and h/D = 0.54 ± 0.03 on the basis of 445 rocks are applicable for the entire Moon's surface. Rounding off, an h/D ratio of ~0.5 is suggested for engineering models of the lunar surface. The ratios between the long, medium, and short axes of the lunar rocks are found to be similar to those obtained in high-velocity impact experiments for different materials. It is concluded, therefore, that the degree of penetration of the studied lunar rocks into the regolith is negligible, and micrometeorite abrasion and other factors do not dominate in the evolution of the shape of lunar rocks.
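The site-averaged ratios and their 95% comparison reduce to simple sample statistics on the measured h/d values. A minimal sketch, using made-up measurements rather than the paper's 445 rocks:

```python
import math
import statistics

def ratio_stats(heights, diameters):
    """Mean height-to-diameter ratio with an approximate 95% confidence
    interval (normal approximation; adequate for large samples)."""
    ratios = [h / d for h, d in zip(heights, diameters)]
    mean = statistics.mean(ratios)
    se = statistics.stdev(ratios) / math.sqrt(len(ratios))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical rock heights and visible diameters (meters) for one site.
h = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.95]
d = [1.5, 1.8, 1.6, 2.1, 1.2, 1.7, 2.0, 1.6]
mean, (lo, hi) = ratio_stats(h, d)
print(f"h/d = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Two sites would be judged indistinguishable at the 95% level when their confidence intervals overlap, which is the spirit of the six-site comparison described above.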

  3. Morphological Analysis of Lunar Lobate Scarps Using LROC NAC and LOLA Data

    NASA Astrophysics Data System (ADS)

    Banks, M. E.; Watters, T. R.; Robinson, M. S.; Tornabene, L. L.; Tran, T.; Ojha, L.

    2011-10-01

    Lobate scarps on the Moon are relatively small-scale tectonic landforms observed in mare basalts and, more commonly, highland material [1-4]. These scarps are the surface expression of thrust faults, and are the most common tectonic landform on the lunar farside [1-4]. Prior to Lunar Reconnaissance Orbiter (LRO) observations, lobate scarps were largely detected only in equatorial regions because of limited Apollo Panoramic Camera and high resolution Lunar Orbiter coverage with optimum lighting geometry [1-3]. Previous measurements of the relief of lobate scarps were made for 9 low-latitude scarps (<±20°), and range from ~6 to 80 m (mean relief of ~32 m) [1]. However, the relief of these scarps was primarily determined from shadow measurements with limited accuracy from Apollo-era photography. We present the results of a detailed characterization of the relief and morphology of a larger sampling of the population of lobate scarps. Outstanding questions include: What is the range of maximum relief of the lobate scarps? Is their size and structural relief consistent with estimates of the global contractional strain? What is the range of horizontal shortening expressed by lunar scarps, and how does this range compare with that found for planetary lobate scarps? Lunar Reconnaissance Orbiter Camera (LROC) images and Lunar Orbiter Laser Altimeter (LOLA) ranging enable detection and detailed morphological analysis of lobate scarps at all latitudes. To date, previously undetected scarps have been identified in LROC imagery in 75 different locations, over 20 of which occur at latitudes greater than ±60° [5-6]. LROC stereo-derived digital terrain models (DTMs) and LOLA data are used to measure the relief and characterize the morphology of 26 previously known (n = 8) and newly detected (n = 18) lobate scarps. Lunar examples are compared to lobate scarps on Mars, Mercury, and 433 Eros (Hinks Dorsum).

  4. Uncertainty Analysis of LROC NAC Derived Elevation Models

    NASA Astrophysics Data System (ADS)

    Burns, K.; Yates, D. G.; Speyerer, E.; Robinson, M. S.

    2012-12-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) [1] is to gather stereo observations with the Narrow Angle Camera (NAC) to generate digital elevation models (DEMs). From an altitude of 50 km, the NAC acquires images with a pixel scale of 0.5 meters, and a dual NAC observation covers approximately 5 km cross-track by 25 km down-track. This low altitude was common from September 2009 to December 2011. Images acquired during the commissioning phase and those acquired from the fixed orbit (after 11 December 2011) have pixel scales that range from 0.35 meters at the south pole to 2 meters at the north pole. Altimetric observations obtained by the Lunar Orbiter Laser Altimeter (LOLA) provide measurements of the distance between the spacecraft and the surface accurate to ±0.1 m [2]. However, uncertainties in the spacecraft positioning can result in offsets (±20 m) between altimeter tracks over many orbits. The LROC team is currently developing a tool to automatically register altimetric observations to NAC DEMs [3]. Using a generalized pattern search (GPS) algorithm, the new automatic registration adjusts the spacecraft position and pointing information during times when NAC images, as well as LOLA measurements, of the same region are acquired to provide an absolute reference frame for the DEM. This information is then imported into SOCET SET to aid in creating controlled NAC DEMs. For every DEM, a figure of merit (FOM) map is generated using SOCET SET software. This is a valuable tool for determining the relative accuracy of a specific pixel in a DEM. Each pixel in a FOM map is given a value to determine its "quality" by determining if the specific pixel was shadowed, saturated, suspicious, interpolated/extrapolated, or successfully correlated. The overall quality of a NAC DEM is a function of both the absolute and relative accuracies. LOLA altimetry provides the most accurate absolute geodetic reference frame with which the NAC DEMs can be compared. 
    Offsets between LOLA profiles and NAC DEMs are used to quantify the absolute accuracy. Small lateral movements in the LOLA points coupled with large changes in topography contribute to sizeable offsets between the datasets. The steep topography of Lichtenberg Crater provides an example of the offsets in the LOLA data. Ten tracks that cross the region of interest were used to calculate the offset with a root mean square (RMS) error of 9.67 m, an average error of 7.02 m, and a standard deviation of 9.61 m. Large areas (>375 sq km) covered by a mosaic of NAC DEMs were compared to the Wide Angle Camera (WAC) derived Global Lunar DTM 100 m topographic model (GLD100) [4]. The GLD100 has a pixel scale of 100 m; therefore, the NAC DEMs were reduced to calculate the offsets between the two datasets. When comparing NAC DEMs to WAC DEMs, it was determined that the vertical offsets were as follows [Site name (average offset in meters, standard deviation in meters)]: Lichtenberg Crater (-7.74, 20.49), Giordano Bruno (-5.31, 28.80), Hortensius Domes (-3.52, 16.00), and Reiner Gamma (-0.99, 14.11). Resources: [1] Robinson et al. (2010) Space Sci. Rev. [2] Smith et al. (2010) Space Sci. Rev. [3] Speyerer et al. (2012) European Lunar Symp. [4] Scholten et al. (2012) JGR-Planets.
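The three summary numbers quoted for Lichtenberg Crater (RMS error, average error, standard deviation) follow directly from the per-point vertical differences between the LOLA profile and the DEM. A minimal sketch, with illustrative offsets rather than the actual track data:

```python
import math

def offset_stats(offsets):
    """RMS error, mean error, and sample standard deviation of vertical
    offsets between two elevation datasets (e.g. LOLA points vs. a NAC DEM)."""
    n = len(offsets)
    mean = sum(offsets) / n
    rms = math.sqrt(sum(o * o for o in offsets) / n)
    sd = math.sqrt(sum((o - mean) ** 2 for o in offsets) / (n - 1))
    return rms, mean, sd

# Illustrative per-point offsets in meters (not the published Lichtenberg data).
offsets = [6.2, -8.1, 7.5, 9.9, -5.4, 8.8, 7.0, -6.6, 10.2, 8.3]
rms, mean, sd = offset_stats(offsets)
print(f"RMS {rms:.2f} m, mean {mean:.2f} m, std {sd:.2f} m")
```

Note that RMS and standard deviation differ whenever the mean offset is nonzero, which is why the abstract reports all three quantities.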

  5. On an assessment of surface roughness estimates from lunar laser altimetry pulse-widths for the Moon from LOLA using LROC narrow-angle stereo DTMs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Poole, William

    2013-04-01

    Neumann et al. [1] proposed that laser altimetry pulse-widths could be employed to derive "within-footprint" surface roughness as opposed to surface roughness estimated from between laser altimetry pierce-points such as the example for Mars [2] and more recently from the 4-pointed star-shaped LOLA (Lunar Reconnaissance Orbiter Laser Altimeter) onboard the NASA-LRO [3]. Since 2009, LOLA has been collecting extensive global laser altimetry data with a 5 m footprint and ~25 m between the 5 points in a star-shape. In order to assess how accurately surface roughness (defined as simple RMS after slope correction) derived from LROC matches with surface roughness derived from LOLA footprints, publicly released LROC-NA (LRO Camera Narrow Angle) 1 m Digital Terrain Models (DTMs) were employed to measure the surface roughness directly within each 5 m footprint. A set of 20 LROC-NA DTMs were examined. Initially the match-up between the LOLA and LROC-NA orthorectified images (ORIs) is assessed visually to ensure that the co-registration is better than the LOLA footprint resolution. For each LOLA footprint, the pulse-width geolocation is then retrieved and this is used to "cookie-cut" the surface roughness and slopes derived from the LROC-NA DTMs. The investigation, which includes data from a variety of different landforms, shows little, if any, correlation between surface roughness estimated from DTMs and LOLA pulse-widths at sub-footprint scale. In fact, a perceptible correlation between LOLA and LROC-NA DTMs appears only at baselines of 40-60 m for surface roughness and 20 m for slopes. [1] Neumann et al. Mars Orbiter Laser Altimeter pulse width measurements and footprint-scale roughness. Geophysical Research Letters (2003) vol. 30 (11), paper 1561. DOI: 10.1029/2003GL017048 [2] Kreslavsky and Head. Kilometer-scale roughness of Mars: results from MOLA data analysis. J Geophys Res (2000) vol. 105 (E11) pp. 26695-26711. [3] Rosenburg et al. 
Global surface slopes and roughness of the Moon from the Lunar Orbiter Laser Altimeter. Journal of Geophysical Research (2011) vol. 116, paper E02001. DOI: 10.1029/2010JE003716 [4] Chin et al. Lunar Reconnaissance Orbiter Overview: The Instrument Suite and Mission. Space Science Reviews (2007) vol. 129 (4) pp. 391-419
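The roughness measure used above is "simple RMS after slope correction": fit a plane (or, along a profile, a line) to the elevations inside a footprint, then take the RMS of the residuals. A sketch of the one-dimensional case, with an invented 5 m profile:

```python
import math

def rms_roughness(x, z):
    """Surface roughness as the RMS of elevation residuals after removing
    the best-fit (least-squares) linear slope along the profile."""
    n = len(x)
    xm = sum(x) / n
    zm = sum(z) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxz = sum((xi - xm) * (zi - zm) for xi, zi in zip(x, z))
    slope = sxz / sxx
    resid = [zi - (zm + slope * (xi - xm)) for xi, zi in zip(x, z)]
    return math.sqrt(sum(r * r for r in resid) / n)

# A 5 m footprint sampled at 1 m DTM posts: a tilted surface plus small bumps.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
z = [0.0, 0.52, 0.98, 1.55, 1.97, 2.51]
print(f"roughness = {rms_roughness(x, z):.3f} m")
```

Detrending matters because a smooth but tilted surface would otherwise register as rough; after slope removal, a perfect ramp has zero roughness regardless of its gradient.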

  6. Image dissector camera system study

    NASA Technical Reports Server (NTRS)

    Howell, L.

    1984-01-01

    Various aspects of a rendezvous and docking system using an image dissector detector as compared to a GaAs detector were discussed. Investigation into a gimbaled scanning system is also covered and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters. The maximum range considered was 1000 meters. During docking, the range, range-rate, angle, and angle-rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, ±10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.
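The abstract states that the transmitter beam is amplitude modulated with three sine wave tones for ranging but does not give the relation; assuming the standard tone-ranging scheme, range follows from the measured round-trip phase shift of each tone, with coarser tones resolving the ambiguity of finer ones. A sketch with hypothetical tone frequencies:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, tone_hz):
    """Tone ranging: a modulation tone phase-shifted by phase_rad over the
    round trip corresponds to a one-way range R = c * phase / (4 * pi * f).
    The result is unambiguous only within c / (2 * f)."""
    return C * phase_rad / (4.0 * math.pi * tone_hz)

def unambiguous_range(tone_hz):
    """Maximum one-way range before the tone phase wraps."""
    return C / (2.0 * tone_hz)

# A hypothetical coarse tone of 150 kHz covers the full 1000 m envelope;
# finer tones would then refine the range within that interval.
coarse = 150e3
print(f"unambiguous range: {unambiguous_range(coarse):.0f} m")
```

This is why multiple tones are needed: one tone alone trades precision against unambiguous range, while a coarse/fine set delivers both over the 3 to 1000 meter span the system must cover.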

  7. LRO Camera Imaging of Potential Landing Sites in the South Pole-Aitken Basin

    NASA Astrophysics Data System (ADS)

    Jolliff, B. L.; Wiseman, S. M.; Gibson, K. E.; Lauber, C.; Robinson, M.; Gaddis, L. R.; Scholten, F.; Oberst, J.; LROC Science; Operations Team

    2010-12-01

    We show results of WAC (Wide Angle Camera) and NAC (Narrow Angle Camera) imaging of candidate landing sites within the South Pole-Aitken (SPA) basin of the Moon obtained by the Lunar Reconnaissance Orbiter during the first full year of operation. These images enable a greatly improved delineation of geologic units, determination of unit thicknesses and stratigraphy, and detailed surface characterization that has not been possible with previous data. WAC imaging encompasses the entire SPA basin, located within an area ranging from ~ 130-250 degrees east longitude and ~15 degrees south latitude to the South Pole, at different incidence angles, with the specific range of incidence dependent on latitude. The WAC images show morphology and surface detail at better than 100 m per pixel, with spatial coverage and quality unmatched by previous data sets. NAC images reveal details at the sub-meter pixel scale that enable new ways to evaluate the origins and stratigraphy of deposits. Key among new results is the capability to discern extents of ancient volcanic deposits that are covered by later crater ejecta (cryptomare) [see Petro et al., this conference] using new, complementary color data from Kaguya and Chandrayaan-1. Digital topographic models derived from WAC and NAC geometric stereo coverage show broad intercrater-plains areas where slopes are acceptably low for high-probability safe landing [see Archinal et al., this conference]. NAC images allow mapping and measurement of small, fresh craters that excavated boulders and thus provide information on surface roughness and depth to bedrock beneath regolith and plains deposits. We use these data to estimate deposit thickness in areas of interest for landing and potential sample collection to better understand the possible provenance of samples. 
Also, small regions marked by fresh impact craters and their associated boulder fields are readily identified by their bright ejecta patterns and marked as lander keep-out zones. We will show examples of LROC data including those for Constellation sites on the SPA rim and interior, a site between Bose and Alder Craters, sites east of Bhabha Crater, and sites on and near the “Mafic Mound” [see Pieters et al., this conference]. Together the LROC data and complementary products provide essential information for ensuring identification of safe landing and sampling sites within SPA basin that has never before been available for a planetary mission.

  8. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...2013-01-01 2013-01-01 false Thermal imaging camera reporting. 743...REGULATIONS SPECIAL REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported...

  9. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...2014-01-01 2014-01-01 false Thermal imaging camera reporting. 743...REPORTING AND NOTIFICATION § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported...

  10. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...2012-01-01 2012-01-01 false Thermal imaging camera reporting. 743...REGULATIONS SPECIAL REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported...

  11. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  12. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of the images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
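For a camera translating at constant speed, the delay described above follows directly from the desired stereo baseline. A sketch with hypothetical numbers (65 mm, a typical human eye separation, stands in for the baseline):

```python
def stereo_delay_frames(baseline_m, speed_m_per_s, frame_rate_hz):
    """Number of whole video frames to delay the first-eye image so that
    the delayed and live images form a stereo pair with the desired
    baseline, for a camera translating laterally at constant speed."""
    delay_s = baseline_m / speed_m_per_s
    return round(delay_s * frame_rate_hz)

# Hypothetical: 65 mm baseline, camera sweeping at 0.5 m/s, 30 frames/s.
print(stereo_delay_frames(0.065, 0.5, 30))
```

A larger baseline (hyperstereo) exaggerates depth, which is how the same single-camera trick can serve the spacecraft and aircraft viewing applications mentioned, where subjects are far away.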

  13. LROC model observers for emission tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    Khurd, Parmeshwar; Gindi, Gene

    2004-05-01

    Detection and localization performance with signal location uncertainty may be summarized by Figures of Merit (FOM's) obtained from the LROC curve. We consider model observers that may be used to compute the two LROC FOM's: ALROC and PCL, for emission tomographic MAP reconstruction. We address the background-known-exactly (BKE) case with the signal known except for location. Model observers may be used, for instance, to rapidly prototype studies that use human observers. Our FOM calculation is an ensemble method (no samples of reconstructions needed) that makes use of theoretical expressions for the mean and covariance of the reconstruction. An affine local observer computes a response at each location, and the maximum of these is used as the global observer - the response needed by the LROC curve. In previous work, we had assumed the local observers to be independent and normally distributed, which allowed the use of closed form expressions to compute the FOM's. Here, we relax the independence assumption and make the approximation that the local observer responses are jointly normal. We demonstrate a fast theoretical method to compute the mean and covariance of this joint distribution (for the signal absent and present cases) given the theoretical expressions for the reconstruction mean and covariance. We can then generate samples from this joint distribution and rapidly (since no reconstructions need be computed) compute the LROC FOM's. We validate the results of the procedure by comparison to FOM's obtained using a gold-standard Monte Carlo method employing a large set of reconstructed noise trials.
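The max-over-locations global observer is easy to prototype by Monte Carlo. The sketch below estimates PCL under the simpler independent-normal assumption the authors used in their previous work, not the jointly normal model this paper develops; all parameter values are illustrative:

```python
import random

def estimate_pcl(n_trials, n_locs, signal_loc, signal_mean, noise_sd, rng):
    """Monte Carlo estimate of the probability of correct localization (PCL):
    each local observer emits a response, the global observer reports the
    location of the maximum, and localization is correct when that maximum
    falls at the true signal location.  Responses are independent normals
    here (a simplification of the paper's jointly normal model)."""
    correct = 0
    for _ in range(n_trials):
        responses = [rng.gauss(0.0, noise_sd) for _ in range(n_locs)]
        responses[signal_loc] += signal_mean  # signal raises one local mean
        if max(range(n_locs), key=lambda i: responses[i]) == signal_loc:
            correct += 1
    return correct / n_trials

rng = random.Random(42)
pcl = estimate_pcl(2000, 16, signal_loc=3, signal_mean=4.0, noise_sd=1.0, rng=rng)
print(f"PCL ~ {pcl:.3f}")
```

The paper's contribution is to replace the independent draws with samples from the fitted joint normal (using the theoretically computed mean and covariance), which keeps this sampling loop cheap because no reconstructions are ever computed.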

  14. Classroom multispectral imaging using inexpensive digital cameras.

    NASA Astrophysics Data System (ADS)

    Fortes, A. D.

    2007-12-01

    The proliferation of increasingly cheap digital cameras in recent years means that it has become easier to exploit the broad wavelength sensitivity of their CCDs (360 - 1100 nm) for classroom-based teaching. With the right tools, it is possible to open children's eyes to the invisible world of UVA and near-IR radiation either side of our narrow visual band. The camera-filter combinations I describe can be used to explore the world of animal vision, looking for invisible markings on flowers, or in bird plumage, for example. In combination with a basic spectroscope (such as the Project-STAR handheld plastic spectrometer, $25), it is possible to investigate the range of human vision and camera sensitivity, and to explore the atomic and molecular absorption lines from the solar and terrestrial atmospheres. My principal use of the cameras has been to teach multispectral imaging of the kind used to determine remotely the composition of planetary surfaces. A range of camera options, from $50 circuit-board mounted CCDs up to $900 semi-pro infrared camera kits (including mobile phones along the way), and various UV-vis-IR filter options will be presented. Examples of multispectral images taken with these systems are used to illustrate the range of classroom topics that can be covered. Particular attention is given to learning about spectral reflectance curves and comparing images from Earth and Mars taken using the same filter combination that is used on the Mars Rovers.

  15. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  16. Camera Trajectory fromWide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we use in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. 
We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. Unlike methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames. Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real-time. 
Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling as suggested in to draw 5-tuples from the list of tentative matches ordered ascending
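The two-parameter lens model described above is simple to apply once a and b are known from calibration. Below is a sketch of back-projecting a pixel to a unit 3D ray in the camera frame; the coefficient values used in testing are illustrative, not the paper's calibrated values.

```python
import numpy as np

def radius_to_angle(r, a, b):
    # Micusik two-parameter fish-eye model: theta = a*r / (1 + b*r**2),
    # relating image radius r to the ray angle theta from the optical axis.
    return a * r / (1.0 + b * r * r)

def pixel_to_ray(u, v, cx, cy, a, b):
    # Back-project a pixel (u, v) to a unit ray in the camera frame,
    # given the principal point (cx, cy) and model coefficients a, b.
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    if r == 0.0:
        return np.array([0.0, 0.0, 1.0])  # optical axis
    theta = radius_to_angle(r, a, b)
    s = np.sin(theta) / r
    return np.array([x * s, y * s, np.cos(theta)])  # unit vector by construction
```

With the rays known, pose estimation proceeds from 2D matches exactly as the abstract outlines.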

  17. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  18. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 2 2011-01-01 2011-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  19. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 2 2012-01-01 2012-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  20. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 2 2013-01-01 2013-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  1. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging... are not authorized by an individually validated license of thermal imaging cameras controlled by...

  2. Camera for High-Speed THz Imaging

    NASA Astrophysics Data System (ADS)

    Zdanevičius, Justinas; Bauer, Maris; Boppel, Sebastian; Palenskis, Vilius; Lisauskas, Alvydas; Krozer, Viktor; Roskos, Hartmut G.

    2015-10-01

    We present a 24 × 24 pixel camera capable of high-speed THz imaging in power-detection mode. Each pixel of the sensor array consists of a pair of 150-nm NMOS transistors coupled to a patch antenna with resonance at 600 GHz. The camera can operate with a speed of up to 450 frames per second where it exhibits a minimum resolvable power of 10.5 nW per pixel. For a 30-Hz frame rate, the minimum resolvable power is 1.4 nW.

  3. Imaging spectrometer/camera having convex grating

    NASA Technical Reports Server (NTRS)

    Reininger, Francis M. (Inventor)

    2000-01-01

    An imaging spectrometer has fore-optics coupled to a spectral resolving system with an entrance slit extending in a first direction at an imaging location of the fore-optics for receiving the image, a convex diffraction grating for separating the image into a plurality of spectra of predetermined wavelength ranges; a spectrometer array for detecting the spectra; and at least one concave spherical mirror concentric with the diffraction grating for relaying the image from the entrance slit to the diffraction grating and from the diffraction grating to the spectrometer array. In one embodiment, the spectrometer is configured in a lateral mode in which the entrance slit and the spectrometer array are displaced laterally on opposite sides of the diffraction grating in a second direction substantially perpendicular to the first direction. In another embodiment, the spectrometer is combined with a polychromatic imaging camera array disposed adjacent said entrance slit for recording said image.

  4. 15 CFR 743.3 - Thermal imaging camera reporting.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 2 2014-01-01 2014-01-01 false Thermal imaging camera reporting. 743... REPORTING AND NOTIFICATION § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to...

  5. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  6. The Widespread Distribution of Swirls in Lunar Reconnaissance Orbiter Camera Images

    NASA Astrophysics Data System (ADS)

    Denevi, B. W.; Robinson, M. S.; Boyd, A. K.; Blewett, D. T.

    2015-10-01

    Lunar swirls, the sinuous high- and low-reflectance features that cannot be mentioned without the associated adjective "enigmatic," are of interest because of their link to crustal magnetic anomalies [1,2]. These localized magnetic anomalies create mini-magnetospheres [3,4] and may alter the typical surface modification processes or result in altogether distinct processes that form the swirls. One hypothesis is that magnetic anomalies may provide some degree of shielding from the solar wind [1,2], which could impede space weathering due to solar wind sputtering. In this case, swirls would serve as a way to compare areas affected by typical lunar space weathering (solar wind plus micrometeoroid bombardment) to those where space weathering is dominated by micrometeoroid bombardment alone, providing a natural means to assess the relative contributions of these two processes to the alteration of fresh regolith. Alternately, magnetic anomalies may play a role in the sorting of soil grains, such that the high-reflectance portion of swirls may preferentially accumulate feldspar-rich dust [5] or soils with a lower component of nanophase iron [6]. Each of these scenarios presumes a pre-existing magnetic anomaly; swirls have also been suggested to be the result of recent cometary impacts in which the remanent magnetic field is generated by the impact event [7]. Here we map the distribution of swirls using ultraviolet and visible images from the Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) [8,9]. We explore the relationship of the swirls to crustal magnetic anomalies [10], and examine regions with magnetic anomalies and no swirls.

  7. The registration of star image in multiple cameras

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Li, Yingchun; Zhang, Tinghua; Du, Lin

    2015-10-01

    As the performance of commercial camera sensors and the imaging quality of lenses improve, they become viable for space-target observation. Multiple cameras can further improve the detection ability of the system through image fusion. This paper studies the registration problem in multi-camera image fusion, given the imaging characteristics of a commercial camera, and puts forward an applicable method for star image registration. Experiments show that the registration accuracy can reach the subpixel level.
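The abstract does not name the registration algorithm; one standard route to subpixel translational registration of star fields is phase correlation. A sketch of the integer-pixel version follows (subpixel accuracy is then obtained by interpolating around the correlation peak); this is an illustrative technique, not the authors' exact method.

```python
import numpy as np

def phase_correlation_shift(ref, img):
    # Estimate the integer-pixel translation taking `ref` to `img`
    # from the phase of their cross-power spectrum.
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    F /= np.abs(F) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(F))  # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap indices past half the frame size to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

For star images, the sharp point sources make the correlation peak well defined, which is what enables subpixel refinement.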

  8. Cervical SPECT Camera for Parathyroid Imaging

    SciTech Connect

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 in the United States. Standard treatment is a highly invasive exploratory neck surgery called "parathyroidectomy". The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high-resolution (~1 mm) and high-sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  9. Calibrating Images from the MINERVA Cameras

    NASA Astrophysics Data System (ADS)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
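The bias/dark/flat reduction such a pipeline performs follows the standard CCD calibration equation; below is a minimal sketch under the usual assumptions (function and variable names are our own, not MINERVA's).

```python
import numpy as np

def calibrate_frame(raw, bias, dark, flat, exptime, dark_exptime):
    # Standard CCD reduction: remove the bias level, subtract the
    # exposure-scaled dark current, then divide by the normalized flat.
    dark_current = (dark - bias) * (exptime / dark_exptime)
    flat_field = flat - bias
    flat_norm = flat_field / np.median(flat_field)
    return (raw - bias - dark_current) / flat_norm
```

In practice, master bias, dark, and flat frames are medians of many exposures to suppress read noise and cosmic rays before this step.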

  10. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
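Of the methods listed, noise introduced by the pixel array is the basis of modern sensor-fingerprint (PRNU) identification. The sketch below is a deliberately simplified version of that idea, not the authors' exact procedure: extract a noise residual from each image, average residuals into a per-camera fingerprint, and correlate.

```python
import numpy as np

def noise_residual(img, k=3):
    # Residual = image minus a k x k box-filtered (denoised) version;
    # the residual retains the sensor's fixed pattern noise.
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    smooth = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= k * k
    return img - smooth

def camera_fingerprint(images):
    # Average residuals of many images from one camera; scene content
    # averages out while the fixed pattern noise reinforces.
    return np.mean([noise_residual(im) for im in images], axis=0)

def match_score(img, fingerprint):
    # Normalized correlation between a test residual and a fingerprint.
    r = noise_residual(img)
    r = r - r.mean()
    f = fingerprint - fingerprint.mean()
    return float((r * f).sum() / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))
```

A high score suggests the test image came from the fingerprinted camera; production systems use wavelet denoising and statistical decision thresholds rather than this box filter.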

  11. New insight into lunar impact melt mobility from the LRO camera

    USGS Publications Warehouse

    Bray, Veronica J.; Tornabene, Livio L.; Keszthelyi, Laszlo P.; McEwen, Alfred S.; Hawke, B. Ray; Giguere, Thomas A.; Kattenhorn, Simon A.; Garry, William B.; Rizk, Bashar; Caudill, C.M.; Gaddis, Lisa R.; van der Bogert, Carolyn H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact melts is surprisingly complex. We present evidence for multi-stage influx of impact melt into flow lobes and crater floor ponds. Our volume and cooling time estimates for the post-emplacement melt movements noted in LROC images suggest that new flows can emerge from melt ponds an extended time period after the impact event.

  12. Camera system for multispectral imaging of documents

    NASA Astrophysics Data System (ADS)

    Christens-Barry, William A.; Boydston, Kenneth; France, Fenella G.; Knox, Keith T.; Easton, Roger L., Jr.; Toth, Michael B.

    2009-02-01

    A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include: streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the LOC collection. This paper describes practical issues that were considered by EurekaVision to address key research questions for the study of fragile and unique cultural objects over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. The system was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America." It was also used to image sections of the Carta Marina 1516 map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.

  13. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    E-print Network

    Liu, Zhentao; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2015-01-01

    The information acquisition ability of a conventional camera is far below the Shannon limit because of the correlation between pixels of the image data. By applying sparse representation of images to reduce the redundancy of the image data, combined with compressive sensing theory, the spectral camera based on ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral image data cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon limit determined by information theory in optical imaging instruments.

  14. Image Reconstruction in the Gigavision Camera Feng Yang 1

    E-print Network

    Cortes, Corinna

    and can be used for night vision and astronomical imaging. One important aspect of the gigavision camera is how to estimate the light intensity through binary observations. We model the light intensity field. In conventional cameras, a lens focuses the incident light onto the image sensor. The optical signal is then converted

  15. Measurement of the nonuniformity of first responder thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police, firefighters, and emergency medical personnel are examples of first responders that are utilizing thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating the nonuniformity of thermal imaging cameras. Several commercially available uncooled focal plane array cameras were examined. Because of proprietary property issues, each camera was treated as a 'black box'. In these experiments, an extended-area blackbody (18 cm square) was placed very close to the objective lens of the thermal imaging camera. The resultant video output from the camera was digitized at a resolution of 640x480 pixels and a grayscale depth of 10 bits. The nonuniformity was calculated as the standard deviation of the digitized image pixel intensities divided by the mean of those pixel intensities. This procedure was repeated for each camera at several blackbody temperatures in the range from 30 °C to 260 °C. It was observed that the nonuniformity initially increases with temperature, then asymptotically approaches a maximum value. Nonuniformity also enters the calculation of the spatial frequency response, where it provides a noise floor. The testing procedures described herein are being developed as part of a suite of tests to be incorporated into a performance standard covering thermal imaging cameras for first responders.
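The nonuniformity metric as defined above is straightforward to compute from a digitized frame; a minimal sketch:

```python
import numpy as np

def nonuniformity(frame):
    # Nonuniformity per the described metric: standard deviation of the
    # digitized pixel intensities divided by their mean.
    f = np.asarray(frame, dtype=np.float64)
    return f.std() / f.mean()
```

A perfectly uniform response to the extended-area blackbody would give 0; higher values indicate stronger fixed-pattern variation across the focal plane array.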

  16. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images. PMID:18276966

  17. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shape of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed object to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstruction of the scenes, including light, color, texture, shape, and different views. To achieve the highest possible resolution, extraction of partial textures from visible surfaces was necessary. To recover the 3D shape and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented with the Matlab tool. The technique presented here also lets us simulate short videos by reconstructing a sequence of scenes separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.
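The triangulation step mentioned above, given intrinsic and extrinsic calibration, is commonly done with the linear (DLT) method; a sketch follows, with camera matrices that are illustrative rather than taken from the paper.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view contributes two rows of the
    # homogeneous constraint x ~ P X; the 3D point is the null vector of A.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest value
    return X[:3] / X[3]           # dehomogenize
```

P1 and P2 here are the 3x4 projection matrices K[R|t] obtained from geometric camera calibration, and x1, x2 are the matched pixel coordinates of the same scene point.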

  18. Dual-camera system for high-speed imaging in particle image velocimetry

    E-print Network

    Hashimoto, K; Hara, T; Onogi, S; Mouri, H

    2012-01-01

    Particle image velocimetry is an important technique in experimental fluid mechanics, for which it has been essential to use a specialized high-speed camera. However, the high speed comes at the expense of other aspects of camera performance, i.e., sensitivity and image resolution. Here, we demonstrate that high-speed imaging is also possible with a pair of still cameras.

  19. Laser speckle imaging using a consumer-grade color camera

    E-print Network

    Choi, Bernard

    Owen Yang and Bernard Choi, 2012. Laser speckle imaging (LSI) is a noninvasive optical imaging technique able to provide wide-field two-dimensional maps of moving particles. Raw laser speckle images are typically taken
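Raw speckle frames such as those described are reduced to contrast maps K = σ/⟨I⟩ over small sliding windows; below is a minimal sketch of that standard computation (window size is a typical choice, not one stated in the abstract).

```python
import numpy as np

def speckle_contrast(img, w=7):
    # Local speckle contrast K = sigma / mean over sliding w x w windows.
    # Lower K indicates faster-moving scatterers (more blurred speckle).
    img = np.asarray(img, dtype=np.float64)
    H, W = img.shape
    K = np.empty((H - w + 1, W - w + 1))
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            win = img[i:i + w, j:j + w]
            K[i, j] = win.std() / win.mean()
    return K
```

Production pipelines replace the explicit loops with box filters for speed, but the statistic is the same.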

  20. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
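The abstract does not disclose the exact combination formula; a generic normalized weighted-sum score of the kind described might look like the sketch below, where the metric names, ranges, and weights are placeholders rather than the paper's values.

```python
def combined_score(metrics, ranges, weights, lower_is_better=()):
    # Normalize each metric to [0, 1] against its expected range,
    # invert those where smaller is better (e.g. shutter lag),
    # and combine with a weighted sum scaled to 0-100.
    total, wsum = 0.0, 0.0
    for name, value in metrics.items():
        lo, hi = ranges[name]
        x = (value - lo) / (hi - lo)
        x = min(max(x, 0.0), 1.0)        # clamp outliers
        if name in lower_is_better:
            x = 1.0 - x                  # fast lag should score high
        total += weights[name] * x
        wsum += weights[name]
    return 100.0 * total / wsum
```

The design question such a paper addresses is precisely how to choose the ranges and weights so that quality and speed trade off sensibly in the single score.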

  1. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  2. Imaging Emission Spectra with Handheld and Cellphone Cameras

    ERIC Educational Resources Information Center

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  3. Mobile phone camera benchmarking: combination of camera speed and image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also defines a proposal for combined benchmarking metrics, which includes both quality and speed parameters.

  4. Onboard image compression for the HST Advanced Camera for Surveys

    E-print Network

    On-board image compression for the HST Advanced Camera for Surveys. Richard L. White and Ira The Advanced Camera for Surveys (ACS) produces very large 4096 × 4096 pixel images. We will have on-board image compression to the ground. This is the first time on-board compression has been included in a Hubble Space Telescope

  5. NICMOS IMAGING NICMOS images may be obtained in any camera with any of its associated spectral elements using

    E-print Network

    Schneider, Glenn

    NICMOS IMAGING NICMOS images may be obtained in any camera with any of its associated spectral … operational restrictions. Direct (continuum, line and broad-band) images may be taken in all three cameras. Coronographic images can be obtained only in Camera 2, Grism spectra in Camera 3, and Polarimetric images …

  6. Application of the CCD camera in medical imaging

    NASA Astrophysics Data System (ADS)

    Chu, Wei-Kom; Smith, Chuck; Bunting, Ralph; Knoll, Paul; Wobig, Randy; Thacker, Rod

    1999-04-01

    Medical fluoroscopy is a set of radiological procedures used for functional and dynamic studies of the digestive system. The major components of the imaging chain are an image intensifier, which converts x-ray information into an intensity pattern on its output screen, and a CCTV camera, which converts that pattern into video information displayed on a TV monitor. Responding properly to such a wide dynamic range in real time, as a fluoroscopy procedure requires, is very challenging. As in all other medical imaging studies, detail resolution is of great importance; without proper contrast, spatial resolution is compromised. The many inherent advantages of the CCD make it a suitable choice for dynamic studies, and CCD cameras have recently been introduced as the camera of choice for medical fluoroscopy imaging systems. The objective of our project was to investigate a newly installed CCD fluoroscopy system in the areas of contrast resolution, detail, and radiation dose.

  7. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    SciTech Connect

    Ralph James

    2009-10-27

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  8. ProxiScan™: A Novel Camera for Imaging Prostate Cancer

    ScienceCinema

    Ralph James

    2010-01-08

    ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize t

  9. An airborne four-camera imaging system for agricultural applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  10. The algorithm for generation of panoramic images for omnidirectional cameras

    NASA Astrophysics Data System (ADS)

    Lazarenko, Vasiliy P.; Yarishev, Sergey; Korotaev, Valeriy

    2015-05-01

    Omnidirectional cameras are used where a large field of view is important: they can give a complete 360° view along one direction. However, the distortion of omnidirectional cameras is large, which makes raw omnidirectional images hard to read. One way to view omnidirectional images in a readable form is to generate panoramic images from them; a panorama keeps the main advantage of the omnidirectional image, a large field of view. The algorithm for generating panoramas from omnidirectional images consists of several steps. Panoramas can be described as projections onto cylinders, spheres, cubes, or other surfaces that surround a viewing point; in practice, cylindrical, spherical, and cubic panoramas are the most common. In the first step we describe the panorama's field of view by creating a virtual surface (cylinder, sphere, or cube) as a matrix of 3D points in virtual object space. We then create a mapping table by finding, via the projection function, the coordinates of the image points corresponding to those 3D points in the omnidirectional image. In the last step we generate the panorama pixel by pixel from the original omnidirectional image using the mapping table. To find the projection function of the omnidirectional camera, we used the calibration procedure developed by Davide Scaramuzza, the Omnidirectional Camera Calibration Toolbox for Matlab. After calibration, the toolbox provides two functions that express the relation between a given pixel point and its projection onto the unit sphere. After the first run of the algorithm we obtain the mapping table, which can then be reused for real-time generation of panoramic images with minimal CPU cost.
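    The three steps above (virtual surface, mapping table, pixel-by-pixel lookup) can be sketched in a few lines. The projection function below is a hypothetical stand-in, a simple equidistant fisheye model, for the calibrated function that Scaramuzza's toolbox would provide; the structure of the pipeline is what matters.

```python
import numpy as np

def project_to_image(points_3d):
    # Hypothetical stand-in for a calibrated projection function (e.g. the
    # mapping produced by Scaramuzza's toolbox): maps 3D rays in camera
    # coordinates to (row, col) pixel coordinates. Here: an equidistant
    # fisheye model with assumed intrinsics, for illustration only.
    cr, cc, f = 240.0, 320.0, 150.0            # image center and focal scale
    x, y, z = points_3d[..., 0], points_3d[..., 1], points_3d[..., 2]
    theta = np.arctan2(np.hypot(x, y), z)      # angle from optical axis
    phi = np.arctan2(y, x)                     # azimuth around the axis
    r = f * theta
    return cr + r * np.sin(phi), cc + r * np.cos(phi)

def cylindrical_mapping_table(pano_h, pano_w):
    # Step 1: virtual cylinder of 3D points surrounding the viewpoint.
    az = np.linspace(-np.pi, np.pi, pano_w, endpoint=False)
    el = np.linspace(-0.5, 0.5, pano_h)        # cylinder height range
    azg, elg = np.meshgrid(az, el)
    pts = np.stack([np.cos(azg), np.sin(azg), elg], axis=-1)
    # Step 2: project the 3D points into the omnidirectional image.
    rows, cols = project_to_image(pts)
    return rows, cols                          # the mapping table

def render_panorama(omni_img, rows, cols):
    # Step 3: pixel-by-pixel lookup (nearest neighbour) via the table.
    r = np.clip(np.round(rows).astype(int), 0, omni_img.shape[0] - 1)
    c = np.clip(np.round(cols).astype(int), 0, omni_img.shape[1] - 1)
    return omni_img[r, c]
```

    Once the table is built, re-rendering each new frame is just the cheap lookup in `render_panorama`, which is what makes the real-time claim plausible.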

  11. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field; displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera, based on a 16-megapixel interline CCD with a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low-Reynolds-number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
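    The MART algorithm named in the abstract has a compact generic form: each ray's measured intensity multiplicatively corrects the voxels it intersects. The sketch below is a textbook dense-matrix version under assumed inputs, not the authors' plenoptic implementation.

```python
import numpy as np

def mart(W, I, n_iter=20, mu=1.0, eps=1e-12):
    """Multiplicative algebraic reconstruction technique (MART).

    W : (n_rays, n_voxels) weight matrix mapping voxel intensities to
        pixel measurements; I : (n_rays,) measured pixel intensities.
    Returns a non-negative voxel intensity estimate.
    """
    v = np.ones(W.shape[1])                   # positive initial guess
    for _ in range(n_iter):
        for i in range(W.shape[0]):           # one multiplicative update per ray
            proj = W[i] @ v                   # current forward projection
            if proj > eps and I[i] > eps:
                # Correct each voxel in proportion to its weight in the ray.
                v *= (I[i] / proj) ** (mu * W[i])
            elif I[i] <= eps:
                v[W[i] > 0] = 0.0             # a zero measurement zeroes the ray
    return v
```

    A practical appeal of MART for PIV is that the multiplicative update preserves non-negativity and drives voxels seen by dark pixels toward zero, which suits sparse particle fields.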

  12. Efficient height measurement method of surveillance camera image.

    PubMed

    Lee, Joong; Lee, Eung-Dae; Tark, Hyun-Oh; Hwang, Jin-Woo; Yoon, Do-Young

    2008-05-01

    As surveillance cameras are increasingly installed, their footage is often submitted as evidence of crime, but because of limited camera performance it yields very little detailed information, such as facial features or clothing. Height, however, is relatively insensitive to camera performance. This paper studied a height measurement method using images from a CCTV camera. Height information was obtained via photogrammetry: reference points in the photographed area were used to calculate the relationship between 3D space and the 2D image through linear and nonlinear calibration. Using this correlation, the paper proposes a height measurement method that projects a 3D virtual ruler onto the image. The method has been shown to give more stable values, within the range of data convergence, than existing methods. PMID:18096339
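    The linear-calibration step described above is commonly realized with the direct linear transform (DLT). The sketch below is a generic reconstruction under that assumption, not the paper's exact method: it recovers a 3×4 projection matrix from reference-point pairs and then projects a "virtual ruler" of candidate heights into the image.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    # Solve for the 3x4 projection matrix P (up to scale) from at least
    # six non-coplanar 3D/2D reference point pairs via SVD.
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)               # null-space vector as P

def project(P, X, Y, Z):
    # Homogeneous projection of a 3D point into the image.
    u, v, w = P @ np.array([X, Y, Z, 1.0])
    return u / w, v / w

def virtual_ruler(P, x0, y0, heights):
    # Project points along a vertical line at floor position (x0, y0)
    # for candidate heights; the height whose projected tick lands on the
    # person's head in the image is the estimated stature.
    return [project(P, x0, y0, h) for h in heights]
```

    Nonlinear calibration (lens distortion) would refine this linear model; the projection step itself is unchanged.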

  13. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

  14. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its hash to be totally different from the secure hash.
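    The sign-and-verify flow described above can be illustrated with a hash plus textbook RSA. The key below uses toy primes for illustration only; a real camera would use a vetted cryptographic library and proper key lengths.

```python
import hashlib

# Toy RSA keypair -- tiny primes, NOT secure; illustration only.
p, q = 1000003, 1000033            # both prime
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def image_hash(image_bytes):
    # Hash of the image file, reduced mod n so it fits the toy key.
    return int.from_bytes(hashlib.sha256(image_bytes).digest(), "big") % n

def sign_image(image_bytes):
    # Camera side: "encrypt" the hash with the private key -> signature.
    return pow(image_hash(image_bytes), d, n)

def verify_image(image_bytes, signature):
    # Verifier side: decrypt the signature with the public key and compare
    # against a freshly computed hash of the presented file.
    return pow(signature, e, n) == image_hash(image_bytes)
```

    Any alteration of the file changes its hash, so the decrypted signature no longer matches and verification fails.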

  15. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and camera can record and deliver more than 1700 full 640 × 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA lies in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  16. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is its imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. Consumers have a strong tendency to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the sharpness, potential for artifacts, sensor photographic speed, dynamic range, and exposure latitude from the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and intrinsic full-well capacity in electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems, and where possible the results are compared to imaging systems currently on the market.

  17. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the imaging-performance requirements on objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient, automated way for quality control or end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods, such as minimum resolvable temperature difference (MRTD), which give only a subjective overall result. The measurable parameters include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters such as the best-focus plane, image-plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  18. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  19. Multi-spectral image dissector camera system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The image dissector sensor for the Earth Resources Program is evaluated using contrast and reflectance data. The ground resolution obtainable for low contrast at the targeted signal to noise ratio of 1.8 was defined. It is concluded that the system is capable of achieving the detection of small, low contrast ground targets from satellites.

  20. Image-based camera motion estimation using prior probabilities

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Park, Sun Young; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. 
We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC to help reject improbable motion parameters caused by outliers and other problems with medical data. This paper presents the theoretical basis of our method along with preliminary results on indoor scenes and synthetic colon images.
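    One way to realize the prior-penalty idea is to subtract a weighted negative log-prior from each RANSAC hypothesis score. The sketch below uses a 2D translation model and a Gaussian prior as hypothetical, simplified stand-ins for the paper's full endoscope motion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian prior on the camera translation (dx, dy), standing
# in for a motion distribution fitted from tracker data.
PRIOR_MEAN = np.array([0.0, 0.0])
PRIOR_STD = np.array([2.0, 2.0])

def neg_log_prior(motion):
    # Negative log-density of the Gaussian prior (up to a constant).
    return 0.5 * np.sum(((motion - PRIOR_MEAN) / PRIOR_STD) ** 2)

def ransac_translation(src, dst, n_iter=200, tol=1.0, prior_weight=1.0):
    """RANSAC for a 2D translation with a prior penalty on the motion.

    Each hypothesis is scored as (inlier count) minus a penalty
    proportional to its negative log-prior, so improbable motions
    proposed by outlier matches are rejected.
    """
    best_motion, best_score = None, -np.inf
    for _ in range(n_iter):
        i = rng.integers(len(src))            # minimal sample: one match
        motion = dst[i] - src[i]
        residuals = np.linalg.norm(dst - (src + motion), axis=1)
        score = np.sum(residuals < tol) - prior_weight * neg_log_prior(motion)
        if score > best_score:
            best_motion, best_score = motion, score
    return best_motion
```

    `prior_weight` trades data fit against the prior; with heavy outlier contamination a larger weight more aggressively discards physically implausible hypotheses.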

  1. Coincidence ion imaging with a fast frame camera

    SciTech Connect

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
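    The centroiding step can be sketched with standard image labeling: threshold a frame, label connected bright regions, and record each spot's intensity-weighted centroid and integrated intensity (the quantity correlated with PMT peak heights). A generic SciPy-based sketch, not the authors' real-time 1 kHz implementation:

```python
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold):
    """Find ion-spot centroids and integrated intensities in one frame.

    Returns (centroids, intensities): per spot, the intensity-weighted
    (row, col) centroid and the summed pixel intensity.
    """
    labels, n = ndimage.label(frame > threshold)   # connected bright regions
    if n == 0:
        return [], []
    idx = np.arange(1, n + 1)
    centroids = ndimage.center_of_mass(frame, labels, idx)
    intensities = ndimage.sum(frame, labels, idx)  # for matching to ToF peaks
    return centroids, intensities
```

    Matching the per-spot intensities against peak heights in the simultaneous time-of-flight trace is what associates each spot with an arrival time for multi-hit events.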

  2. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  3. Radiometric cloud imaging with an uncooled microbolometer thermal infrared camera.

    PubMed

    Shaw, Joseph; Nugent, Paul; Pust, Nathan; Thurairajah, Brentha; Mizutani, Kohei

    2005-07-25

    An uncooled microbolometer-array thermal infrared camera has been incorporated into a remote sensing system for radiometric sky imaging. The radiometric calibration is validated and improved through direct comparison with spectrally integrated data from the Atmospheric Emitted Radiance Interferometer (AERI). With the improved calibration, the Infrared Cloud Imager (ICI) system routinely obtains sky images with radiometric uncertainty less than 0.5 W/(m² sr) for extended deployments in challenging field environments. We demonstrate the infrared cloud imaging technique with still and time-lapse imagery of clear and cloudy skies, including stratus, cirrus, and wave clouds. PMID:19498585

  4. A compact gamma camera for biological imaging

    SciTech Connect

    Bradley, E.L.; Cella, J.; Majewski, S.; Popov, V.; Jianguo Qian; Saha, M.S.; Smith, M.F.; Weisenberger, A.G.; Welsh, R.E.

    2006-02-01

    A compact detector, sized particularly for imaging a mouse, is described. The active area of the detector is approximately 46 mm × 96 mm. Two flat-panel Hamamatsu H8500 position-sensitive photomultiplier tubes (PSPMTs) are coupled to a pixellated NaI(Tl) scintillator which views the animal through a copper-beryllium (CuBe) parallel-hole collimator specially designed for ¹²⁵I. Although the PSPMTs have insensitive areas at their edges and there is a physical gap, corrections for scintillation light collection at the junction between the two tubes result in a uniform response across the entire rectangular area of the detector. The system described has been developed to optimize both sensitivity and resolution for in-vivo imaging of small animals injected with iodinated compounds. We demonstrate an in-vivo application of this detector, particularly to SPECT, by imaging mice injected with approximately 10-15 μCi of ¹²⁵I.

  5. Filtered backprojection reconstruction and redundancy in Compton camera imaging.

    PubMed

    Maxim, Voichița

    2014-01-01

    During the acquisition process with the Compton gamma-camera, integrals of the intensity distribution of the source over conical surfaces are measured; they represent the Compton projections of the intensity. The inversion of the Compton transform rests on a particular Fourier-slice theorem. This paper proposes a filtered backprojection algorithm for image reconstruction from planar Compton camera data. We show how different projections are related and how they may be combined in the tomographic reconstruction step. Considering a simulated Compton imaging system, we conclude that the proposed method yields accurate reconstructed images for simple sources. An elongation of the source in the direction orthogonal to the camera may be observed and is related to the truncation of the projections induced by the finite extent of the device. This phenomenon was previously observed with other reconstruction methods, e.g., iterative maximum-likelihood expectation maximization. The redundancy of the Compton transform is thus an important feature for the reduction of noise in Compton images, since the ideal assumptions of infinite width and observation time are never met in practice. We show that a selection operated on the set of data partially circumvents projection truncation, at the expense of enhanced noise in the images. PMID:24196864

  6. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiments are described. An image intensifier tube was used in combination with two brassboard power supplies to evaluate night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  7. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames and controls the timing of the array. The camera is intended for imaging of large extragalactic and Galactic fields; a large effort has been dedicated to exploring the possibility of achieving precise photometric measurements in the J, H, and K astronomical bands, with very promising results.

  8. Characterization of a PET Camera Optimized for Prostate Imaging

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Moses, William W.; Qi,Jinyi; Hu, Jicun; Wang, G.C.; Wilson, David; Oh, Sang; Huesman, RonaldH.; Derenzo, Stephen E.

    2005-11-11

    We present the characterization of a positron emission tomograph for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The distance between detector banks adjusts to allow patient access and to position the detectors as closely as possible for maximum sensitivity with patients of various sizes. Each bank is composed of two axial rows of 20 HR+ block detectors for a total of 80 detectors in the camera. The individual detectors are angled in the transaxial plane to point towards the prostate to reduce resolution degradation in that region. The detectors are read out by modified HRRT data acquisition electronics. Compared to a standard whole-body PET camera, our dedicated prostate camera has the same sensitivity and resolution, less background (fewer randoms and a lower scatter fraction) and a lower cost. We have completed construction of the camera. Characterization data and reconstructed images of several phantoms are shown. Sensitivity for a point source in the center is 946 cps/μCi. Spatial resolution is 4 mm FWHM in the central region.

  9. Measurement of camera image sensor depletion thickness with cosmic rays

    E-print Network

    J. Vandenbroucke; S. BenZvi; S. Bravo; K. Jensen; P. Karn; M. Meehan; J. Peacock; M. Plewa; T. Ruggles; M. Santander; D. Schultz; A. L. Simons; D. Tosi

    2015-10-30

    Camera image sensors can be used to detect ionizing radiation in addition to optical photons. In particular, cosmic-ray muons are detected as long, straight tracks passing through multiple pixels. The distribution of track lengths can be related to the thickness of the active (depleted) region of the camera image sensor through the known angular distribution of muons at sea level. We use a sample of cosmic-ray muon tracks recorded by the Distributed Electronic Cosmic-ray Observatory to measure the thickness of the depletion region of the camera image sensor in a commercial smart phone, the HTC Wildfire S. The track length distribution prefers a cosmic-ray muon angular distribution over an isotropic distribution. Allowing either distribution, we measure the depletion thickness to be between 13.9 μm and 27.7 μm. The same method can be applied to additional models of image sensor. Once measured, the thickness can be used to convert track length to incident polar angle on a per-event basis. Combined with a determination of the incident azimuthal angle directly from the track orientation in the sensor plane, this enables direction reconstruction of individual cosmic-ray events.
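    The per-event direction reconstruction described at the end of the abstract follows from simple geometry: a muon crossing a depletion layer of thickness t at polar angle θ leaves a projected track of length L = t·tan θ, so θ = atan(L / t). A minimal sketch (function and variable names are illustrative):

```python
import math

def muon_direction(track_len_um, azimuth_rad, depletion_um):
    """Reconstruct a muon direction from its track in the sensor plane.

    track_len_um : projected track length L in the pixel plane (microns)
    azimuth_rad  : azimuth phi read from the track orientation
    depletion_um : measured depletion thickness t (microns)
    Returns a unit direction vector, with z pointing into the sensor.
    """
    theta = math.atan2(track_len_um, depletion_um)   # incident polar angle
    return (math.sin(theta) * math.cos(azimuth_rad),
            math.sin(theta) * math.sin(azimuth_rad),
            math.cos(theta))
```

    A zero-length track thus corresponds to a vertically incident muon, and longer tracks to increasingly grazing incidence.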

  10. Camera assembly design proposal for SRF cavity image collection

    SciTech Connect

    Tuozzolo, S.

    2011-10-10

    This project seeks to collect images from the inside of a superconducting radio frequency (SRF) large grain niobium cavity during vertical testing. These images will provide information on multipacting and other phenomena occurring in the SRF cavity during these tests. Multipacting, a process that involves an electron buildup in the cavity and concurrent loss of RF power, is thought to be occurring near the cathode in the SRF structure. Images of electron emission in the structure will help diagnose the source of multipacting in the cavity. Multipacting sources may be eliminated with an alteration of geometric or resonant conditions in the SRF structure. Other phenomena, including unexplained light emissions previously discovered at SLAC, may be present in the cavity. In order to effectively capture images of these events during testing, a camera assembly needs to be installed to the bottom of the RF structure. The SRF assembly operates under extreme environmental conditions: it is kept in a dewar in a bath of 2K liquid helium during these tests, is pumped down to ultra-high vacuum, and is subjected to RF voltages. Because of this, the camera needs to exist as a separate assembly attached to the bottom of the cavity. The design of the camera is constrained by a number of factors that are discussed.

  11. Measurement of camera image sensor depletion thickness with cosmic rays

    E-print Network

    Vandenbroucke, J; Bravo, S; Jensen, K; Karn, P; Meehan, M; Peacock, J; Plewa, M; Ruggles, T; Santander, M; Schultz, D; Simons, A L; Tosi, D

    2015-01-01

    Camera image sensors can be used to detect ionizing radiation in addition to optical photons. In particular, cosmic-ray muons are detected as long, straight tracks passing through multiple pixels. The distribution of track lengths can be related to the thickness of the active (depleted) region of the camera image sensor through the known angular distribution of muons at sea level. We use a sample of cosmic-ray muon tracks recorded by the Distributed Electronic Cosmic-ray Observatory to measure the thickness of the depletion region of the camera image sensor in a commercial smart phone, the HTC Wildfire S. The track length distribution prefers a cosmic-ray muon angular distribution over an isotropic distribution. Allowing either distribution, we measure the depletion thickness to be between 13.9 μm and 27.7 μm. The same method can be applied to additional models of image sensor. Once measured, the thickness can be used to convert track length to incident polar angle on a per-event basis. Combined with ...

  12. Refocusing images and videos with a conventional compact camera

    NASA Astrophysics Data System (ADS)

    Kang, Lai; Wu, Lingda; Wei, Yingmei; Song, Hanchen; Yang, Zheng

    2015-03-01

    Digital refocusing is an interesting and useful tool for generating dynamic depth-of-field (DOF) effects in many types of photography such as portraits and creative photography. Since most existing digital refocusing methods rely on four-dimensional light field captured by special precisely manufactured devices or a sequence of images captured by a single camera, existing systems are either expensive for wide practical use or incapable of handling dynamic scenes. We present a low-cost approach for refocusing high-resolution (up to 8 mega pixels) images and videos based on a single shot using an easy to build camera-mirror stereo system. Our proposed method consists of four main steps, namely system calibration, image rectification, disparity estimation, and refocusing rendering. The effectiveness of our proposed method has been evaluated extensively using both static and dynamic scenes with various depth ranges. Promising experimental results demonstrate that our method is able to simulate various controllable realistic DOF effects. To the best of our knowledge, our method is the first that allows one to refocus high-resolution images and videos of dynamic scenes captured by a conventional compact camera.
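The final "refocusing rendering" step can be illustrated with a toy model. The sketch below (an assumption-laden simplification, not the authors' method) simulates a shallow depth of field by blurring each disparity layer in proportion to its distance from the chosen in-focus disparity:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with odd kernel size k (k=1 means no blur)."""
    if k <= 1:
        return img.astype(float).copy()
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0,
                              img.astype(float))
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, out)

def refocus(img, disparity, focus_disp, blur_per_disp=2):
    """Composite a synthetic-DOF image: each disparity layer is blurred in
    proportion to |disparity - focus_disp|, so the focused layer stays sharp."""
    out = np.zeros_like(img, dtype=float)
    for d in np.unique(disparity):
        k = 2 * int(round(blur_per_disp * abs(d - focus_disp))) + 1
        mask = disparity == d
        out[mask] = box_blur(img, k)[mask]
    return out
```

Pixels whose disparity equals the focus disparity pass through unchanged, while distant layers receive progressively wider blur kernels, which is the controllable DOF effect the paper describes.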

  13. Ingenieurspraxisbericht "Development of a Lightweight Miniature Camera with On-Board Image Processing"

    E-print Network

    Conradt, Jörg

    Ingenieurspraxisbericht "Development of a Lightweight Miniature Camera with On-Board Image Processing". Enrico Boner (3611596). Development of a Lightweight Miniature Camera with On-Board Image Processing for mobile robotics. In most current robotic systems, standard video cameras transmit captured images to a PC

  14. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS, Microscopic Image Projection System) is costly and not available in resource poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  15. Camera system resolution and its influence on digital image correlation

    DOE PAGESBeta

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.
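The central point, that the system MTF is the product of the component MTFs, can be made concrete. The sketch below (a textbook model, not the paper's measured MTFs) combines a diffraction-limited lens MTF with a square-pixel aperture MTF:

```python
import math

def lens_mtf(f, f_cutoff):
    """Diffraction-limited MTF of an ideal circular-aperture lens under
    incoherent illumination; f_cutoff = 1 / (wavelength * f-number)."""
    if f >= f_cutoff:
        return 0.0
    x = f / f_cutoff
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

def pixel_mtf(f, pitch):
    """Sampling MTF of a square pixel aperture: |sinc(f * pitch)|."""
    arg = math.pi * f * pitch
    return 1.0 if arg == 0 else abs(math.sin(arg) / arg)

def system_mtf(f, f_cutoff, pitch):
    """The combined camera-plus-lens MTF is the product of the components,
    so the weaker component always limits the system response."""
    return lens_mtf(f, f_cutoff) * pixel_mtf(f, pitch)
```

Because both factors are at most 1, the product never exceeds the weaker component, which is why the paper argues the resolution-limiting component (often the lens, not the pixel size) should drive speckle-size decisions.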

  16. Camera system resolution and its influence on digital image correlation

    SciTech Connect

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; Fleming, Darryn

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. This increasingly popular method has had little research on the influence of the imaging system resolution on the DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It will show that when making spatial resolution decisions (including speckle size) the resolution limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size, or better, by increasing the speckle size. The speckle-size and spatial resolution are now a function of the lens resolution rather than the more typical assumption of the pixel size. The study will demonstrate the tradeoffs associated with limited lens resolution.

  17. Parallel phase-sensitive three-dimensional imaging camera

    DOEpatents

    Smithpeter, Colin L. (Albuquerque, NM); Hoover, Eddie R. (Sandia Park, NM); Pain, Bedabrata (Los Angeles, CA); Hancock, Bruce R. (Altadena, CA); Nellums, Robert O. (Albuquerque, NM)

    2007-09-25

    An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g. a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene and processes an electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators, each having inputs of the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
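The phase-sensitive demodulation in this patent can be sketched for a single pixel. The code below is a simplified illustration (not the patented circuit): two of the parallel modulators are modeled as multipliers with reference signals 90 degrees apart, and the recovered phase is converted to range.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def demodulate(samples, n):
    """Multiply the photodetector signal by two reference copies 90 degrees
    apart (two of the parallel modulators) and average over one period."""
    i = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples)) / n
    q = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples)) / n
    return i, q

def range_from_phase(i, q, f_mod):
    """Round-trip phase to target range: r = c * phi / (4 * pi * f_mod)."""
    phi = math.atan2(q, i) % (2 * math.pi)
    return C * phi / (4 * math.pi * f_mod)

# One pixel observing a target 3 m away with a 10 MHz modulation:
f_mod, r_true = 10e6, 3.0
phi_true = 4 * math.pi * f_mod * r_true / C
n = 64
samples = [math.cos(2 * math.pi * k / n - phi_true) for k in range(n)]
i, q = demodulate(samples, n)
```

Averaging over an integer number of modulation periods cancels the double-frequency term, so `atan2(q, i)` recovers the phase delay and hence the range, per pixel and in parallel.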

  18. An Efficient Image Compressor for Charge Coupled Devices Camera

    PubMed Central

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain abundant complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a paired-basis posttransform is applied to the DWT coefficients. The pair of bases consists of the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
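The selection step, picking whichever basis leaves the coefficients sparser under an lp criterion, can be sketched as follows. This is a schematic reading of the abstract, not the paper's exact algorithm; the block size and p value are assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def hadamard_matrix(n):
    """Normalized Sylvester-Hadamard matrix (n must be a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)

def best_posttransform(block, p=0.5):
    """Apply both candidate bases to a block of DWT coefficients and keep
    the one whose transformed coefficients have the smaller lp 'norm',
    i.e. the sparser representation for the CS stage."""
    scores = {}
    for name, m in (("dct", dct_matrix(block.shape[0])),
                    ("hadamard", hadamard_matrix(block.shape[0]))):
        coeffs = m @ block @ m.T
        scores[name] = np.sum(np.abs(coeffs) ** p)
    return min(scores, key=scores.get)
```

With p < 1 the score rewards concentrating energy in few coefficients, which is why it serves as a sparsity proxy for choosing between the DCT and Hadamard bases.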

  19. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that accounts for OCR accuracy together with a newly introduced measure obtained by a semi-automatic procedure. PMID:20876019

  20. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period, daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. 
Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step to a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.

  1. PIV camera response to high frequency signal: comparison of CCD and CMOS cameras using particle image simulation

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Stanislas, M.; Coudert, S.

    2014-08-01

    We present a quantitative comparison between FlowMaster3 CCD and Phantom V9.1 CMOS cameras’ response in the scope of application to particle image velocimetry (PIV). First, the subpixel response is characterized using a specifically designed set-up. The crosstalk between adjacent pixels for the two cameras is then estimated and compared. Then, the camera response is experimentally characterized using particle image simulation. Based on a three-point Gaussian peak fitting, the bias and RMS errors between locations of simulated and real images for the two cameras are accurately calculated using a homemade program. The results show that, although the pixel response is not perfect, the optical crosstalk between adjacent pixels stays relatively low and the accuracy of the position determination of an ideal PIV particle image is much better than expected.
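The three-point Gaussian peak fit mentioned in this abstract is a standard PIV subpixel estimator and is easy to state in code. The sketch below is the textbook one-dimensional form (the study's own fitting program is not public); for a noise-free Gaussian the log-domain parabola fit is exact.

```python
import math

def gauss_peak_1d(i_m, i_0, i_p):
    """Three-point Gaussian subpixel interpolation: returns the offset of
    the true peak from the brightest pixel, given the intensities at the
    pixels left of, at, and right of the maximum."""
    lm, l0, lp = math.log(i_m), math.log(i_0), math.log(i_p)
    return (lm - lp) / (2 * (lm - 2 * l0 + lp))

# Synthetic particle image: a Gaussian of width sigma centered at x0 = 10.3,
# sampled at the integer pixels around its brightest pixel (x = 10).
sigma, x0 = 1.2, 10.3
intensity = lambda x: math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
offset = gauss_peak_1d(intensity(9), intensity(10), intensity(11))
```

Adding the recovered offset to the integer peak position gives the subpixel particle location; the bias and RMS errors the paper reports quantify how real sensors depart from this ideal.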

  2. Imaging of Venus from Galileo: Early results and camera performance

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.; Klaasen, K.P.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Greenberg, R.; Head, J.W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Fanale, F.P.; Ingersoll, A.P.; Pollock, J.B.; Morrison, D.; Clary, M.C.; Cunningham, W.; Breneman, H.

    1992-01-01

    Three images of Venus have been returned so far by the Galileo spacecraft following an encounter with the planet on UT February 10, 1990. The images, taken at effective wavelengths of 4200 and 9900 Å, characterize the global motions and distribution of haze near the Venus cloud tops and, at the latter wavelength, deep within the main cloud. Previously undetected markings are clearly seen in the near-infrared image. The global distribution of these features, which have maximum contrasts of 3%, is different from that recorded at short wavelengths. In particular, the "polar collar," which is omnipresent in short wavelength images, is absent at 9900 Å. The maximum contrast in the features at 4200 Å is about 20%. The optical performance of the camera is described and is judged to be nominal. © 1992.

  3. First experiences with ARNICA, the ARCETRI observatory imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.

    1994-03-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a common use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some preliminary considerations on photometric accuracy.

  4. MWIR COMIC imaging camera for the ADONIS adaptive optics system

    NASA Astrophysics Data System (ADS)

    Feautrier, Philippe; Beuzit, Jean-Luc; Lacombe, Francois; Petmezakis, Panayoti; Geoffray, Herve; Monin, Jean-Louis; Talureau, Bernard; Gigan, Pierre; Hubin, Norbert; Audaire, Luc

    1995-09-01

    A 1-5 micrometer astronomical infrared imaging camera, COMIC, is currently being developed for the ADONIS adaptive optics system, as a collaborative project of Observatoire de Paris and Observatoire de Grenoble under ESO (European Southern Observatory) contract. This camera is based on a 128 by 128 HgCdTe/CCD array built by the CEA-LETI-LIR (Grenoble, France). Among its main characteristics, this detector offers a very high storage capacity of 3×10⁶ e⁻ with a total system read-out noise of about 600 e⁻, which makes it particularly optimized for the 3-5 μm range. COMIC will be installed in the fall of 1995 at the output focus of the ADONIS AO system on the ESO 3.6-m telescope at La Silla (Chile).

  5. Uncooled detector, optics, and camera development for THz imaging

    NASA Astrophysics Data System (ADS)

    Pope, Timothy; Doucet, Michel; Dupont, Fabien; Marchese, Linda; Tremblay, Bruno; Baldenberger, Georges; Verrault, Sonia; Lamontagne, Frédéric

    2009-05-01

    A prototype THz imaging system based on modified uncooled microbolometer detector arrays, INO MIMICII camera electronics, and a custom f/1 THz optics has been assembled. A variety of new detector layouts and architectures have been designed; the detector THz absorption was optimized via several methods including integration of thin film metallic absorbers, thick film gold black absorbers, and antenna structures. The custom f/1 THz optics is based on high resistivity float-zone silicon with parylene anti-reflection coating matched to the wavelength region of interest. The integrated detector, camera electronics, and optics are combined with a 3 THz quantum cascade laser for initial testing and evaluation. Future work will include the integration of fully optimized detectors and packaging and the evaluation of the achievable NEP with an eye to future applications such as industrial inspection and stand-off detection.

  6. Image-intensifier camera studies of shocked metal surfaces

    SciTech Connect

    Engelke, R.P.; Thurston, R.S.

    1986-01-01

    A high-space-resolution image-intensifier camera with luminance gain of up to 5000 and exposure times as short as 30 ns has been applied to the study of the interaction of posts and welds with strongly shocked metal surfaces, which included super strong steels. The time evolution of a single experiment can be recorded by multiple pulsing of the camera. Phenomena that remain coherent for relatively long durations have been observed. An important feature of the hydrodynamic flow resulting from post-plate interactions is the creation of a wave that propagates outward on the plate; the flow blocks the explosive product gases from escaping through the plate for greater than 10 μs. Electron beam welds were ineffective in blocking product gases from escaping for even short periods of time.

  7. Image-based Visual Servoing for Nonholonomic Mobile Robots with Central Catadioptric Camera

    E-print Network

    Siena, Università di

    Image-based Visual Servoing for Nonholonomic Mobile Robots with Central Catadioptric Camera Gian for nonholonomic mobile robot equipped with a central catadioptric camera. This kind of vision sensor combines by an on-board omnidirectional camera. In IBVS the control law is directly designed in the image domain

  8. MECHANICAL ADVANCING HANDLE THAT SIMPLIFIES MINIRHIZOTRON CAMERA REGISTRATION AND IMAGE COLLECTION

    EPA Science Inventory

    Minirhizotrons in conjunction with a minirhizotron video camera system are becoming widely used tools for investigating root production and survival in a variety of ecosystems. Image collection with a minirhizotron camera can be time consuming and tedious particularly when hundre...

  9. Single-quantum dot imaging with a photon counting camera

    PubMed Central

    Michalet, X.; Colyer, R. A.; Antelman, J.; Siegmund, O.H.W.; Tremsin, A.; Vallerga, J.V.; Weiss, S.

    2010-01-01

    The expanding spectrum of applications of single-molecule fluorescence imaging ranges from fundamental in vitro studies of biomolecular activity to tracking of receptors in live cells. The success of these assays has relied on progress in organic and inorganic fluorescent probe development as well as improvements in the sensitivity of light detectors. We describe a new type of detector developed with the specific goal of ultra-sensitive single-molecule imaging. It is a wide-field, photon-counting detector providing high temporal and high spatial resolution information for each incoming photon. It can be used as a standard low-light level camera, but also allows access to a lot more information, such as fluorescence lifetime and spatio-temporal correlations. We illustrate the single-molecule imaging performance of our current prototype using quantum dots and discuss on-going and future developments of this detector. PMID:19689323

  10. On image sensor dynamic range utilized by security cameras

    NASA Astrophysics Data System (ADS)

    Johannesson, Anders

    2012-03-01

    The dynamic range is an important quantity used to describe an image sensor. Wide/High/Extended dynamic range is often brought forward as an important feature to compare one device to another. The dynamic range of an image sensor is normally given as a single number, which is often insufficient since a single number will not fully describe the dynamic capabilities of the sensor. A camera is ideally based on a sensor that can cope with the dynamic range of the scene. Otherwise it has to sacrifice some part of the available data. For a security camera the latter may be critical since important objects might be hidden in the sacrificed part of the scene. In this paper we compare the dynamic capabilities of some image sensors utilizing a visual tool. The comparison is based on the use case, common in surveillance, where low contrast objects may appear in any part of a scene that, through its uneven illumination, spans a high dynamic range. The investigation is based on real sensor data that has been measured in our lab and a synthetic test scene is used to mimic the low contrast objects. With this technique it is possible to compare sensors with different intrinsic dynamic properties as well as some capture techniques used to create an effect of increased dynamic range.
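The "single number" the abstract refers to is conventionally computed from full-well capacity and read noise. The sketch below shows that standard definition (the sensor values are made-up examples, not from the paper):

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Single-number sensor dynamic range, as the ratio of full-well
    capacity to read noise (both in electrons), expressed in dB and in
    photographic stops."""
    ratio = full_well_e / read_noise_e
    return 20 * math.log10(ratio), math.log2(ratio)

# A plausible (hypothetical) CMOS sensor: 20,000 e- full well, 5 e- read noise.
db, stops = dynamic_range(20_000, 5)
```

The paper's point is precisely that this single ratio hides where within the scene's brightness span the sensor performs well, which is why a visual comparison tool is proposed instead.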

  11. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  12. Volcanic plume characteristics determined using an infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Lopez, T.; Thomas, H. E.; Prata, A. J.; Amigo, A.; Fee, D.; Moriano, D.

    2015-07-01

    Measurements of volcanic emissions (ash and SO2) from small-sized eruptions at three geographically dispersed volcanoes are presented from a novel, multichannel, uncooled imaging infrared camera. Infrared instruments and cameras have been used previously at volcanoes to study lava bodies and to assess plume dynamics using high temperature sources. Here we use spectrally resolved narrowband (~ 0.5-1 μm bandwidth) imagery to retrieve SO2 and ash slant column densities (g m⁻²) and emission rates or fluxes from infrared thermal imagery at close to ambient atmospheric temperatures. The relatively fast sampling (0.1-0.5 Hz) of the multispectral imagery and the fast sampling (~ 1 Hz) of single channel temperature data permit analysis of some aspects of plume dynamics. Estimations of SO2 and ash mass fluxes, and total slant column densities of SO2 and fine ash in individual small explosions from Stromboli (Italy) and Karymsky (Russia), and total SO2 slant column densities and fluxes from Láscar (Chile) volcanoes, are provided. We evaluate the temporal evolution of fine ash particle sizes in ash-rich explosions at Stromboli and Karymsky and use these observations to infer the presence of at least two distinct fine ash modes, with mean radii of < 10 μm and > 10 μm. The camera and techniques detailed here provide a tool to quickly and remotely estimate fluxes of fine ash and SO2 gas and characterize eruption size.
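Converting the retrieved slant column densities into an emission rate follows a standard traverse integration. The sketch below is the generic form (plume speed, pixel size and column values are illustrative assumptions, not values from the paper):

```python
def mass_flux(slant_columns_g_m2, pixel_width_m, plume_speed_m_s):
    """Emission rate through a transect perpendicular to plume transport:
    flux = plume speed * sum(column density * pixel width) along the
    transect, giving g/s for columns in g/m^2."""
    cross_section = sum(c * pixel_width_m for c in slant_columns_g_m2)  # g/m
    return plume_speed_m_s * cross_section

# Hypothetical transect: 100 pixels of 2 m width, a uniform 1 g/m^2 SO2
# column, and a 5 m/s plume speed.
flux = mass_flux([1.0] * 100, 2.0, 5.0)  # -> 1000 g/s, i.e. 1 kg/s
```

In practice the transect columns come from the camera retrieval and the plume speed from feature tracking between the fast-sampled frames, which is one use of the 0.1-0.5 Hz imagery the abstract mentions.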

  13. Ceres Photometry and Albedo from Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

    Schröder, S. E.; Mottola, S.; Keller, H. U.; Li, J.-Y.; Matz, K.-D.; Otto, K.; Roatsch, T.; Stephan, K.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    The Dawn spacecraft is in orbit around dwarf planet Ceres. The onboard Framing Camera (FC) [1] is mapping the surface through a clear filter and 7 narrow-band filters at various observational geometries. Generally, Ceres' appearance in these images is affected by shadows and shading, effects which become stronger for larger solar phase angles, obscuring the intrinsic reflective properties of the surface. By means of photometric modeling we attempt to remove these effects and reconstruct the surface albedo over the full visible wavelength range. Knowledge of the albedo distribution will contribute to our understanding of the physical nature and composition of the surface.
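The shading removal described here amounts to dividing the observed radiance factor by a photometric disk function. As a hedged illustration (the Lommel-Seeliger form is a common first-order choice for airless bodies, not necessarily the model the authors adopt):

```python
import math

def lommel_seeliger_disk(inc_deg, emi_deg):
    """Lommel-Seeliger disk function 2*mu0 / (mu0 + mu), where mu0 and mu
    are the cosines of the incidence and emission angles."""
    mu0 = math.cos(math.radians(inc_deg))
    mu = math.cos(math.radians(emi_deg))
    return 2 * mu0 / (mu0 + mu)

def normalized_albedo(radiance_factor, inc_deg, emi_deg):
    """Divide out the disk function so pixels observed under different
    illumination geometries become directly comparable."""
    return radiance_factor / lommel_seeliger_disk(inc_deg, emi_deg)
```

After this normalization (and a phase-angle correction, omitted here), residual pixel-to-pixel variation reflects intrinsic surface albedo rather than topographic shading.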

  14. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2015-09-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.
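The absolute radiometric calibration discussed here follows the generic CCD/CMOS reduction chain. The sketch below is that generic chain only, with hypothetical parameter names; it is not the actual LROC WAC calibration pipeline or its coefficients.

```python
import numpy as np

def calibrate(dn, bias, dark, flat, exposure_s, gain):
    """Generic radiometric chain: subtract the bias level and the
    exposure-scaled dark current, divide by the flat field and exposure
    time, then scale by an absolute gain to reach physical radiance."""
    return gain * (dn.astype(float) - bias - dark * exposure_s) / (flat * exposure_s)
```

Inflight work like this paper's refines each factor (flat field, stray-light terms, absolute gain) against on-orbit observations, which is where the quoted uncertainties on spectral reflectance originate.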

  15. Image quality assessment of 2-chip color camera in comparison with 1-chip color and 3-chip color cameras in various lighting conditions: initial results

    NASA Astrophysics Data System (ADS)

    Adham Khiabani, Sina; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

    A 2-chip color camera, named UNB Super-camera, is introduced in this paper. Its image qualities in different lighting conditions are compared with those of a 1-chip color camera and a 3-chip color camera. The 2-chip color camera contains a high resolution monochrome (panchromatic) sensor and a low resolution color sensor. The high resolution color images of the 2-chip color camera are produced through an image fusion technique: UNB pan-sharp, also named FuzeGo. This fusion technique has been widely used to produce high resolution color satellite images from a high resolution panchromatic image and a low resolution multispectral (color) image for a decade. Now, the fusion technique is further extended to produce high resolution color still images and video images from a 2-chip color camera. The initial quality assessments of a research project proved that the light sensitivity, image resolution and color quality of the Super-camera (2-chip camera) are clearly better than those of the same generation 1-chip camera. It is also proven that the image quality of the Super-camera is much better than that of the same generation 3-chip camera when the light is low, such as in a normal room light condition or darker. However, the resolution of the Super-camera is the same as that of the 3-chip camera. These evaluation results suggest the potential of using a 2-chip camera to replace a 3-chip camera for capturing high quality color images, which would not only lower the cost of camera manufacture but also significantly improve light sensitivity.
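The fusion idea, injecting panchromatic detail into upsampled color bands, can be illustrated with a simple ratio-based scheme. Note this is the classic Brovey fusion as a stand-in; UNB pan-sharp (FuzeGo) is proprietary and its actual formulation differs.

```python
import numpy as np

def pan_sharpen_brovey(ms_lowres, pan):
    """Ratio-based (Brovey) fusion: nearest-neighbor upsample each
    low-resolution band, then scale by pan / intensity so the spatial
    detail comes from the high-resolution panchromatic channel.
    ms_lowres has shape (bands, h, w); pan has shape (h*s, w*s)."""
    scale = pan.shape[0] // ms_lowres.shape[1]
    up = np.stack([np.kron(band, np.ones((scale, scale))) for band in ms_lowres])
    intensity = up.mean(axis=0)
    return up * (pan / np.maximum(intensity, 1e-9))
```

When the panchromatic image carries no extra detail (i.e. it equals the upsampled intensity), the fusion leaves the color bands unchanged; real gains come from the pan channel's finer structure.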

  16. Noise evaluation of Compton camera imaging for proton therapy

    NASA Astrophysics Data System (ADS)

    Ortega, P. G.; Torres-Espallardo, I.; Cerutti, F.; Ferrari, A.; Gillam, J. E.; Lacasta, C.; Llosá, G.; Oliver, J. F.; Sala, P. R.; Solevi, P.; Rafecas, M.

    2015-02-01

    Compton Cameras emerged as an alternative for real-time dose monitoring techniques for Particle Therapy (PT), based on the detection of prompt-gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted onto the surface of a cone (Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated, using cone-surfaces backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT the first option might not be adequate, as the photon is not absorbed in general. However, the second option is less efficient. That is the reason to resort to spectral reconstructions, where the incoming γ energy is considered as a variable in the reconstruction inverse problem. Jointly with prompt gamma, secondary neutrons and scattered photons, not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. Also, high intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, meaning events which gather measurements from different incoming particles. The noise scenario is expected to be different if double or triple events are used, and consequently, the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events in the reconstructed image, evaluating their impact in the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton Telescope for PT monitoring is presented. 
    The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The particle range is then estimated from the reconstructed image obtained from a two- and three-event algorithm based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam is included in the simulations, which will affect the rate of particles entering the detector.
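The Compton-cone opening angle that underlies this reconstruction follows directly from the Compton formula. As a minimal sketch (the energies below are hypothetical examples, and real events need detector-response modeling on top of this kinematic relation):

```python
import math

M_E_C2 = 0.511  # electron rest energy, MeV

def compton_cone_angle(e_incident, e_deposited):
    """Opening angle of the Compton cone from the Compton formula:
    cos(theta) = 1 - m_e*c^2 * (1/E' - 1/E), with E' = E - E_dep the
    photon energy after the first scatter. Raises for energy pairs that
    are kinematically forbidden."""
    e_scattered = e_incident - e_deposited
    cos_theta = 1 - M_E_C2 * (1 / e_scattered - 1 / e_incident)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy combination")
    return math.acos(cos_theta)

# Example: a 4.44 MeV prompt gamma depositing 1 MeV in the first scatterer.
theta = compton_cone_angle(4.44, 1.0)
```

This is also why the abstract distinguishes two-interaction and three-interaction events: with only two interactions the formula needs the (often unknown) total incident energy, which motivates the spectral reconstruction treating the incoming γ energy as an unknown.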

  17. Noise evaluation of Compton camera imaging for proton therapy.

    PubMed

    Ortega, P G; Torres-Espallardo, I; Cerutti, F; Ferrari, A; Gillam, J E; Lacasta, C; Llosá, G; Oliver, J F; Sala, P R; Solevi, P; Rafecas, M

    2015-03-01

    Compton cameras have emerged as an alternative for real-time dose monitoring in Particle Therapy (PT), based on the detection of prompt gammas. As a consequence of the Compton scattering process, the gamma origin point can be restricted to the surface of a cone (the Compton cone). Through image reconstruction techniques, the distribution of the gamma emitters can be estimated using cone-surface backprojections of the Compton cones through the image space, along with more sophisticated statistical methods to improve the image quality. To calculate the Compton cone required for image reconstruction, either two interactions, the last being photoelectric absorption, or three scatter interactions are needed. Because of the high energy of the photons in PT, the first option may not be adequate, as the photon is in general not absorbed; however, the second option is less efficient. This is the reason to resort to spectral reconstructions, where the incoming γ energy is treated as a variable in the reconstruction inverse problem. Along with prompt gammas, secondary neutrons and scattered photons, which are not strongly correlated with the dose map, can also reach the imaging detector and produce false events. These events deteriorate the image quality. In addition, high-intensity beams can produce particle accumulation in the camera, which leads to an increase of random coincidences, i.e. events that gather measurements from different incoming particles. The noise scenario is expected to differ depending on whether double or triple events are used, and consequently the reconstructed images can be affected differently by spurious data. The aim of the present work is to study the effect of false events on the reconstructed image, evaluating their impact on the determination of the beam particle ranges. A simulation study that includes misidentified events (neutrons and random coincidences) in the final image of a Compton telescope for PT monitoring is presented. 
The complete chain of detection, from the beam particle entering a phantom to the event classification, is simulated using FLUKA. The range is then estimated from the reconstructed image obtained with two- and three-event algorithms based on Maximum Likelihood Expectation Maximization. The neutron background and random coincidences due to a therapeutic-like time structure are analyzed for mono-energetic proton beams. The time structure of the beam, which affects the rate of particles entering the detector, is included in the simulations. PMID:25658644
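The two-interaction cone geometry described above follows from the Compton scattering relation cos θ = 1 − m_e c² (1/E_scattered − 1/E_incident). A minimal sketch of that step (energies in MeV; the example event values are hypothetical, not taken from the paper's simulations):

```python
import math

ME_C2 = 0.511  # electron rest energy, MeV

def compton_cone_half_angle(e_incident, e_deposited):
    """Half-angle (radians) of the Compton cone for a two-interaction
    event: the first interaction deposits e_deposited, leaving a
    scattered photon of energy e_incident - e_deposited."""
    e_scattered = e_incident - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically impossible event")
    return math.acos(cos_theta)

# Example: a 4 MeV prompt gamma depositing 1 MeV in the scatterer
angle = compton_cone_half_angle(4.0, 1.0)
```

In a spectral reconstruction, e_incident would itself be an unknown of the inverse problem rather than a fixed input as it is here.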

  18. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    A real-time viewpoint image generation method is presented. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies for achieving a high sense of reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy for real-time, low-latency use. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals, with convergence obtained using approximate information about an object's depth. An image blended from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image can be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. Users can see their individual viewpoint image for left-and-right and back-and-forth movement toward the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
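The simple blending the abstract describes amounts to a linear mix of the two nearest camera images, weighted by the viewpoint's fractional position between them; the DFD depth percept itself is a property of human vision, not of the computation. A minimal sketch (array sizes and weights are illustrative, not the authors' implementation):

```python
import numpy as np

def intermediate_view(img_left, img_right, t):
    """Blend the two nearest camera images for a viewpoint at
    fractional position t in [0, 1] between them (0 = left camera).
    Per the DFD effect, the blend is perceived as an intermediate
    viewpoint rather than as a double exposure."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("viewpoint must lie between the two cameras")
    return (1.0 - t) * img_left + t * img_right

left = np.zeros((4, 4, 3))   # stand-in frame from camera i
right = np.ones((4, 4, 3))   # stand-in frame from camera i+1
mid = intermediate_view(left, right, 0.25)  # 25% toward the right camera
```

Being a per-pixel weighted sum, this runs comfortably in real time, which is the point of the method.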

  19. DYNAMIC ILM AN APPROACH TO INFRARED-CAMERA BASED DYNAMICAL LIFETIME IMAGING

    E-print Network

    DYNAMIC ILM - AN APPROACH TO INFRARED-CAMERA BASED DYNAMICAL LIFETIME IMAGING. K. Ramspeck et al. We present a calibration-free dynamic carrier lifetime imaging approach, based on infrared lifetime mapping. The lifetime is determined analytically from the signal ratio of infrared camera images recorded directly after turning

  20. LROC NAC Photometry as a Tool for Studying Physical and Compositional Properties of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Clegg, R. N.; Jolliff, B. L.; Boyd, A. K.; Stopar, J. D.; Sato, H.; Robinson, M. S.; Hapke, B. W.

    2014-10-01

    LROC NAC photometry has been used to study the effects of rocket exhaust on lunar soil properties, and here we apply the same photometric methods to place compositional constraints on regions of silicic volcanism and pure anorthosite on the Moon.

  1. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties for numerous potential applications. However, established semiconductor sensor technologies and optics are essentially planar, which poses great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned, integrated, and elastically transformed into hemispherical shapes. This imaging device demonstrates a nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. We have illustrated key features of the operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  2. Development of a dual modality imaging system: a combined gamma camera and optical imager.

    PubMed

    Jung, Jin Ho; Choi, Yong; Hong, Key Jo; Min, Byung Jun; Choi, Joon Young; Choe, Yearn Seong; Lee, Kyung-Han; Kim, Byung-Tae

    2009-07-21

    Several groups have reported the development of dual-modality gamma camera/optical imagers, which are useful tools for investigating biological processes in experimental animals. While previously reported dual-modality imaging instrumentation usually employed a separate gamma camera and optical imager, we designed a detector using a position-sensitive photomultiplier tube (PSPMT) that is capable of imaging both gamma rays and optical photons for a combined gamma camera and optical imager. The proposed system consists of a parallel-hole collimator, an array-type crystal and a PSPMT. The top surface of the collimator and array crystals is left open to allow optical photons to reach the PSPMT. Pulse height spectra and planar images were obtained using a Tc-99m source and a green LED to estimate gamma and optical imaging performance. When both gamma-ray and optical-photon signals were detected, the interference each signal caused in the other was evaluated. A mouse phantom and an ICR mouse containing a gamma-ray and optical-photon source were imaged to assess the imaging capabilities of the system. The sensitivity, energy resolution and spatial resolution of the gamma image acquired using Tc-99m were 1.1 cps/kBq, 26% and 2.1 mm, respectively. The spatial resolution of the optical image acquired with an LED was 3.5 mm. Interference from the optical-photon signal in the gamma pulse height spectrum was negligible. However, the pulse height spectrum of the optical-photon signal was affected by the gamma signal, and was therefore acquired between gamma-ray signals, with a correction using a veto gate. Gamma-ray and optical-photon images of the mouse phantom and ICR mouse were successfully obtained using the single detector. The experimental results indicate that both optical-photon and gamma-ray imaging are feasible using a detector based on the proposed PSPMT. PMID:19556682

  3. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processing-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness, and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed, and results of simulations consisting of both a 4 × 4 and a 16 × 16 object-space mesh are presented. A discussion of the limitations and potential areas of further study is also presented.

  4. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    SciTech Connect

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  5. Why do the image widths from the various cameras change?

    Atmospheric Science Data Center

    2014-12-08

    ... are different because the focal lengths of the MISR cameras change in relationship to the varying distance to the Earth for the different ... the D, C, B, and off-nadir A cameras are chosen so that each pixel is 275 m wide. However, the nadir A camera uses the same focal length as ...

  6. A low-cost dual-camera imaging system for aerial applicators

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  7. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    SciTech Connect

    Werry, S.M.

    1995-06-06

    This procedure documents the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all color video imaging system electronics within the 101-SY tank vapor space upon loss of nitrogen purge pressure.

  8. Experimental and modeling studies of imaging with curvilinear electronic eye cameras

    E-print Network

    Rogers, John A.

    Experimental and modeling studies of the imaging properties of planar, hemispherical, and elliptic parabolic electronic eye cameras are compared. -J. Yu, J. B. Geddes 3rd, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, "A hemispherical electronic eye

  9. Gaze Directed Camera Control for Face Image Acquisition Eric Sommerlade, Ben Benfold and Ian Reid

    E-print Network

    Oxford, University of

    Our system optimises the capturing of such images by using coarse gaze estimates from a static camera. [...] are in turn a function of the gaze angle. We validate the approach using a combination of simulated situations

  10. A CCD CAMERA-BASED HYPERSPECTRAL IMAGING SYSTEM FOR STATIONARY AND AIRBORNE APPLICATIONS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes a charge coupled device (CCD) camera-based hyperspectral imaging system designed for both stationary and airborne remote sensing applications. The system consists of a high performance digital CCD camera, an imaging spectrograph, an optional focal plane scanner, and a PC comput...

  11. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  12. Simulation of light-field camera imaging based on ray splitting Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Yuan, Yuan; Li, Sai; Shuai, Yong; Tan, He-Ping

    2015-11-01

    As microlens technology matures, studies of structural design and reconstruction algorithm optimization for light-field cameras are increasing. However, few of these studies address numerical physical simulation of the camera, and forward simulation by ray tracing is difficult because of its low efficiency. In this paper, we develop a Monte Carlo method (MCM) based on ray splitting and build a physical model of a light-field camera with a microlens array to simulate its imaging and refocusing processes. The model enables simulation of different imaging modalities and will be useful for camera structural design and the construction of error analysis systems.

  13. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Acquisition data and treatments for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality-control parameters, this program includes 1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (thanks to the Point Spread Function, PSF, acquisition) allows one to compute the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox, which is a free ImageJ plugin. Besides, this program offers the possibility to save the uniformity quality-control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality-control program. 
Finally, this toolkit is an easy and robust tool to perform quality control of gamma cameras and PET cameras based on standard computation parameters; it is free, runs on any type of computer, and will soon be downloadable from the net (http://rsb.info.nih.gov/ij/plugins or http://nucleartoolkit.free.fr).
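The MTF computation from a PSF acquisition mentioned above is, in sketch form, the normalized magnitude of the Fourier transform of the PSF. The Gaussian profile below is a synthetic stand-in for an acquired point-source image, not the plugin's actual code:

```python
import numpy as np

def mtf_from_psf(psf_1d):
    """Modulation Transfer Function as the magnitude of the Fourier
    transform of a 1-D PSF profile, normalized to 1 at zero frequency."""
    otf = np.fft.fft(psf_1d)        # optical transfer function
    mtf = np.abs(otf)
    return mtf / mtf[0]             # normalize DC component to 1

# Synthetic Gaussian PSF (sigma = 4 pixels) standing in for a
# point-source acquisition on a 64-pixel profile
x = np.arange(-32, 32)
psf = np.exp(-x**2 / (2 * 4.0**2))
mtf = mtf_from_psf(psf)
```

In practice one would extract the PSF profile from the acquired point-source image and report MTF values at clinically relevant spatial frequencies.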

  14. Three-dimensional imaging with 1.06 μm Geiger-mode ladar camera

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; McDonald, Paul; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison; Van Duyne, Stephen; Pauls, Greg; Gaalema, Stephen

    2012-06-01

    Three-dimensional (3D) topographic imaging using short-wavelength infrared (SWIR) Laser Detection and Ranging (LADAR) systems has been successfully demonstrated on various platforms. LADAR imaging provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. Recently Spectrolab has demonstrated a compact 32×32 LADAR camera with single-photon-level sensitivity and a small size, weight, and power (SWaP) budget. This camera has many special features such as non-uniform bias correction, a variable range-gate width from 2 to 6 microseconds, windowing for smaller arrays, and shorted-pixel protection. Boeing integrated this camera with a 1.06 μm pulsed laser and demonstrated 3D imaging on various platforms. In this presentation, the operational details of this camera and 3D imaging demonstrations on various platforms will be presented.

  15. High Performance Imaging Streak Camera for the National Ignition Facility

    SciTech Connect

    Opachich, Y. P.; Kalantar, D.; MacPhee, A.; Holder, J.; Kimbrough, J.; Bell, P. M.; Bradley, D.; Hatch, B.; Brown, C.; Landen, O.; Perfect, B. H.; Guidry, B.; Mead, A.; Charest, M.; Palmer, N.; Homoelle, D.; Browning, D.; Silbernagel, C.; Brienza-Larsen, G.; Griffin, M.; Lee, J. J.; Haugh, M. J.

    2012-01-01

    An x-ray streak camera platform has been characterized and implemented for use at the National Ignition Facility. The camera has been modified to meet the experiment requirements of the National Ignition Campaign and to perform reliably in conditions that produce high EMI. A train of temporal UV timing markers has been added to the diagnostic in order to calibrate the temporal axis of the instrument and the detector efficiency of the streak camera was improved by using a CsI photocathode. The performance of the streak camera has been characterized and is summarized in this paper. The detector efficiency and cathode measurements are also presented.

  16. Lunar TecTonics New LROC images show recent

    E-print Network

    Rhoads, James


  17. Fly's Eye camera system: optical imaging using a hexapod platform

    NASA Astrophysics Data System (ADS)

    Jaskó, Attila; Pál, András; Vida, Krisztián; Mészáros, László; Csépány, Gergely; Mező, György

    2014-07-01

    The Fly's Eye Project is a high-resolution, high-coverage time-domain survey in multiple optical passbands: our goal is to cover the entire visible sky above 30° horizontal altitude with a cadence of ~3 min. Imaging is going to be performed by 19 wide-field cameras mounted on a hexapod platform resembling a fly's eye. Using a hexapod developed and built by our team allows us to create a highly fault-tolerant instrument that uses the sky as a reference to define its own tracking motion. The virtual axis of the platform is automatically aligned with the Earth's rotational axis; therefore the same mechanics can be used independently of the geographical location of the device. Its enclosure makes it capable of autonomous observing and of withstanding harsh environmental conditions. We briefly introduce the electrical, mechanical and optical design concepts of the instrument and summarize our early results, focusing on sidereal tracking. Because the hexapod design makes the construction independent of the actual location, it is considerably easier to build, install and operate a network of such devices around the world.

  18. Modelling of Camera Phone Capture Channel for JPEG Colour Barcode Images

    NASA Astrophysics Data System (ADS)

    Tan, Keng T.; Ong, Siong Khai; Chai, Douglas

    As camera phones have permeated our everyday lives, the two-dimensional (2D) barcode has attracted researchers and developers as a cost-effective ubiquitous computing tool. A variety of 2D barcodes and their applications have been developed. Often, only monochrome 2D barcodes are used, due to their robustness in the uncontrolled operating environment of camera phones. However, we are seeing an emerging use of colour 2D barcodes for camera phones. Nonetheless, using a greater number of colours introduces errors that can negatively affect the robustness of barcode reading. This is especially true when developing a 2D barcode for camera phones that capture and store these barcode images in the baseline JPEG format. This paper presents one aspect of the errors introduced by such camera phones by modelling the camera-phone capture channel for JPEG colour barcode images.

  19. A New Lunar Atlas: Mapping the Moon with the Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Robinson, M. S.; Boyd, A.; Sato, H.

    2012-12-01

    The Lunar Reconnaissance Orbiter (LRO) spacecraft launched in June 2009 and began systematically mapping the lunar surface, providing a priceless dataset for the planetary science community and future mission planners. From 20 September 2009 to 11 December 2011, the spacecraft was in a nominal 50 km polar orbit, except for two month-long periods when a series of spacecraft maneuvers enabled low-altitude flyovers (as low as 22 km) of key exploration and scientifically interesting targets. One of the instruments, the Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) [1], captured nearly continuous synoptic views of the illuminated lunar surface. The WAC is a 7-band (321, 360, 415, 566, 604, 643, 689 nm) push-frame imager with a field of view of 60° in color mode and 90° in monochrome mode. This broad field of view enables the WAC to reimage nearly 50% (at the equator, where the orbit tracks are spaced the furthest apart) of the terrain it imaged in the previous orbit. The visible bands of map-projected WAC images have a pixel scale of 100 m, while the UV bands have a pixel scale of 400 m due to 4×4 pixel on-chip binning that increases the signal-to-noise ratio. The nearly circular polar orbit and short (two-hour) orbital periods enable seamless mosaics of broad areas of the surface with uniform lighting and resolution. In March of 2011, the LROC team released the first version of the global monochrome (643 nm) morphologic map [2], which comprised 15,000 WAC images collected over three periods. With the over 130,000 WAC images collected while the spacecraft was in the 50 km orbit, a new set of mosaics is being produced by the LROC team and will be released to the Planetary Data System. These new maps include an updated morphologic map with an improved set of images (limiting illumination variations and gores due to off-nadir observations by other instruments) and a new photometric correction derived from the LROC WAC dataset. 
A higher-Sun (lower incidence angle) mosaic will also be released; this map has minimal shadows and highlights albedo differences. In addition, seamless regional WAC mosaics acquired under multiple lighting geometries (sunlight coming from the east, overhead, and west) will be produced for key areas of interest. These new maps use the latest terrain model (LROC WAC GLD100) [3], updated spacecraft ephemerides provided by the LOLA team [4], and an improved WAC distortion model [5] to provide accurate placement of each WAC pixel on the lunar surface. References: [1] Robinson et al. (2010) Space Sci. Rev. [2] Speyerer et al. (2011) LPSC, #2387. [3] Scholten et al. (2012) JGR. [4] Mazarico et al. (2012) J. of Geodesy. [5] Speyerer et al. (2012) ISPRS Congress.

  20. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and a fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute controlled head-motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head-motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329×246 pixels (one-quarter of the original 658×492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering the requirements for systems measuring pulse rate over time windows of sufficient length.
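The two downsampling schemes compared in the study can be illustrated in a few lines: zero-order reduction keeps every other pixel, while a simple bilinear-style reduction averages each 2×2 block, halving each dimension (as in the 658×492 to 329×246 case). A sketch under those assumptions, not the authors' processing pipeline:

```python
import numpy as np

def zero_order_downsample(img):
    """Zero-order (nearest-neighbor) 2x reduction: keep every other pixel."""
    return img[::2, ::2]

def bilinear_downsample(img):
    """Bilinear-style 2x reduction: average each 2x2 block of pixels."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

frame = np.arange(16.0).reshape(4, 4)   # toy 4x4 "video frame"
nn = zero_order_downsample(frame)       # discards 3 of every 4 pixels
bl = bilinear_downsample(frame)         # averages each 2x2 block
```

The averaging variant acts as a mild low-pass filter before subsampling, which is why the two schemes could plausibly behave differently under noise.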

  1. Face acquisition camera design using the NV-IPM image generation tool

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.; Choi, Hee-Sue; Reynolds, Joseph P.

    2015-05-01

    In this paper, we demonstrate the utility of the Night Vision Integrated Performance Model (NV-IPM) image generation tool by using it to create a database of face images with controlled degradations. Available face recognition algorithms can then be used to directly evaluate camera designs using these degraded images. By controlling camera effects such as blur, noise, and sampling, we can analyze algorithm performance and establish a more complete performance standard for face acquisition cameras. The ability to accurately simulate imagery and directly test with algorithms not only improves the system design process but also greatly reduces development cost.
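The controlled degradations mentioned (blur, noise, sampling) can be sketched directly in NumPy. The function below is a crude, hypothetical stand-in for what the NV-IPM image generation tool produces, not its actual API:

```python
import numpy as np

def degrade(img, blur_kernel=3, noise_sigma=5.0, downsample=2, seed=0):
    """Apply a box blur, additive Gaussian noise, and coarser sampling
    to a grayscale image -- a crude stand-in for controlled camera
    degradations used to stress face-recognition algorithms."""
    img = img.astype(float)
    # Box blur via a 2-D moving average over a k x k window
    k = blur_kernel
    pad = np.pad(img, k // 2, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    # Additive sensor noise
    rng = np.random.default_rng(seed)
    noisy = blurred + rng.normal(0.0, noise_sigma, img.shape)
    # Coarser spatial sampling, clipped back to 8-bit range
    return np.clip(noisy[::downsample, ::downsample], 0, 255)

face = np.full((8, 8), 128.0)   # toy stand-in for a face image
degraded = degrade(face)
```

Sweeping blur_kernel, noise_sigma, and downsample over a face database, then scoring a recognition algorithm at each setting, reproduces the kind of performance-vs-degradation curve the paper describes.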

  2. Cloud level winds from the Venus Express Monitoring Camera imaging

    NASA Astrophysics Data System (ADS)

    Khatuntsev, I. V.; Patsaeva, M. V.; Titov, D. V.; Ignatiev, N. I.; Turin, A. V.; Limaye, S. S.; Markiewicz, W. J.; Almeida, M.; Roatsch, Th.; Moissl, R.

    2013-09-01

    Six years of continuous monitoring of Venus by European Space Agency’s Venus Express orbiter provides an opportunity to study dynamics of the atmosphere our neighbor planet. Venus Monitoring Camera (VMC) on-board the orbiter has acquired the longest and the most complete so far set of ultra violet images of Venus. These images enable a study the cloud level circulation by tracking motion of the cloud features. The highly elliptical polar orbit of Venus Express provides optimal conditions for observations of the Southern hemisphere at varying spatial resolution. Out of the 2300 orbits of Venus Express over which the images used in the study cover about 10 Venus years. Out of these, we tracked cloud features in images obtained in 127 orbits by a manual cloud tracking technique and by a digital correlation method in 576 orbits. Total number of wind vectors derived in this work is 45,600 for the manual tracking and 391,600 for the digital method. This allowed us to determine the mean circulation, its long-term and diurnal trends, orbit-to-orbit variations and periodicities. We also present the first results of tracking features in the VMC near-IR images. In low latitudes the mean zonal wind at cloud tops (67 ± 2 km following: Rossow, W.B., Del Genio, A.T., Eichler, T. [1990]. J. Atmos. Sci. 47, 2053-2084) is about 90 m/s with a maximum of about 100 m/s at 40-50°S. Poleward of 50°S the average zonal wind speed decreases with latitude. The corresponding atmospheric rotation period at cloud tops has a maximum of about 5 days at equator, decreases to approximately 3 days in middle latitudes and stays almost constant poleward from 50°S. The mean poleward meridional wind slowly increases from zero value at the equator to about 10 m/s at 50°S and then decreases to zero at the pole. The error of an individual measurement is 7.5-30 m/s. Wind speeds of 70-80 m/s were derived from near-IR images at low latitudes. 
The VMC observations indicate a long-term trend for the zonal wind speed at low latitudes to increase from 85 m/s at the beginning of the mission to 110 m/s by the middle of 2012. VMC UV observations also showed significant short-term variations of the mean flow. The velocity difference between consecutive orbits in the region of the mid-latitude jet could reach 30 m/s, which likely indicates vacillation of the mean flow between a jet-like regime and quasi-solid-body rotation at mid-latitudes. Fourier analysis revealed periodicities in the zonal circulation at low latitudes. Within the equatorial region, up to 35°S, the zonal wind shows an oscillation with a period of 4.1-5 days (4.83 days on average), which is close to the super-rotation period at the equator. The wave amplitude is 4-17 m/s and decreases with latitude, a feature of the Kelvin wave. The VMC observations showed a clear diurnal signature. A minimum in the zonal speed was found close to noon (11-14 h) and maxima in the morning (8-9 h) and in the evening (16-17 h). The meridional component peaks in the early afternoon (13-15 h) at around 50°S latitude. The minimum of the meridional component is located at low latitudes in the morning (8-11 h). The horizontal divergence of the mean cloud motions associated with the diurnal pattern suggests upwelling motions in the morning at low latitudes and downwelling flow in the afternoon in the cold-collar region.
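    The relation between the quoted zonal wind speeds and the atmospheric rotation periods is the one-line formula T = 2πR·cos(φ)/u. A minimal sketch, assuming a Venus radius of 6051.8 km plus the 67 km cloud-top altitude cited above:

```python
import math

def rotation_period_days(zonal_wind_ms, latitude_deg, cloud_top_km=67.0):
    """Atmospheric rotation period implied by a zonal wind speed:
    T = 2*pi*R*cos(lat) / u, where R is the planetary radius plus the
    cloud-top altitude (Venus radius ~6051.8 km, cloud tops ~67 km)."""
    radius_m = (6051.8 + cloud_top_km) * 1e3
    circumference = 2.0 * math.pi * radius_m * math.cos(math.radians(latitude_deg))
    return circumference / zonal_wind_ms / 86400.0  # seconds -> days

# ~90 m/s at the equator gives a rotation period of about 5 days,
# consistent with the cloud-top super-rotation described in the abstract.
print(round(rotation_period_days(90.0, 0.0), 1))
```

At 45°S with the ~100 m/s jet, the same relation gives roughly 3 days, matching the mid-latitude figure quoted above.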

  3. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  4. Boston University Computer Science Technical Report No. BUCSTR2011007 Camera Canvas: Image Editing Software for

    E-print Network

    Camera Canvas: Image Editing Software for People with Disabilities. Christopher Kwan and Margrit Betke, Image and Video Computing, {ckwan, betke}@cs.bu.edu. Abstract: We developed Camera Canvas, photo editing and picture drawing software … additional user studies and improving the software based on feedback. Keywords: Assistive Technology, Camera

  5. Sensor Fingerprint Digests for Fast Camera Identification from Geometrically Distorted Images

    E-print Network

    Fridrich, Jessica

    Sensor Fingerprint Digests for Fast Camera Identification from Geometrically Distorted Images. Abstract: In camera identification using sensor fingerprint, it is absolutely essential that the fingerprint … to a geometrical transformation, fingerprint detection becomes significantly more complicated. Besides

  6. 2D Hyperspectral Frame Imager Camera Data in Photogrammetric Mosaicking

    NASA Astrophysics Data System (ADS)

    Mäkeläinen, A.; Saari, H.; Hippi, I.; Sarkeala, J.; Soukkamäki, J.

    2013-08-01

    A new 2D hyperspectral frame camera system has been developed by VTT (Technical Research Center of Finland) and Rikola Ltd. It is a frame-based, very light camera with an RGB-NIR sensor, suitable for lightweight, cost-effective UAV aircraft. MosaicMill Ltd. has converted the camera data into a format suitable for photogrammetric processing, and the camera's geometric accuracy and stability have been evaluated to guarantee the accuracies required for end-user applications. MosaicMill Ltd. has also applied its EnsoMOSAIC technology to process the hyperspectral data into orthomosaics. This article describes the main steps and results of applying a hyperspectral sensor to orthomosaicking. The most promising results, as well as the challenges, in agriculture and forestry are also described.

  7. A Prediction Method of TV Camera Image for Space Manual-control Rendezvous and Docking

    NASA Astrophysics Data System (ADS)

    Zhen, Huang; Qing, Yang; Wenrui, Wu

    Space manual-control rendezvous and docking (RVD) is a key technology for accomplishing the RVD mission in manned space engineering, especially when the automatic control system is out of work. The pilot on the chase spacecraft manipulates the hand-stick using the image of the target spacecraft captured by a TV camera. From the TV image, the relative position and attitude of the chase and target spacecraft can be determined. Therefore, the size, position, brightness, and shadow of the target in the TV camera image are key to guaranteeing the success of manual-control RVD. A method of predicting the on-orbit TV camera image at different relative positions and lighting conditions during the RVD process is discussed. First, the basic principle of capturing the image of the cross drone on the target spacecraft with a TV camera is analyzed theoretically, based on which the strategy of manual-control RVD is discussed in detail. Second, the relationship between the displayed size or position and the real relative distance of the chase and target spacecraft is presented; the brightness of and reflection by the target spacecraft under different lighting conditions are described; and the shadow cast on the cross drone by the chase or target spacecraft is analyzed. Third, a method of predicting on-orbit TV camera images for a given orbit and lighting condition is provided, and the characteristics of the TV camera image during RVD are analyzed. Finally, the size, position, brightness, and shadow of the target spacecraft in the TV camera image for a typical orbit are simulated. The result, obtained by comparing the simulated images with the real images captured by the TV camera on the Shenzhou manned spaceship, shows that the prediction method is reasonable.

  8. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. In addition to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector with 1024 by 1024 pixels provides the stability needed for a multiyear mission and for the high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface at medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. 
Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  9. Design and control of a thermal stabilizing system for a MEMS optomechanical uncooled infrared imaging camera

    E-print Network

    Horowitz, Roberto

    Jongeun Choi, Joji Yamaguchi, Simon Morales, Roberto Horowitz, Yang Zhao, Arunava … development by the authors of this paper [2,3]. Infrared (IR) vision is an indispensable technology for night

  10. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows one to see concealed objects without contact with a person, and the camera poses no danger to the person. Obviously, the efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines its detection capabilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of the THz image may improve the image quality many times over without any additional engineering effort. Therefore, developing modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that will allow one to see a banknote in a person's pocket without any real contact. Modern algorithms for computer processing of THz images also make it possible to see objects inside the human body using a temperature trace on the human skin. This circumstance essentially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the capabilities achieved at present for detecting both concealed objects and clothing components through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of both THz radiation emitted by an incandescent lamp and an image reflected from a ceramic floor plate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  11. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquiring multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for use in our augmented reality visualization system for laparoscopic surgery.
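    The TRE figures quoted above are summary statistics over distances between corresponding fiducial points. A minimal sketch of that computation; the point sets below are hypothetical, not data from the paper:

```python
import math
import statistics

def target_registration_error(predicted_pts, measured_pts):
    """Mean and standard deviation of Euclidean distances between
    corresponding 3D points, the usual way TRE is summarized
    (e.g. 'mean TRE = 1.18 +/- 0.35 mm'). Points are (x, y, z) tuples."""
    dists = [math.dist(p, m) for p, m in zip(predicted_pts, measured_pts)]
    return statistics.mean(dists), statistics.stdev(dists)

# Hypothetical fiducial positions (mm): predicted by the calibrated
# camera model vs. measured by the optical tracker.
pred = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
meas = [(0.5, 0.0, 0.0), (10.0, 1.0, 0.0), (0.0, 10.0, 1.5)]
mean_tre, sd_tre = target_registration_error(pred, meas)
print(mean_tre, sd_tre)  # -> 1.0 0.5
```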

  12. A 58 x 62 pixel Si:Ga array camera for 5 - 14 micron astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Folz, W. C.; Woods, L. A.; Wooldridge, J. B.

    1989-01-01

    A new infrared array camera system has been successfully applied to high-background 5-14 micron astronomical imaging photometry observations, using a hybrid 58 x 62 pixel Si:Ga array detector. The off-axis reflective optical design, incorporating a parabolic camera mirror, circular variable filter wheel, and cold aperture stop, produces diffraction-limited images with negligible spatial distortion and minimum thermal background loading. The camera electronics architecture is divided into three subsystems: (1) a high-speed analog front end, including a 2-channel preamp module, array address timing generator, and bias power supplies; (2) two 16-bit, 3-microsec-per-conversion A/D converters interfaced to an arithmetic array processor; and (3) an LSI 11/73 camera control and data analysis computer. The background-limited observational noise performance of the camera at the NASA/IRTF telescope is NEFD (1 sigma) = 0.05 Jy/pixel min^(1/2).

  13. Myocardial Perfusion Imaging with a Solid State Camera: Simulation of a Very Low Dose Imaging Protocol

    PubMed Central

    Nakazato, Ryo; Berman, Daniel S.; Hayes, Sean W.; Fish, Mathews; Padgett, Richard; Xu, Yuan; Lemley, Mark; Baavour, Rafael; Roth, Nathaniel; Slomka, Piotr J.

    2012-01-01

    High-sensitivity dedicated cardiac camera systems provide an opportunity to lower injected doses for SPECT myocardial perfusion imaging (MPI), but the exact limits for lowering doses have not been determined. List-mode data acquisition allows reconstruction from various fractions of the acquired counts, permitting a simulation of gradually lower administered doses. We aimed to determine the feasibility of very-low-dose MPI by exploring the minimal count level in the myocardium for accurate MPI. Methods: Seventy-nine patients were studied (mean body mass index 30.0 ± 6.6, range 20.2–54.0 kg/m2) who underwent 1-day standard-dose 99mTc-sestamibi exercise or adenosine rest/stress MPI for clinical indications, employing a Cadmium Zinc Telluride dedicated cardiac camera. Imaging time was 14 min with 803 ± 200 MBq (21.7 ± 5.4 mCi) of 99mTc injected at stress. To simulate clinical scans with a lower dose at that imaging time, we reframed the list-mode raw data to contain fractions of the counts of the original scan. Accordingly, 6 stress-equivalent datasets were reconstructed, corresponding to each fraction of the original scan. Automated QPS/QGS software was used to quantify total perfusion deficit (TPD) and ejection fraction (EF) for all 553 datasets. The minimal acceptable count was determined based on a previous report of the repeatability of same-day, same-injection Anger camera studies. Pearson correlation coefficients and the SD of differences in TPD for all scans were calculated. Results: The correlations of quantitative perfusion and function analysis were excellent for both global and regional analysis on all simulated low-count scans (all r ≥ 0.95, p < 0.0001). The minimal acceptable count was determined to be 1.0 million counts for the left ventricular region. At this count level, the SD of differences was 1.7% for TPD and 4.2% for EF. This count level would correspond to a 92.5 MBq (2.5 mCi) injected dose for the 14-min acquisition. 
Conclusion: Images with 1.0 million myocardial counts appear sufficient to maintain excellent agreement in quantitative perfusion and function parameters compared with those determined from 8.0 million count images. With a dedicated cardiac camera, these images could be obtained over 10 minutes with an effective radiation dose of less than 1 mSv without significant sacrifice in accuracy. PMID:23321457
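    The two agreement statistics reported above, the Pearson correlation coefficient and the SD of differences, can be sketched as follows; the TPD values below are hypothetical, not the study's data:

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

# Hypothetical TPD values (%) from full-count scans vs. a reduced-count
# reconstruction of the same list-mode data.
tpd_full = [2.0, 5.0, 11.0, 7.0, 15.0]
tpd_low  = [2.5, 4.5, 12.0, 7.5, 14.0]
diffs = [a - b for a, b in zip(tpd_full, tpd_low)]
print(pearson_r(tpd_full, tpd_low))   # correlation between dose levels
print(statistics.stdev(diffs))        # SD of differences, as in the paper
```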

  14. Application of spatial frequency response as a criterion for evaluating thermal imaging camera performance

    NASA Astrophysics Data System (ADS)

    Lock, Andrew; Amon, Francine

    2008-04-01

    Police officers, firefighters, and emergency medical personnel are examples of first responders who use thermal imaging cameras in a very practical way every day. However, few performance metrics have been developed to assist first responders in evaluating the performance of thermal imaging technology. This paper describes one possible metric for evaluating spatial resolution: an application of Spatial Frequency Response (SFR) calculations to thermal imaging. According to ISO 12233, the SFR is defined as the integrated area below the Modulation Transfer Function (MTF) curve derived from the discrete Fourier transform of a camera image representing a knife-edge target. This concept is modified slightly for use as a quantitative analysis of the camera's performance by integrating the area between the MTF curve and the camera's characteristic nonuniformity, or noise floor, determined at room temperature. The resulting value, termed the Effective SFR, can then be compared with a spatial resolution value obtained from human perception testing of task-specific situations to determine whether the performance of a thermal imaging camera is acceptable. The testing procedures described herein are being developed as part of a suite of tests for possible inclusion in a performance standard on thermal imaging cameras for first responders.
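    The Effective SFR described above, the area between the MTF curve and the camera's noise floor, reduces to a numerical integral. A minimal sketch; the sampled MTF values and noise floor below are hypothetical:

```python
def effective_sfr(frequencies, mtf, noise_floor):
    """Trapezoidal integral of the area between an MTF curve and a
    constant noise floor, clipped at zero where the curve dips below
    the floor (a sketch of the 'Effective SFR' idea, not the paper's
    exact procedure)."""
    heights = [max(m - noise_floor, 0.0) for m in mtf]
    area = 0.0
    for i in range(1, len(frequencies)):
        df = frequencies[i] - frequencies[i - 1]
        area += 0.5 * (heights[i] + heights[i - 1]) * df  # trapezoid rule
    return area

# MTF sampled at spatial frequencies (cycles/pixel), falling from 1.0,
# with a room-temperature nonuniformity floor of 0.1.
freqs = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
mtf   = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1]
print(effective_sfr(freqs, mtf, 0.1))
```

A higher noise floor shrinks the integrated area, which is how the metric penalizes noisy cameras even when raw MTF is good.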

  15. The Lunar Student Imaging Project (LSIP): Bringing the Excitement of Lunar Exploration to Students Using LRO Mission Data

    NASA Astrophysics Data System (ADS)

    Taylor, W. L.; Roberts, D.; Burnham, R.; Robinson, M. S.

    2009-12-01

    In June 2009, NASA launched the Lunar Reconnaissance Orbiter (LRO), the first mission in NASA's Vision for Space Exploration, a plan to return to the Moon and then travel to Mars and beyond. LRO is equipped with seven instruments, including the Lunar Reconnaissance Orbiter Camera (LROC), a system of two narrow-angle cameras and one wide-angle camera controlled by scientists in the School of Earth and Space Exploration at Arizona State University. The orbiter will have a one-year primary mission in a 50 km polar orbit. The measurements from LROC will uncover much-needed information about potential landing sites and will help generate a meter-scale map of the lunar surface. With support from NASA Goddard Space Flight Center, the LROC Science Operations Center and the ASU Mars Education Program have partnered to develop an inquiry-based student program, the Lunar Student Imaging Project (LSIP). Based on the nationally recognized Mars Student Imaging Project (MSIP), LSIP uses cutting-edge NASA content and remote sensing data to involve students in authentic lunar exploration. This program offers students (grades 5-14) immersive experiences in which they can: 1) target images of the lunar surface, 2) interact with NASA planetary scientists, mission engineers, and educators, and 3) gain access to NASA curricula and materials developed to enhance STEM learning. Using a project-based learning model, students drive their own research and learn firsthand what it is like to do real planetary science. The LSIP curriculum contains a resource manual and program guide (including lunar feature identification charts, classroom posters, and a lunar exploration timeline) and a series of activities covering image analysis, relative age dating, and planetary comparisons. LSIP will be based upon the well-tested MSIP model and will encompass onsite as well as distance-learning components.

  16. Be Foil "Filter Knee Imaging" NSTX Plasma with Fast Soft X-ray Camera

    SciTech Connect

    B.C. Stratton; S. von Goeler; D. Stutman; K. Tritz; L.E. Zakharov

    2005-08-08

    A fast soft x-ray (SXR) pinhole camera has been implemented on the National Spherical Torus Experiment (NSTX). This paper presents observations and describes the Be foil Filter Knee Imaging (FKI) technique for reconstructions of an m/n = 1/1 mode on NSTX. The SXR camera has a wide-angle (28°) field of view of the plasma. The camera images nearly the entire diameter of the plasma and a comparable region in the vertical direction. SXR photons pass through a beryllium foil and are imaged by a pinhole onto a P47 scintillator deposited on a fiber-optic faceplate. An electrostatic image intensifier demagnifies the visible image by 6:1 to match it to the size of the charge-coupled device (CCD) chip. A pair of lenses couples the image to the CCD chip.

  17. Digital-image capture system for the IR camera used in Alcator C-Mod

    SciTech Connect

    Maqueda, R. J.; Wurden, G. A.; Terry, J. L.; Gaffke, J.

    2001-01-01

    An infrared imaging system, based on an Amber Radiance 1 infrared camera, is used at Alcator C-Mod to measure the surface temperatures in the lower divertor region. Due to the supra-linear dependence of the thermal radiation with temperature it is important to make use of the 12-bit digitization of the focal plane array of the Amber camera and not be limited by the 8 bits inherent to the video signal. It is also necessary for the image capture device (i.e., fast computer) to be removed from the high magnetic field environment surrounding the experiment. Finally, the coupling between the digital camera output and the capture device should be nonconductive for isolation purposes (i.e., optical coupling). A digital video remote camera interface (RCI) coupled to a PCI bus fiber optic interface board is used to accomplish this task. Using this PCI-RCI system, the 60 Hz images from the Amber Radiance 1 camera, each composed of 256x256 pixels and 12 bits/pixel, are captured by a Windows NT computer. An electrical trigger signal is given directly to the RCI module to synchronize the image stream with the experiment. The RCI can be programmed from the host computer to work with a variety of digital cameras, including the Amber Radiance 1 camera.

  18. Estimating Camera Pose from a Single Urban Ground-View Omnidirectional Image and a 2D Building Outline Map

    E-print Network

    Cham, Tat Jen

    Abstract: A framework is presented for estimating the pose of a camera based on images extracted … for humans, the system returned a top-30 ranking for correct matches out of 3600 camera pose hypotheses (0

  19. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.

  20. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  1. Correction method for fisheye image based on the virtual small-field camera.

    PubMed

    Huang, Fuyu; Shen, Xueju; Wang, Qun; Zhou, Bing; Hu, Wengang; Shen, Hongbin; Li, Li

    2013-05-01

    A distortion correction method for fisheye images is proposed based on a virtual small-field (SF) camera. A correction experiment is carried out, and a comparison is made between the proposed method and the conventional global correction method. From the experimental results, the image corrected by this method satisfies the law of perspective projection and looks as if it were captured by an SF camera with its optical axis pointing at the corrected center. This method eliminates the phenomena of center compression, edge stretch, and field loss, and image features become more distinct, which benefits subsequent target detection and information extraction. PMID:23632495
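    The paper's virtual-camera correction is not spelled out here, but the contrast it exploits between fisheye and perspective projection can be illustrated with the simple equidistant model (r = f·θ) versus the perspective law (r = f·tan θ). A minimal sketch; the equidistant assumption, focal length, and radii are illustrative, since a real lens needs a fitted distortion polynomial:

```python
import math

def fisheye_to_perspective_radius(r_fish, focal_px):
    """Map an equidistant-fisheye radial distance (r = f*theta) to the
    radius seen by an ideal perspective camera (r = f*tan(theta)).
    Assumes the simple equidistant model for illustration only."""
    theta = r_fish / focal_px  # incident angle recovered from the fisheye radius
    if theta >= math.pi / 2:
        raise ValueError("angle outside the virtual perspective camera's field")
    return focal_px * math.tan(theta)

# A point 300 px from the center of a fisheye image with f = 400 px moves
# outward when re-projected through the virtual perspective camera,
# undoing the edge compression noted in the abstract.
print(fisheye_to_perspective_radius(300.0, 400.0))
```

Because tan θ > θ for θ > 0, every off-center radius grows under the remap, which is exactly the "edge stretch vs. center compression" asymmetry the correction removes.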

  2. Using camera calibration to apply digital image correlation outside the laboratory

    NASA Astrophysics Data System (ADS)

    Liang, Zhenning; Yin, Bo; Dai, Xin; Mo, Jinqiu; Wang, Shigang

    2013-12-01

    An innovative single-camera two-dimensional digital image correlation (DIC) technique based on camera self-calibration is developed for use in the field, where specialized fixed setups are not practical. The technique only requires attaching a planar calibration cover to the specimen surface and capturing images of the specimen from different orientations before and after deformation. A camera calibration procedure allows the camera to be freely repositioned without fixed mounts or known configurations, after which displacements are calculated with DIC. Computer-simulated random speckle images are used to test the proposed technique, and good results are reported. Compared with classical techniques that require precise fixed setups, the proposed technique is easier to use and more flexible, advancing DIC beyond the laboratory into the real world.
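    The DIC matching step that follows calibration is typically a search for the subset displacement maximizing zero-normalized cross-correlation (ZNCC). A 1D toy sketch of that idea; the speckle values are hypothetical, and real DIC matches 2D subsets with subpixel refinement:

```python
import math
import statistics

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-length
    intensity subsets, the similarity measure at the core of DIC."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

def best_shift(reference, deformed, window, max_shift):
    """Integer displacement maximizing ZNCC of a speckle subset
    (a 1D toy version of the subset matching used in 2D DIC)."""
    subset = reference[:window]
    scores = {s: zncc(subset, deformed[s:s + window])
              for s in range(max_shift + 1)}
    return max(scores, key=scores.get)

# Toy speckle profile shifted right by 3 samples.
ref = [0, 2, 9, 4, 1, 7, 3, 8, 2, 5, 0, 1]
defo = [5, 1, 4] + ref
print(best_shift(ref, defo, window=8, max_shift=4))  # -> 3
```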

  3. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera UCXp was introduced into the Chinese market in 2008; its images consist of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages compared with other cameras of its generation: its multiple lenses are exposed almost at the same time, and it has no oblique lenses. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its results.

  4. Theta rotation and serial registration of light microscopical images using a novel camera rotating device.

    PubMed

    Duerstock, Bradley S; Cirillo, John; Rajwa, Bartek

    2010-06-01

    An electromechanical video camera coupler was developed to rotate a light microscope field of view (FOV) in real time without the need to physically rotate the stage or specimen. The device, referred to as the Camera Thetarotator, rotated microscopical views by up to 240 degrees to help microscopists orient specimens within the FOV prior to image capture. The Camera Thetarotator eliminated the effort and artifacts involved in rotating photomicrographs with conventional graphics software. It could also be used to semimanually register a dataset of histological sections for three-dimensional (3D) reconstruction by superimposing the transparent, real-time FOV onto the previously captured section in the series. When compared to Fourier-based software registration, alignment of serial sections using the Camera Thetarotator was more exact, resulting in more accurate 3D reconstructions with no computer-generated null space. When software-based registration was performed after prealigning sections with the Camera Thetarotator, registration was further enhanced. The Camera Thetarotator expanded the possibilities of microscopical viewing and digital photomicrography and provided a novel, accurate registration method for 3D reconstruction. It would also be useful for performing automated microscopical functions necessary for telemicroscopy, high-throughput image acquisition and analysis, and other light microscopy applications. PMID:20233497

  5. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    SciTech Connect

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Sims, G.; Toerne, K.; Rodenburg, K.; Croft, M.; Haugh, M. J.; Charest, M. R.; Romano, E. D.; Jacoby, K. D.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  6. Proposal for real-time terahertz imaging system with palm-size terahertz camera and compact quantum cascade laser

    E-print Network

    Oda, Naoki

    This paper describes a real-time terahertz (THz) imaging system, using the combination of a palm-size THz camera with a compact quantum cascade laser (QCL). The THz camera contains a 320x240 microbolometer focal plane array ...

  7. Suite of proposed imaging performance metrics and test methods for fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Lock, Andrew; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements for TIC as used by the fire service. This standard will include performance requirements for TIC design robustness and image quality. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods based on the harsh operating environment and limitations of use particular to the fire service has been proposed for inclusion in the standard. The performance metrics include large area contrast, effective temperature range, spatial resolution, nonuniformity, and thermal sensitivity. Test methods to measure TIC performance for these metrics are in various stages of development. An additional procedure, image recognition, has also been developed to facilitate the evaluation of TIC design robustness. The pass/fail criteria for each of these imaging performance metrics are derived from perception tests in which image contrast, brightness, noise, and spatial resolution are degraded to the point that users can no longer consistently perform tasks involving TIC due to poor image quality.

  8. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative: visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  9. Analysis of polarimetric image by full stokes vector imaging camera for retrieval of target polarization in underwater environment

    NASA Astrophysics Data System (ADS)

    Gu, Yalong; Carrizo, Carlos; El-Habashi, Ahmed; Gilerson, Alexander

    2015-05-01

    A polarized image of the underwater light field contains rich information about the water and the targets, both strongly affected by the water's inherent optical properties. We present a comprehensive analysis of the polarimetric images of a manmade underwater target with known polarization properties, acquired by a full Stokes vector imaging camera in an underwater environment. The effects of the camera's parameters, such as numerical aperture and orientation, are evaluated. With the knowledge acquired in the analysis of this forward polarimetric imaging process, a method for retrieval of the inherent optical properties of the water and the target polarization is explored.

  10. Wide-area surveillance with multiple cameras using distributed compressive imaging

    NASA Astrophysics Data System (ADS)

    Huff, Christopher; Muise, Robert

    2011-04-01

    In order to image a large area with a required resolution, a traditional camera would have to scan a smaller field-of-view until the entire area of interest is covered, thus losing persistence. Using a large sensor would result in high bandwidth data streams along with expensive and heavy equipment. Ideally, one would like to sense (or measure) a large number of pixels with a very limited set of measurements. In such a scenario the theory of compressive sensing may be put to use. A single sensor compressive imager for the wide area surveillance problem has been postulated and shown to be effective in detecting moving targets in a wide area. In this paper we look at the compressive imaging problem by assuming we have multiple cameras at our disposal. We show that we can get significant benefit in image reconstruction from multiple cameras measuring overlapped fields-of-view without any intra-camera communications and under significant transmission bandwidth constraints. We also show analysis and experiments which suggest that we can register these multiple cameras given only the random projective measurements from each camera.
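The compressive-sensing idea described above (measure a large scene with far fewer random projections than pixels, then reconstruct) can be sketched with a toy 1D example. This is a minimal illustration, not the paper's method: the sparse scene, measurement counts, and the use of ISTA (iterative soft-thresholding) as the reconstruction algorithm are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene": n pixels, only k of them active (e.g. moving targets).
n, m, k = 120, 60, 4
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)

# Random projection measurement matrix: m compressive measurements of n pixels.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.01, iters=500):
    """Iterative soft-thresholding: solves min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L           # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage step
    return x

x_hat = ista(A, y)
```

With only half as many measurements as pixels, the sparse scene is recovered almost exactly, which is the effect the abstract exploits to avoid scanning or heavy sensors.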

  11. Optimal geometrical configuration of a double-scattering compton camera for maximum imaging resolution and sensitivity

    NASA Astrophysics Data System (ADS)

    Seo, Hee; Lee, Se Hyung; Kim, Chan Hyeong; An, So Hyun; Lee, Ju Hahn; Lee, Chun Sik

    2008-06-01

    A novel type of Compton camera, called a double-scattering Compton imager (DOCI), is under development for nuclear medicine and molecular imaging applications. Two plane-type position-sensitive semiconductor detectors are employed as the scatterer detectors, and a 3 in. × 3 in. cylindrical NaI(Tl) scintillation detector is employed as the absorber detector. This study determined the optimal geometrical configuration of these component detectors to maximize the performance of the Compton camera in terms of imaging resolution and sensitivity. To that end, the Compton camera was simulated very realistically with the GEANT4 detector simulation toolkit, including various detector characteristics such as energy resolution, spatial resolution, energy discrimination, and Doppler energy broadening. According to our simulation results, the Compton camera is expected to show its maximum performance when the two scatterer detectors are positioned in parallel with ~8 cm of separation. The Compton camera will also show maximum performance when the gamma-ray energy is about 500 keV, which suggests that the Compton camera is a suitable device to image the distribution of positron emission tomography (PET) isotopes in the human body.
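The event reconstruction underlying any Compton camera rests on Compton kinematics: the scattering angle follows from the incident energy and the energy deposited in the scatterer. A minimal sketch (the function name and the 60-degree example are illustrative, not from the record):

```python
import math

ME_C2 = 510.999  # electron rest energy, keV

def scatter_angle(e_in, e_dep):
    """Compton scattering angle (rad) from the incident photon energy e_in
    and the energy e_dep deposited in the scatterer detector (both keV):
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E), with E' = E - e_dep."""
    e_out = e_in - e_dep                          # energy of the scattered photon
    cos_t = 1.0 - ME_C2 * (1.0 / e_out - 1.0 / e_in)
    return math.acos(max(-1.0, min(1.0, cos_t)))  # clamp for rounding safety

# Round trip: a 511 keV photon scattering at 60 degrees leaves the scatterer with
# E' = E / (1 + (E / me*c^2) * (1 - cos 60)), about 340.7 keV.
e_in = 511.0
e_out = e_in / (1 + (e_in / ME_C2) * (1 - math.cos(math.radians(60))))
theta = scatter_angle(e_in, e_in - e_out)
```

Each measured event then constrains the source to a cone of half-angle theta about the scatter direction; intersecting many cones yields the image.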

  12. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    SciTech Connect

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and fast-readout, fast-shuttered cameras is discussed. Their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high repetition rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including Focal Plane Arrays, gated Image Intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  13. MOSAIC. [Mosaicked Optical Self-scanned Array Imaging Camera for high resolution UV space astronomy missions

    NASA Technical Reports Server (NTRS)

    Williams, J. T.; Weistrop, D.

    1984-01-01

    The Mosaicked Optical Self-scanned Array Imaging Camera (MOSAIC) is a 50 sq cm active-area detector system encompassing 5.76 million picture elements. This camera is being developed for use in astronomical instrumentation requiring large-area imaging detectors with high-resolution photon counting capability in the space ultraviolet. This paper gives a descriptive outline of the MOSAIC camera system including: the 100 mm diameter microchannel plate intensifier, the 3 x 3 array of 800 x 800 pixel charge coupled devices, the signal processing and buffer storage subsystem, the electronic support and control subsystem, and the data processing subsystem. Performance characteristics of the camera and its subsystems are presented, based on system analysis.

  14. The Nimbus image dissector camera system - An evaluation of its meteorological applications.

    NASA Technical Reports Server (NTRS)

    Sabatini, R. R.

    1971-01-01

    Brief description of the electronics and operation of the Nimbus image dissector camera system (IDCS). The geometry and distortions of the IDCS are compared to the conventional AVCS camera on board the operational ITOS and ESSA satellites. The unique scanning of the IDCS provides for little distortion of the image, making it feasible to use a strip grid for the IDCS received in real time by local APT stations. The dynamic range of the camera favors the white end (high reflectance) of the gray scale. Thus, the camera is good for detecting cloud structure and ice features through brightness changes. Examples of cloud features, ice, and snow-covered land are presented. Land features, on the other hand, show little contrast. The 2600 x 2600 km coverage by the IDCS is adequate for the early detection of weather systems which may affect the local area. An example of IDCS coverage obtained by an APT station in midlatitudes is presented.

  15. Microchannel plate pinhole camera for 20 to 100 keV x-ray imaging

    SciTech Connect

    Wang, C.L.; Leipelt, G.R.; Nilson, D.G.

    1984-10-03

    We present the design and construction of a sensitive pinhole camera for imaging suprathermal x-rays. Our device consists of four filtered pinholes and a microchannel plate electron multiplier for x-ray detection and signal amplification. We report successful imaging of 20, 45, 70, and 100 keV x-ray emissions from fusion targets at our Novette laser facility. Such imaging reveals features of the transport of hot electrons and provides views deep inside the target.

  16. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and in specific optics tailored to each camera's requirements.

  17. Camera simulation engine enables efficient system optimization for super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

    2012-02-01

    Quantitative fluorescent imaging requires optimization of the complete optical system, from the sample to the detector. Such considerations are especially true for precision localization microscopy such as PALM and (d)STORM where the precision of the result is limited by the noise in both the optical and detection systems. Here, we present a Camera Simulation Engine (CSE) that allows comparison of imaging results from CCD, CMOS and EM-CCD cameras under various sample conditions and can accurately validate the quality of precision localization algorithms and camera performance. To achieve these results, the CSE incorporates the following parameters: 1) Sample conditions including optical intensity, wavelength, optical signal shot noise, and optical background shot noise; 2) Camera specifications including QE, pixel size, dark current, read noise, EM-CCD excess noise; 3) Camera operating conditions such as exposure, binning and gain. A key feature of the CSE is that, from a single image (either real or simulated "ideal") we generate a stack of statistically realistic images. We have used the CSE to validate experimental data showing that certain current scientific CMOS technology outperforms EM-CCD in most super-resolution scenarios. Our results support using the CSE to efficiently and methodically select cameras for quantitative imaging applications. Furthermore, the CSE can be used to robustly compare and evaluate new algorithms for data analysis and image reconstruction. These uses of the CSE are particularly relevant to super-resolution precision localization microscopy and provide a faster, simpler and more cost effective means of system optimization, especially camera selection.
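The key feature described above, generating a stack of statistically realistic frames from a single ideal image, can be sketched with a generic camera noise model. This is an illustrative sketch only: the QE, dark current, read noise, gain, and frame count are assumed values, not parameters of the actual CSE.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_camera(photons, qe=0.7, dark_e=2.0, read_noise_e=1.5,
                    gain=0.5, full_well=30000, n_frames=1000):
    """From an 'ideal' expected-photon image, generate a stack of
    statistically realistic frames: Poisson shot noise on signal plus dark
    current, Gaussian read noise, clipping at full well, then gain and
    quantization to digital numbers (DN)."""
    shape = (n_frames,) + photons.shape
    electrons = rng.poisson(photons * qe + dark_e, size=shape).astype(float)
    electrons += rng.normal(0.0, read_noise_e, size=shape)
    electrons = np.clip(electrons, 0, full_well)
    return np.round(electrons * gain)

ideal = np.full((8, 8), 100.0)   # flat field, 100 expected photons per pixel
stack = simulate_camera(ideal)
# Mean signal should be about (100 * 0.7 + 2.0) * 0.5 = 36 DN per pixel.
```

Feeding such stacks to a localization algorithm lets one compare sensor types (swap in EM-CCD excess noise or CMOS per-pixel read noise) without new acquisitions, which is the comparison strategy the abstract describes.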

  18. Optical characterization of UV multispectral imaging cameras for SO2 plume measurements

    NASA Astrophysics Data System (ADS)

    Stebel, K.; Prata, F.; Dauge, F.; Durant, A.; Amigo, A.

    2012-04-01

    Only a few years ago, spectral imaging cameras for SO2 plume monitoring were developed for remote sensing of volcanic plumes. We describe the development from a first camera using a single filter in the absorption band of SO2 to more advanced systems using several filters and an integrated spectrometer. The first system was based on the Hamamatsu C8484 UV camera (1344 x 1024 pixels) with high quantum efficiency in the UV region from 280 nm onward. At the heart of the second UV camera system, EnviCam, is a cooled Alta U47 camera, equipped with two on-band (310 and 315 nm) and two off-band (325 and 330 nm) filters. The third system again utilizes the uncooled Hamamatsu camera for faster sampling (~10 Hz) and a four-position filter wheel equipped with two 10 nm filters centered at 310 and 330 nm, a UV broadband view, and a blackened plate for dark-current measurement. Both cameras have been tested with lenses of different focal lengths. A co-aligned spectrometer provides a ~0.3 nm resolution spectrum within the field of view of the camera. We describe the ground-based imaging camera systems developed and utilized at our Institute. Custom-made cylindrical quartz calibration cells, 50 mm in diameter to cover the entire field of view of the camera optics, are filled with various amounts of gaseous SO2 (typically between 100 and 1500 ppm•m). They are used for calibration and characterization of the cameras in the laboratory. We report on the procedures for monitoring and analyzing SO2 path-concentration and fluxes, including a comparison of the calibration in the atmosphere using the SO2 cells versus the SO2 retrieval from the integrated spectrometer. The first UV cameras have been used to monitor ship emissions (Ny-Ålesund, Svalbard and Genova, Italy). The second generation of cameras was first tested for industrial stack monitoring during a field campaign close to the Rovinari (Romania) power plant in September 2010, revealing very high SO2 emissions (> 1000 ppm•m). The second-generation cameras are now used by students from several universities in Romania. The newest system has been tested for volcanic plume monitoring at Turrialba, Costa Rica in January 2011, at Merapi volcano, Indonesia in February 2011, at Lascar volcano, Chile in July 2011, and at Etna/Stromboli (Italy) in November 2011. Retrievals from some of these campaigns will be presented.
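The calibration-cell procedure described above is commonly implemented as a differential-absorbance retrieval: an on-band/off-band image pair gives an apparent absorbance, and the cells of known SO2 column convert absorbance to ppm·m. The sketch below assumes this standard two-filter scheme; the cell absorbance values and the perfectly linear calibration are hypothetical, not measured data from the record.

```python
import numpy as np

def apparent_absorbance(i_on, i_off, bg_on, bg_off):
    """Differential apparent absorbance from on-band (e.g. 310 nm) and
    off-band (e.g. 330 nm) images; bg_* are plume-free background
    (clear-sky) intensities in the same bands."""
    return -np.log(i_on / bg_on) + np.log(i_off / bg_off)

# Calibration: cells of known SO2 column (ppm.m) vs measured absorbance.
# Both columns of numbers below are illustrative placeholders.
cells = np.array([0.0, 100.0, 500.0, 1000.0, 1500.0])   # ppm.m
absorb = np.array([0.0, 0.021, 0.105, 0.210, 0.315])    # hypothetical
slope = np.polyfit(absorb, cells, 1)[0]                  # ppm.m per unit absorbance

def so2_column(i_on, i_off, bg_on, bg_off):
    """Convert an image pair to SO2 column density (ppm.m)."""
    return slope * apparent_absorbance(i_on, i_off, bg_on, bg_off)
```

Integrating the retrieved column across the plume cross-section and multiplying by plume speed then yields the flux, as in the monitoring procedures mentioned above.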

  19. Model-based image reconstruction of chemiluminescence using a plenoptic 2.0 camera

    E-print Network

    Fessler, Jeffrey A.

    Overall goal: model-based image reconstruction (MBIR) of a 3D chemiluminescence pattern x from plenoptic camera measurements y. MBIR components include a 3D object model in which the image voxels are represented by basis coefficients. Motivating application: combustion in a transparent engine cylinder.

  20. A high-resolution airborne four-camera imaging system for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and testing of an airborne multispectral digital imaging system for remote sensing applications. The system consists of four high resolution charge coupled device (CCD) digital cameras and a ruggedized PC equipped with a frame grabber and image acquisition software. T...

  1. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian (Livermore, CA); Vetter, Kai M. (Alameda, CA)

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  2. Flexible camera applications of an advanced uncooled microbolometer thermal imaging core

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Roy N.; Pongratz, Simon; Breen, Tom; Wickman, Heather; Klug, Ron; Gess, Aaron; Hays, John; Bastian, Jonathan; Hall, Greg; Arion, Tim; Owens, John; Siviter, David

    2004-04-01

    Since its introduction less than a year ago, many camera products and end-user applications have benefited from upgrading to the revolutionary BAE Systems MicroIRTM SCC500TM Standard Camera Core. This flexible, multi-resolution, uncooled, vanadium oxide (VOx) microbolometer-based imaging engine is delivering higher performance at a lower price to diverse applications with more unique requirements than previous generations of engines. These applications include firefighting, surveillance, security, navigation, weapon sights, missiles, space, automotive, and many others. This paper highlights several cameras, systems, and their applications to illustrate some of the real-world uses and benefits of these products.

  3. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
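The core geometric step, estimating a landmark's 3D position from its pixel observations and the current camera poses, is classically done by linear (DLT) triangulation. The sketch below is a generic illustration of that step, not the LMDB Builder's "optimal ray projection" method; the camera intrinsics and baseline are made-up values.

```python
import numpy as np

def triangulate(Ps, xs):
    """Linear (DLT) triangulation: given 3x4 projection matrices Ps and
    matching pixel observations xs = [(u, v), ...], build the homogeneous
    system u*P[2]-P[0] = 0, v*P[2]-P[1] = 0 per view and solve via SVD."""
    rows = []
    for P, (u, v) in zip(Ps, xs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                     # right singular vector of smallest value
    return X[:3] / X[3]            # dehomogenize to a 3D point

# Two synthetic cameras: identity pose and a 1 m baseline along x (assumed).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

Alternating this triangulation step with pose refinement from the recovered 3D points mirrors the iterative loop the abstract describes.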

  4. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    SciTech Connect

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° fields of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  5. Review of the characteristics of 384x288 pixel THz camera for see-through imaging

    NASA Astrophysics Data System (ADS)

    Marchese, Linda; Terroux, Marc; Genereux, Francis; Tremblay, Bruno; Bolduc, Martin; Bergeron, Alain

    2013-10-01

    Terahertz is a field in constant expansion, with multiple foreseen applications including see-through imaging. To develop deployable systems, real-time two-dimensional cameras are needed rather than monopixel detectors or linear arrays that require mechanical scanning systems. INO has recently developed a real-time (video rate) 384x288 THz camera exhibiting excellent sensitivity and low noise levels. The core of the THz imager is the 35 µm pitch detector array, which is based on INO's uncooled VOx microbolometer technology and fabricated in INO's clean room. A standard ceramic package is used for final packaging. The detector FPA is sealed with a high resistivity float zone silicon (HRFZ-Si) window carrying an anti-reflective coating of thick Parylene, the thickness of which depends on the required optimization wavelength. The FPA is mounted on an INO IRXCAM core, giving a passive THz camera assembly. The additional THz objective is a refractive 44 mm focal length F/1 THz lens. In this paper, a review of the characteristics of the THz camera is presented. The sensitivity of the camera at various THz wavelengths is given, along with examples of the resolution obtained with the IRXCAM-384-THz camera core. See-through imaging results are also presented.

  6. Methods for a fusion of optical coherence tomography and stereo camera image data

    NASA Astrophysics Data System (ADS)

    Bergmeier, Jan; Kundrat, Dennis; Schoob, Andreas; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-03-01

    This work investigates the combination of Optical Coherence Tomography and two cameras observing a microscopic scene. Stereo vision provides realistic images but is limited in terms of penetration depth. Optical Coherence Tomography (OCT) enables access to subcutaneous structures, but 3D-OCT volume data do not give the surgeon a familiar view. Extending the stereo camera setup with OCT imaging combines the benefits of both modalities. In order to provide the surgeon with a convenient integration of OCT into the vision interface, we present an automated image processing analysis of OCT and stereo camera data, as well as combined imaging as an augmented reality visualization. To this end, we address OCT image noise, perform segmentation, and develop suitable registration objects and methods. The registration between stereo camera and OCT results in a Root Mean Square error of 284 µm as the average of five measurements. The presented methods are fundamental for the fusion of both imaging modalities. Augmented reality is shown as an application of the results. Further developments will lead to fused visualization of subcutaneous structures, as information from OCT images, within stereo vision.

  7. Demonstration of three-dimensional imaging based on handheld Compton camera

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Taya, T.; Kabuki, S.

    2015-11-01

    Compton cameras are potential detectors that are capable of performing measurements across a wide energy range for medical imaging applications, such as in nuclear medicine and ion beam therapy. In previous work, we developed a handheld Compton camera to identify environmental radiation hotspots. This camera consists of a 3D position-sensitive scintillator array and multi-pixel photon counter arrays. In this work, we reconstructed the 3D image of a source via list-mode maximum likelihood expectation maximization and demonstrated the imaging performance of the handheld Compton camera. Based on both the simulation and the experiments, we confirmed that multi-angle data acquisition of the imaging region significantly improved the spatial resolution of the reconstructed image in the direction vertical to the detector. The experimental spatial resolutions in the X, Y, and Z directions at the center of the imaging region were 6.81 mm ± 0.13 mm, 6.52 mm ± 0.07 mm and 6.71 mm ± 0.11 mm (FWHM), respectively. Results of multi-angle data acquisition show the potential of reconstructing 3D source images.
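The list-mode MLEM reconstruction named in the abstract follows the standard emission-tomography EM update; in list mode the system matrix simply has one row per recorded event. The following is a minimal dense-matrix sketch of that update on a made-up toy system (the detector/voxel sizes and random system matrix are assumptions, not the handheld camera's model):

```python
import numpy as np

rng = np.random.default_rng(2)

def mlem(A, y, iters=1000):
    """Maximum-likelihood EM for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)).
    A[i, j] is the probability that an emission in voxel j produces
    measurement (or list-mode event) i."""
    x = np.ones(A.shape[1])                       # flat, positive start image
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x                              # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Tiny synthetic system: 20 detector bins viewing 8 voxels.
A = rng.random((20, 8))
x_true = rng.random(8) * 10
y = A @ x_true                                    # noise-free projections
x_hat = mlem(A, y)
```

The multiplicative update keeps the image nonnegative by construction, and with consistent data the reprojection A @ x_hat converges to the measurements y.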

  8. Three-dimensional imaging of carbonyl sulfide and ethyl iodide photodissociation using the pixel imaging mass spectrometry camera

    NASA Astrophysics Data System (ADS)

    Amini, K.; Blake, S.; Brouard, M.; Burt, M. B.; Halford, E.; Lauer, A.; Slater, C. S.; Lee, J. W. L.; Vallance, C.

    2015-10-01

    The Pixel Imaging Mass Spectrometry (PImMS) camera is used in proof-of-principle three-dimensional imaging experiments on the photodissociation of carbonyl sulfide and ethyl iodide at wavelengths around 230 nm and 245 nm, respectively. Coupling the PImMS camera with DC-sliced velocity-map imaging allows the complete three-dimensional Newton sphere of photofragment ions to be recorded on each laser pump-probe cycle with a timing precision of 12.5 ns, yielding velocity resolutions along the time-of-flight axis of around 6%-9% in the applications presented.

  9. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distressing feelings and to avoid potential drug-induced diseases, we attempted to image the retina with a dilated pupil and frozen accommodation without using drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-stared system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-stared system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-stared system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the crucial spatial fidelity to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, which is close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.

  10. Validation of 3D surface imaging in breath-hold radiotherapy for breast cancer: one central camera unit versus three camera units

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Betgen, Anja; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

    2013-03-01

    In this work we investigated the benefit of using two lateral camera units in addition to a central camera unit for 3D surface imaging for image guidance in deep-inspiration breath-hold (DIBH) radiotherapy, by comparison with cone-beam computed tomography (CBCT). Ten patients who received DIBH radiotherapy after breast-conserving surgery were included. The performance of surface imaging using one and three camera units was compared to using CBCT for setup verification. Breast-surface registrations were performed for CBCT as well as for 3D surfaces, captured concurrently with CBCT, to the planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, an assessment of the group mean, systematic error, random error, and 95% limits of agreement was made. Correlations between derived surface-imaging [one camera unit; three camera units] and CBCT setup errors were: R2=[0.67;0.75], [0.76;0.87], [0.88;0.91] in the left-right, cranio-caudal, and anterior-posterior directions, respectively. Group mean, systematic, and random errors were slightly smaller (sub-millimeter differences) and the limits of agreement were 0.10 to 0.25 cm tighter when using three camera units compared with one. For the majority of the data, the use of three camera units instead of one resulted in setup errors more similar to the CBCT-derived setup errors in the cranio-caudal and anterior-posterior directions (p<0.01, Wilcoxon signed-ranks test). This study shows a better correlation and agreement between 3D surface imaging and CBCT when three camera units are used instead of one, and further outlines the conditions under which the benefit of using three camera units is significant.
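The agreement statistics used above (group mean of the differences as systematic error, their SD as random error, and 95% limits of agreement) follow the standard Bland-Altman recipe and can be sketched directly. The two measurement arrays below are hypothetical numbers for illustration, not the study's data:

```python
import numpy as np

def agreement_stats(a, b):
    """Compare paired setup-error measurements from two modalities:
    returns the mean difference (systematic error), SD of the differences
    (random error), and the 95% limits of agreement mean +/- 1.96*SD."""
    d = np.asarray(a) - np.asarray(b)
    m = d.mean()
    s = d.std(ddof=1)                     # sample SD of the differences
    return m, s, (m - 1.96 * s, m + 1.96 * s)

# Hypothetical paired setup errors in one direction, in cm.
surface = np.array([0.12, -0.05, 0.20, 0.01, -0.10, 0.08])
cbct    = np.array([0.10, -0.02, 0.15, 0.03, -0.12, 0.05])
mean_d, sd_d, (lo, hi) = agreement_stats(surface, cbct)
```

Tighter limits (lo, hi) for the three-camera configuration than for the single-camera one is exactly the comparison the abstract reports.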

  11. Enhancing swimming pool safety by the use of range-imaging cameras

    NASA Astrophysics Data System (ADS)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are now being integrated; however, these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, which allow swimmers at the surface to be distinguished from drowning people underwater, while keeping a large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent to visible light but less transparent to infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system, which uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to its timing, our Time-of-Flight camera is theoretically able to minimize the influence of reflections from a partially reflecting surface. Current commercial cameras can be improved by combining a post-acquisition filter that compensates for the perturbations with a light source of shorter wavelength to enlarge the depth range. As a result, we conclude that low-cost range imagers can increase swimming pool safety through the addition of a post-processing filter and a different light source.
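    The distance computation behind pulsed Time-of-Flight is simply half the round-trip time multiplied by the speed of light in the medium. A minimal sketch; the refractive-index handling for underwater targets is our illustrative addition, not the authors' algorithm:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s, refractive_index=1.0):
    """Distance to a target from a pulsed time-of-flight measurement.
    Light travels to the target and back, so the one-way path is half
    the round trip; in water (n ~ 1.33) light propagates more slowly."""
    return (C / refractive_index) * round_trip_s / 2.0

d_air = tof_distance(10e-9)          # a 10 ns round trip is ~1.5 m in air
d_water = tof_distance(10e-9, 1.33)  # same delay, shorter path underwater
```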

  12. Dynamic imaging with high resolution time-of-flight pet camera - TOFPET I

    SciTech Connect

    Mullani, N.A.; Bristow, D.; Gaeta, J.; Gould, K.L.; Hartz, R.K.; Philipe, E.A.; Wong, W.H.; Yerian, K.

    1984-02-01

    One of the major design goals of the TOFPET I positron camera was to produce a high-resolution whole-body positron camera capable of dynamically imaging an organ such as the heart. TOFPET I is now nearing completion, and preliminary images have been obtained to assess its dynamic and three-dimensional imaging capabilities. Multiple gated images of the uptake of rubidium in the dog heart and three-dimensional surface displays of the distribution of Rubidium-82 in the myocardium have been generated to demonstrate the three-dimensional imaging properties. Fast dynamic images of the first pass of a bolus of radiotracer through the heart have been collected with a 4-second integration time and 50% gating (2-second equivalent integration time) with 18 mCi of Rb-82.

  13. A 5-18 micron array camera for high-background astronomical imaging

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.; Folz, Walter C.; Woods, Lawrence A.; Varosi, Frank

    1992-01-01

    A new infrared array camera system using a Hughes/SBRC 58 × 62 pixel hybrid Si:Ga array detector has been successfully applied to high-background 5-18-micron astronomical imaging observations. The off-axis reflective optical system minimizes thermal background loading and produces diffraction-limited images with negligible spatial distortion. The noise equivalent flux density (NEFD) of the camera at 10 microns on the 3.0-m NASA/Infrared Telescope Facility with broadband interference filters and 0.26 arcsec pixels is NEFD = 0.01 Jy/√min per pixel (1σ), and it operates at a frame rate of 30 Hz with no compromise in observational efficiency. The electronic and optical design of the camera, its photometric characteristics, examples of observational results, and techniques for successful array imaging in a high-background astronomical application are discussed.

  14. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto-tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
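    A common starting point for mapping raw microbolometer counts to scene temperature is a two-point blackbody calibration. The sketch below is a simplified linear model under that assumption (real systems, as the abstract notes, also compensate for the camera's own operating temperature); all numeric values are illustrative:

```python
def make_thermography_lut(counts_cold, t_cold, counts_hot, t_hot):
    """Linear two-point calibration: map raw microbolometer counts to
    scene temperature (deg C) using two blackbody reference readings."""
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)
    offset = t_cold - gain * counts_cold
    return lambda counts: gain * counts + offset

# Illustrative calibration points: 2000 counts at 20 C, 6000 at 120 C
to_temp = make_thermography_lut(2000, 20.0, 6000, 120.0)
```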

  15. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

    Camera positioning and orientation is important to applications in domains such as transportation since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used when a vehicle reaches the image capture zone to trigger the image capture system. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE) can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. 
Thus for defining an output to be optimized for the designed experiment, a human defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of the vehicle. The DOE-SIR method was exercised for determining the optimal camera position and orientation for viewing vehicle rear seats over a variety of vehicle types. The resulting camera geometry was used on public roadway image capture resulting in over 95% acceptable rear seat images for human viewing.

  16. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness), and cost. The miniature cameras incorporating high-resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high-quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of imaging-module parameters on the EDOF range were analyzed for a family of high-resolution CMOS modules. The parameters include various optical properties of the imaging lens and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.

  17. Development of a handheld fluorescence imaging camera for intraoperative sentinel lymph node mapping.

    PubMed

    Szyc, Łukasz; Bonifer, Stefanie; Walter, Alfred; Jagemann, Uwe; Grosenick, Dirk; Macdonald, Rainer

    2015-05-01

    We present a compact fluorescence imaging system developed for real-time sentinel lymph node mapping. The device uses two near-infrared wavelengths to record fluorescence and anatomical images with a single charge-coupled device camera. Experiments on lymph node and tissue phantoms confirmed that the amount of dye in superficial lymph nodes can be better estimated due to the absorption correction procedure integrated in our device. Because of the camera head's small size and low weight, all accessible regions of tissue can be reached without the need for any adjustments. PMID:25585232

  18. Three-dimensional camera capturing 360° directional image for natural three-dimensional display

    NASA Astrophysics Data System (ADS)

    Hanuma, Hisaki; Takaki, Yasuhiro

    2005-11-01

    Natural three-dimensional images can be produced by displaying a large number of directional images with directional rays. Directional images are orthographic projections of a three-dimensional object and are displayed with nearly parallel rays. We have already constructed 64-directional, 72-directional, and 128-directional natural three-dimensional displays whose angular sampling pitches of the horizontal ray direction are 0.34°, 0.38°, and 0.25°, respectively. In this study we constructed a rotating camera system to capture 360° three-dimensional information of an actual object. An object is located at the center of rotation, and a camera mounted at the end of an arm is rotated around the object. A large number of images are captured from different horizontal directions with a small rotation-angle interval. Because the captured images are perspective projections of the object, directional images are generated by interpolating the captured images. The 360° directional image consists of 1,059, 947, and 1,565 directional images corresponding to the three different displays. When the number of captured images is about 4,000, the directional images can be generated without image interpolation, so that correct directional images are obtained. The degradation of the generated 360° directional image depending on the number of captured images is evaluated. The results show that the PSNR is higher than 35 dB when more than 400 images are captured. With the 360° directional image, the three-dimensional images can be interactively rotated on the three-dimensional display. The data sizes of the 360° directional images are 233 MB, 347 MB, and 344 MB, respectively. Because the directional images for adjacent horizontal directions are very similar, the 360° directional image can be compressed using conventional movie compression algorithms. We used an H.264 codec and achieved a compression ratio of 1.5% with PSNR > 35 dB.
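    The PSNR figure used above to judge the generated directional images follows from the mean squared error against a reference image. A minimal sketch over flat pixel sequences (the helper name is ours):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images,
    given as flat sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```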

  19. A mobile phone-based retinal camera for portable wide field imaging.

    PubMed

    Maamari, Robi N; Keenan, Jeremy D; Fletcher, Daniel A; Margolis, Todd P

    2014-04-01

    Digital fundus imaging is used extensively in the diagnosis, monitoring and management of many retinal diseases. Access to fundus photography is often limited by patient morbidity, high equipment cost and shortage of trained personnel. Advancements in telemedicine methods and the development of portable fundus cameras have increased the accessibility of retinal imaging, but most of these approaches rely on separate computers for viewing and transmission of fundus images. We describe a novel portable handheld smartphone-based retinal camera capable of capturing high-quality, wide field fundus images. The use of the mobile phone platform creates a fully embedded system capable of acquisition, storage and analysis of fundus images that can be directly transmitted from the phone via the wireless telecommunication system for remote evaluation. PMID:24344230

  20. A CCD-Based Video Camera For Medical X-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Snoeren, Rudolph M.

    1989-05-01

    Using an anamorphic imaging technique, the solid state image sensor can replace the vacuum pick-up tube in medical X-ray diagnostics, at least for the standard quality fluoroscopic application. A video camera is described in which, by optical compression, the circular output window of an image intensifier is imaged as an ellipse on the 3:4 image rectangle of a CCD sensor. The original shape is restored by electronics later. Information transfer is maximized this way: the total entrance field of the image intensifier is available, at the same time the maximum number of pixels of a high resolution CCD image sensor is used, there is also an increase of the sensor illuminance by 4/3 and the aliasing effects are minimized. Imaging descriptors such as Modulation transfer, noise generation and transfer are given in comparison with a Plumbicon-based camera, as well as shape transfer, luminance transfer and aliasing. A method is given to isolate the basic noise components of the sensor. A short description of the camera optics and electronics is given.

  1. Shading correction of camera captured document image with depth map information

    NASA Astrophysics Data System (ADS)

    Wu, Chyuan-Tyng; Allebach, Jan P.

    2015-01-01

    Camera modules have become more popular in consumer electronics and office products. As a consequence, people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives. However, captured document images easily pick up undesired shading through the camera, and this non-uniformity may degrade the readability of the contents. In order to mitigate this artifact, some solutions have been developed, but most of them are only suitable for particular types of documents. In this paper, we introduce a content-independent and shape-independent method that lessens the shading effects in captured document images. We want to reconstruct the image such that the result looks like a document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the document surface and a look-up-table strategy. We first discuss the model and the assumptions that we used for the approach. Then, the process of creating and utilizing the look-up table is described. We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a 2D image of the object. Some experimental results are presented to show the effectiveness of our method. Both flat and curved-surface document examples are included.

  2. GNSS Carrier Phase Integer Ambiguity Resolution with Camera and Satellite images

    NASA Astrophysics Data System (ADS)

    Henkel, Patrick

    2015-04-01

    Ambiguity resolution is the key to high-precision position and attitude determination with GNSS. However, ambiguity resolution for kinematic receivers becomes challenging in environments with substantial multipath, limited satellite availability, and erroneous cycle-slip corrections. There is a need for other sensors, e.g. inertial sensors, that allow an independent prediction of the position. The change of the predicted position over time can then be used for cycle-slip detection and correction. In this paper, we provide a method to improve the initial ambiguity resolution for RTK and PPP with vision-based position information. Camera images are correlated with geo-referenced aerial/satellite images to obtain independent absolute position information. This absolute position information is then coupled with the GNSS and INS measurements in an extended Kalman filter to estimate the position, velocity, acceleration, attitude, angular rates, code multipath, and biases of the accelerometers and gyroscopes. The camera and satellite images are matched based on characteristic image points (e.g. corners of street markers). We extract these characteristic image points from the camera images by performing the following steps: an inverse mapping (homogeneous projection) is applied to transform the camera images from the driver's perspective to a bird's-eye view. Subsequently, we detect the street markers by performing (a) a color transformation and reduction with adaptive brightness correction to focus on relevant features, (b) a subsequent morphological operation to enhance the structure recognition, (c) an edge and corner detection to extract feature points, and (d) a point matching of the corner points with a template to recognize the street markers. We verified the proposed method with two low-cost u-blox LEA-6T GPS receivers, the MPU9150 from Invensense, the ASCOS RTK corrections, and a PointGrey camera.
The results show very precise and seamless position and attitude estimates in an urban environment with substantial multipath.
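    The inverse mapping from the driver's perspective to a bird's-eye view is a planar homography. A minimal sketch of applying a 3x3 matrix H to a pixel; the matrices below are placeholders, since a real H would come from the camera's calibrated height and pitch:

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested
    lists) into the target plane, e.g. the bird's-eye view."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # homogeneous -> Cartesian coordinates

# Identity homography leaves points unchanged (placeholder matrix):
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```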

  3. The image pretreatment based on the FPGA inside digital CCD camera

    NASA Astrophysics Data System (ADS)

    Tian, Rui; Liu, Yan-ying

    2009-07-01

    For a space project, a digital CCD camera was required that can image clearly in a 1 lux light environment. The CCD sensor ICX285AL produced by SONY Co., Ltd. is used in the camera, and the FPGA (Field Programmable Gate Array) chip XQR2V1000 serves as the timing generator and signal processor inside it. In the low-light environment, however, two kinds of random noise become apparent as the CCD camera's variable gain increases: dark-current noise in the image background, and vertical-transfer noise. This paper introduces a real-time method for eliminating this noise based on the FPGA inside the CCD camera. The causes and characteristics of the random noise are analyzed. First, several candidate approaches for eliminating dark-current noise were considered; they were then simulated in VC++ in order to compare their speed and effect, and a Gauss filter was chosen for its filtering effect. The vertical-transfer noise has the characteristic that the noise points fall at fixed ordinates in the image's two-dimensional coordinates, and its behavior is fixed: the gray value of the noise points is 16-20 less than the surrounding pixels. According to these characteristics, a local median filter is used to clear up the vertical noise. Finally, these algorithms were ported into the FPGA chip inside the CCD camera. A large number of experiments have shown that the pretreatment performs well in real time and improves the digital CCD camera's signal-to-noise ratio by 3-5 dB in the low-light environment.
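    The local median filter exploits the fact that the vertical-transfer noise sits at fixed column positions. A minimal Python sketch of the idea (the actual implementation runs on the FPGA over streamed pixels; the neighbourhood choice here is illustrative):

```python
from statistics import median

def remove_vertical_noise(img, noisy_cols, k=1):
    """Replace pixels in known noisy columns with the median of the k
    nearest horizontal neighbours on each side (excluding the noisy
    column itself). img is a list of rows of gray values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in noisy_cols:
            neigh = [img[y][x + dx]
                     for dx in range(-k, k + 1)
                     if dx != 0 and 0 <= x + dx < w]
            out[y][x] = median(neigh)
    return out
```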

  4. Multi-camera: interactive rendering of abstract digital images 

    E-print Network

    Smith, Jeffrey Statler

    2004-09-30

    [The available record excerpt contains only figure-list fragments: a Cubist self-portrait by Robert Schiffhauer (images used with permission) and "P-197-K", acrylic on canvas, 136 cm × 136 cm, 1977, Collection Daimler.]

  5. High-speed two-camera imaging pyrometer for mapping fireball temperatures.

    PubMed

    Densmore, John M; Homan, Barrie E; Biss, Matthew M; McNesby, Kevin L

    2011-11-20

    A high-speed imaging pyrometer was developed to investigate the behavior of flames and explosive events. The instrument consists of two monochrome high-speed Phantom v7.3 cameras made by Vision Research Inc., arranged so that one lens assembly collects light for both cameras. The cameras are filtered at 700 or 900 nm with a 10 nm bandpass. The high irradiance produced by blackbody emission, combined with variable shutter time and f-stop, produces properly exposed images. The wavelengths were chosen with the expected temperatures in mind, and also to avoid any molecular or atomic gas-phase emission. Temperatures of exploded TNT charges measured using this pyrometer are presented. PMID:22108886

  6. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.

  7. Efficiency of the Human Observer in LROC Studies

    E-print Network

    …the test image g belongs to. This task is similar to a (J+1)-class classification problem. For class H0, the test image g consists of background image data Hb plus zero-mean measurement noise, but has no signal. For class Hj, g also contains signal image data Hsj with signal…

  8. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    NASA Astrophysics Data System (ADS)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies as an input device in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment, and devices. In this study, a simple eyegaze detection algorithm is proposed using a single monocular video camera. The proposed algorithm works under the condition of a fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface, and verified the performance through key-typing experiments with a visual keyboard on display.
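    The step from iris-center displacement to gaze angle can be sketched with the spherical eyeball model the abstract describes: the iris center moves on a sphere of assumed radius, so the angle follows from the arcsine of the displacement. The function name and pixel values below are illustrative:

```python
import math

def gaze_angle_deg(iris_px, ref_px, eyeball_radius_px):
    """Gaze angle from the displacement of the iris centre between the
    current frame and the calibration reference frame, modelling the
    iris as a point rotating on a sphere of fixed radius (in pixels)."""
    d = math.dist(iris_px, ref_px)  # Euclidean distance in pixels
    d = min(d, eyeball_radius_px)   # clamp to keep asin in its domain
    return math.degrees(math.asin(d / eyeball_radius_px))
```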

  9. Chandra High Resolution Camera Imaging of GRS 1758-258

    E-print Network

    W. A. Heindl; D. M. Smith

    2002-08-19

    We observed the "micro-quasar" GRS 1758-258 four times with Chandra. Two HRC-I observations were made in 2000 September-October spanning an intermediate-to-hard spectral transition (identified with RXTE). Another HRC-I and an ACIS/HETG observation were made in 2001 March following a hard-to-soft transition to a very low flux state. Based on the three HRC images and the HETG zero order image, the accurate position (J2000) of the X-ray source is RA = 18h 01m 12.39s, Dec = -25d 44m 36.1s (90% confidence radius = 0".45), consistent with the purported variable radio counterpart. All three HRC images are consistent with GRS 1758-258 being a point source, indicating that any bright jet is less than ~1 light-month in projected length, assuming a distance of 8.5 kpc.

  10. Camera Pose Estimation and Reconstruction from Image Profiles under Circular Motion

    E-print Network

    Wong, Kenneth K.Y.

    Camera Pose Estimation and Reconstruction from Image Profiles under Circular Motion, Paulo R. S. … This work addresses the problem of motion estimation and reconstruction of 3D models from the profiles of an object under circular motion; the excerpt also notes that related approaches need at least two epipolar tangencies or are restricted to linear motion [18].

  11. A multiple-plate, multiple-pinhole camera for X-ray gamma-ray imaging

    NASA Technical Reports Server (NTRS)

    Hoover, R. B.

    1971-01-01

    Plates with identical patterns of precisely aligned pinholes constitute a lens system which, when rotated about the optical axis, produces a continuous high-resolution image of a low-energy X-ray or gamma-ray source. The camera has applications in radiation treatment and nuclear medicine.

  12. Hyperspectral imaging using a color camera and its application for pathogen detection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  13. REVIEW OF SCIENTIFIC INSTRUMENTS 84, 123504 (2013) Extreme ultra-violet movie camera for imaging microsecond time scale

    E-print Network

    Bellan, Paul M.

    2013-01-01

    (Received November 2013; published online 12 December 2013.) An ultra-fast extreme ultra-violet (EUV) movie camera has been developed. Time-resolved fast imaging of extreme ultra-violet (EUV) and soft x-ray radiation is useful to understand magnetic…

  14. Research on the affect of differential-images technique to the resolution of infrared spatial camera

    NASA Astrophysics Data System (ADS)

    Jin, Guang; An, Yuan; Qi, Yingchun; Hu, Fusheng

    2007-12-01

    The optical system of an infrared spatial camera adopts a large relative aperture and a large pixel size on the focal-plane element, which makes the system bulky and limits its resolution, so the potential of the optical system cannot be exploited adequately. This paper therefore introduces a method for improving the resolution of an infrared spatial camera based on multi-frame difference images. The method uses more than one detector to acquire several difference images, and then reconstructs a new high-resolution image from these images through the relationship of pixel gray values. The difference-image technique using more than two detectors is researched, and it can improve the resolution 2.5 times in theory. The relationship of pixel gray values between the low-resolution difference images and the high-resolution image is found by analyzing the energy of CCD sampling, and a general relationship is given between the resolution-enhancement factor of the differential method and the minimum number of CCDs needed to detect the figure. Based on this theoretical research, the process of using the difference-image technique to improve image resolution was simulated in Matlab software, taking a portrait image as the object, with the result output as an image. The results prove that the technique is effective for high-resolution image reconstruction. The resolution of an infrared spatial camera can be improved evidently, while holding the size of the optical structure or using a large-size detector, by applying the difference-image technique. The technique therefore has high value in optical remote sensing.

  15. Infrared Array Camera (IRAC) Imaging of the Lockman Hole

    E-print Network

    Huang, J. S.; Barmby, P.; Fazio, G. G.; Willner, S. P.; Wilson, Graham Wallace; Rigopoulou, D.; Alonso-Herrero, A.; Dole, H.; Egami, E.; Le Floc'h, E.; Papovich, C.; Perez-Gonzalez, P. G.; Rigby, J.; Engelbracht, C. W.; Gordon, K.; Hines, D.; Rieke, M.; Rieke, G. H.; Meisenheimer, K.; Miyazaki, S.

    2004-09-05

    IRAC imaging of a 4.7′ × 4.7′ area in the Lockman Hole detected over 400 galaxies in the IRAC 3.6 and 4.5 μm bands, 120 in the 5.8 μm band, and 80 in the 8.0 μm band in 30 minutes of observing time. Color-color diagrams suggest that about half...

  16. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled-device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
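    The 432-million-pixel figure can be reproduced from the 11 μm spot size if one assumes the standard 23 cm × 23 cm aerial film frame (the frame size is our assumption; the abstract states only the spot size and the 39 lp/mm resolution):

```python
frame_mm = 230.0   # assumed 23 cm x 23 cm standard aerial film format
spot_um = 11.0     # scanning spot size quoted in the text

pixels_per_side = frame_mm * 1000.0 / spot_um  # ~20,900 pixels per side
total_pixels = pixels_per_side ** 2            # ~4.37e8, i.e. ~432 million

# Sanity check: 39 lp/mm implies a resolvable half-period of ~12.8 um,
# consistent with scanning at an 11 um spot
nyquist_um = 1000.0 / (2 * 39)
```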

  17. The iQID camera: An ionizing-radiation quantum imaging detector

    PubMed Central

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications. PMID:26166921
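    The event-by-event position and energy estimation described above can be sketched with standard image analysis tools. A minimal single-frame version using connected-component labeling (scipy.ndimage here stands in for the GPU-based algorithms the paper describes; the function name and threshold are illustrative):

```python
import numpy as np
from scipy import ndimage

def detect_events(frame, threshold):
    """Sketch of per-event estimation on one background-subtracted frame:
    threshold, label connected clusters of lit pixels (one cluster per
    particle), then take each cluster's intensity-weighted centroid as
    position and its summed signal as an energy estimate."""
    labels, n = ndimage.label(frame > threshold)
    idx = np.arange(1, n + 1)
    centroids = ndimage.center_of_mass(frame, labels, idx)   # (row, col) pairs
    energies = ndimage.sum(frame, labels, idx)               # summed counts
    return centroids, energies
```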

  18. High frame rate CCD cameras with fast optical shutters for military and medical imaging applications

    SciTech Connect

    King, N.S.P.; Albright, K.; Jaramillo, S.A.; McDonald, T.E.; Yates, G.J.; Turko, B.T.

    1994-09-01

    Los Alamos National Laboratory has designed and prototyped high-frame rate intensified/shuttered Charge-Coupled-Device (CCD) cameras capable of operating at kilohertz frame rates (non-interlaced mode) with optical shutters capable of acquiring nanosecond-to-microsecond exposures each frame. These cameras utilize an Interline Transfer CCD, Loral Fairchild CCD-222 with 244 × 380 pixels operated at pixel rates approaching 100 MHz. Initial prototype designs demonstrated single-port serial readout rates exceeding 3.97 kHz with greater than 5 lp/mm spatial resolution at shutter speeds as short as 5 ns. Readout was achieved by using a truncated format of 128 × 128 pixels by partial masking of the CCD and then subclocking the array at approximately a 65 MHz pixel rate. Shuttering was accomplished with a proximity focused microchannel plate (MCP) image intensifier (MCPII) that incorporated a high strip current MCP and a design modification for high-speed stripline gating geometry to provide both fast shuttering and high repetition rate capabilities. Later camera designs use a close-packed quadruple head geometry fabricated using an array of four separate CCDs (pseudo 4-port device). This design provides four video outputs with optional parallel or time-phased sequential readout modes. The quad head format was designed with flexibility for coupling to various image intensifier configurations, including individual intensifiers for each CCD imager, a single intensifier with fiber optic or lens/prism coupled fanout of the input image to be shared by the four CCD imagers, or a large diameter phosphor screen of a gateable framing type intensifier for time sequential relaying of a complete new input image to each CCD imager. Camera designs and their potential use in ongoing military and medical time-resolved imaging applications are discussed.

  19. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    SciTech Connect

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Barber, Bradford H.; Furenlid, Lars R.

    2014-06-11

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The detector's response to a broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. We present the latest results and discuss potential applications.

  20. The iQID camera: An ionizing-radiation quantum imaging detector

    NASA Astrophysics Data System (ADS)

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; Barrett, Harrison H.; Bradford Barber, H.; Furenlid, Lars R.

    2014-12-01

    We have developed and tested a novel, ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation including alpha, neutron, beta, and fission fragment particles. The confirmed response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. The spatial location and energy of individual particles are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, excellent detection efficiency for charged particles, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is real-time, single-particle digital autoradiography. We present the latest results and discuss potential applications.

  1. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that projects computer-processed X-rays to acquire tomographic images or slices of a specific organ of the body. Motion artifacts caused by patient motion are a common problem in CT systems and may introduce undesirable artifacts in CT images. This paper analyzes the critical problems in motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation that is not possible with previous techniques.

  2. Real-time integral imaging system with handheld light field camera

    NASA Astrophysics Data System (ADS)

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Byoungho

    2014-11-01

    Our objective is to construct a real-time pickup-and-display integral imaging system using a handheld light field camera. A micro lens array and a high-frame-rate charge-coupled device (CCD) are used to implement the handheld light field camera, and a simple lens array and a liquid crystal (LC) display panel are used to reconstruct three-dimensional (3D) images in real time. The handheld light field camera is implemented by adding the micro lens array to the CCD sensor; a main lens mounted on the CCD sensor captures the scene. To generate the elemental images in real time, a pixel mapping algorithm is applied. With this algorithm, not only is the pseudoscopic problem solved, but the user can also change the depth plane of the displayed 3D images in real time. For real-time, high-quality 3D video generation, a high-resolution, high-frame-rate CCD and LC display panel are used in the proposed system. Experimental and simulation results are presented to verify the proposed system. As a result, 3D images are captured and reconstructed in real time through the integral imaging system.
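    One common pixel-mapping scheme for the pseudoscopic problem is a 180° rotation of each elemental image; the paper's actual algorithm is not specified here, so this is only an illustrative sketch:

```python
import numpy as np

def correct_pseudoscopic(elemental_images, lens_px):
    """Illustrative pixel mapping: rotate each lens_px x lens_px elemental
    image by 180 degrees, a standard way to turn a pseudoscopic
    (depth-inverted) integral-imaging reconstruction into an orthoscopic one."""
    out = np.empty_like(elemental_images)
    h, w = elemental_images.shape
    for i in range(0, h, lens_px):
        for j in range(0, w, lens_px):
            out[i:i + lens_px, j:j + lens_px] = \
                elemental_images[i:i + lens_px, j:j + lens_px][::-1, ::-1]
    return out
```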

  3. Mathematical Problems of Thermoacoustic and Compton Camera Imaging 

    E-print Network

    Georgieva-Hristova, Yulia Nekova

    2011-10-21

    is the inversion of the cone transform. We present three methods for inversion of this transform in two dimensions. Numerical examples of reconstructions by these methods are also provided. Lastly, we turn to a problem of significance in homeland security...

  4. The trustworthy digital camera: Restoring credibility to the photographic image

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1994-01-01

    The increasing sophistication of computers has made digital manipulation of photographic images, as well as other digitally-recorded artifacts such as audio and video, incredibly easy to perform and increasingly difficult to detect. Today, every picture appearing in newspapers and magazines has been digitally altered to some degree, with the severity varying from the trivial (cleaning up 'noise' and removing distracting backgrounds) to the point of deception (articles of clothing removed, heads attached to other people's bodies, and the complete rearrangement of city skylines). As the power, flexibility, and ubiquity of image-altering computers continues to increase, the well-known adage that 'the photograph doesn't lie' will continue to become an anachronism. A solution to this problem comes from a concept called digital signatures, which incorporates modern cryptographic techniques to authenticate electronic mail messages. 'Authenticate' in this case means one can be sure that the message has not been altered, and that the sender's identity has not been forged. The technique can serve not only to authenticate images, but also to help the photographer retain and enforce copyright protection when the concept of 'electronic original' is no longer meaningful.
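    The authentication idea can be sketched with hash-plus-signature primitives. The example below uses an HMAC as a simplified symmetric stand-in for the public-key signature the paper envisions (a real trustworthy camera would sign with a private key sealed in hardware; the key and function names are hypothetical):

```python
import hashlib
import hmac

# Hypothetical per-camera secret; stands in for the camera's private key.
CAMERA_KEY = b"secret-embedded-in-camera"

def sign_image(image_bytes):
    """Hash the image, then authenticate the hash with the camera's key."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    """True only if the image is bit-for-bit what the camera signed."""
    return hmac.compare_digest(sign_image(image_bytes), signature)
```

    Any post-capture alteration changes the hash, so the signature check fails and the image can no longer pass as an 'electronic original'.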

  5. MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging

    PubMed Central

    Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2009-01-01

    Introduction We describe a compact, portable dual-gamma camera system (named “MONICA” for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed “looking up” through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1-cm), 2.2-mm; sensitivity, 149 cps/MBq (5.5 cps/μCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r2 = 0.99 for 0–185 MBq (0–5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g. limited imaging space, portability, and, potentially, cost are important. PMID:20346864

  6. ITEM—QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    NASA Astrophysics Data System (ADS)

    Pauli, J.; Pauli, E.-M.; Anton, G.

    2002-08-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation-maximization problems based on quantum mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems such as CT, PET, SPECT, NMR, Compton camera imaging, and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton camera.
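    The core idea, projecting onto the minimum-energy state by evolving in imaginary time, can be sketched for a small Hamiltonian matrix. This is a generic first-order sketch, not the ITEM implementation itself (step size and iteration count are illustrative):

```python
import numpy as np

def ground_state(H, dt=0.01, steps=5000, seed=0):
    """Generic imaginary-time sketch: repeatedly apply (I - dt*H), a
    first-order approximation of exp(-H*dt), and renormalise.  High-energy
    components decay fastest, so the state converges to the minimum-energy
    eigenvector of the (symmetric) Hamiltonian H."""
    psi = np.random.default_rng(seed).standard_normal(H.shape[0])
    for _ in range(steps):
        psi = psi - dt * (H @ psi)
        psi /= np.linalg.norm(psi)
    return psi, psi @ H @ psi   # state and its energy expectation
```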

  7. Wide Field Camera 3: A Powerful New Imager for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Kimble, Randy

    2008-01-01

    Wide Field Camera 3 (WFC3) is a powerful UV/visible/near-infrared camera in development for installation into the Hubble Space Telescope during upcoming Servicing Mission 4. WFC3 provides two imaging channels. The UVIS channel incorporates a 4096 x 4096 pixel CCD focal plane with sensitivity from 200 to 1000 nm. The IR channel features a 1024 x 1024 pixel HgCdTe focal plane covering 850 to 1700 nm. We report here on the design of the instrument, the performance of its flight detectors, results of the ground test and calibration program, and the plans for the Servicing Mission installation and checkout.

  8. A portable device for small animal SPECT imaging in clinical gamma-cameras

    NASA Astrophysics Data System (ADS)

    Aguiar, P.; Silva-Rodríguez, J.; González-Castaño, D. M.; Pino, F.; Sánchez, M.; Herranz, M.; Iglesias, A.; Lois, C.; Ruibal, A.

    2014-07-01

    Molecular imaging has been reshaping clinical practice over the last few decades, providing practitioners with non-invasive ways to obtain functional in-vivo information on a diversity of relevant biological processes. The use of molecular imaging techniques in preclinical research is equally beneficial, but spreads more slowly owing to the difficulty of justifying a costly investment dedicated only to animal scanning. An alternative for lowering the costs is to repurpose parts of old clinical scanners to build new preclinical ones. Following this trend, we have designed, built, and characterized the performance of a portable system that can be attached to a clinical gamma-camera to create a preclinical single photon emission computed tomography scanner. Our system offers image quality comparable to commercial systems at a fraction of their cost, and can be used with any existing gamma-camera with just an adaptation of the reconstruction software.

  9. Individual camera identification using correlation of fixed pattern noise in image sensors.

    PubMed

    Kurosawa, Kenji; Kuroki, Kenro; Akiba, Norimitsu

    2009-05-01

    This paper presents results of experiments related to individual video camera identification using a correlation coefficient of fixed pattern noise (FPN) in image sensors. Five color charge-coupled device (CCD) modules of the same brand were examined. Images were captured using a 12-bit monochrome video capture board and stored in a personal computer. For each module, 100 frames were captured. They were integrated to obtain FPN. The results show that a specific CCD module was distinguished among the five modules by analyzing the normalized correlation coefficient. The temporal change of the correlation coefficient during several days had only a negligible effect on identifying the modules. Furthermore, a positive relation was found between the correlation coefficient of the same modules and the number of frames that were used for image integration. Consequently, precise individual camera identification is enhanced by acquisition of as many frames as possible. PMID:19302379
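    The fingerprinting procedure above, integrating frames to isolate FPN and then comparing modules via a normalized correlation coefficient, can be sketched as follows (array shapes and the frame-averaging step follow the abstract; function names are illustrative):

```python
import numpy as np

def fpn_fingerprint(frames):
    """Average many frames from the same module: temporal noise averages
    out, leaving the sensor's fixed pattern noise (mean level removed)."""
    fpn = np.asarray(frames).mean(axis=0)
    return fpn - fpn.mean()

def ncc(a, b):
    """Normalized correlation coefficient between two FPN fingerprints."""
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

    Fingerprints from the same module correlate strongly, while fingerprints from different modules correlate near zero, which is the basis of the identification; integrating more frames suppresses the temporal noise term and sharpens the separation.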

  10. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-07-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a hemispherical sky imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands (red, green, and blue). To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated using spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for the complete distribution over the hemisphere between both instruments deviates by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra over the 380-760 nm wavelength range between both instruments at various directions deviate by less than 20% for all sky conditions.

  11. Validation of spectral sky radiance derived from all-sky camera images - a case study

    NASA Astrophysics Data System (ADS)

    Tohsing, K.; Schrempf, M.; Riechelmann, S.; Seckmeyer, G.

    2014-01-01

    Spectral sky radiance (380-760 nm) is derived from measurements with a Hemispherical Sky Imager (HSI) system. The HSI consists of a commercial compact CCD (charge coupled device) camera equipped with a fish-eye lens and provides hemispherical sky images in three reference bands (red, green, and blue). To obtain the spectral sky radiance from these images, non-linear regression functions for various sky conditions have been derived. The camera-based spectral sky radiance was validated against spectral sky radiance measured with a CCD spectroradiometer. The spectral sky radiance for the complete distribution over the hemisphere between both instruments deviates by less than 20% at 500 nm for all sky conditions and for zenith angles less than 80°. The reconstructed spectra over the 380-760 nm wavelength range between both instruments at various directions deviate by less than 20% for all sky conditions.

  12. Advanced High-Speed Framing Camera Development for Fast, Visible Imaging Experiments

    SciTech Connect

    Amy Lewis, Stuart Baker, Brian Cox, Abel Diaz, David Glass, Matthew Martin

    2011-05-11

    The advances in high-voltage switching developed in this project allow a camera user to rapidly vary the number of output frames from 1 to 25. A high-voltage, variable-amplitude pulse train shifts the deflection location to the new frame location during the interlude between frames, making multiple frame counts and locations possible. The final deflection circuit deflects to five different frame positions per axis, including the center position, making for a total of 25 frames. To create the preset voltages, electronically adjustable ±500 V power supplies were chosen. Digital-to-analog converters provide digital control of the supplies. The power supplies are clamped to ±400 V so as not to exceed the voltage ratings of the transistors. A field-programmable gate array (FPGA) receives the trigger signal and calculates the combination of plate voltages for each frame. The interframe time and number of frames are specified by the user, but are limited by the camera electronics. The variable-frame circuit shifts the plate voltages of the first frame to those of the second frame during the user-specified interframe time. Designed around an electrostatic image tube, a framing camera images the light present during each frame (at the photocathode) onto the tube's phosphor. The phosphor persistence allows the camera to display multiple frames on the phosphor at one time. During this persistence, a CCD camera is triggered and the analog image is collected digitally. The tube functions by converting photons to electrons at the negatively charged photocathode. The electrons move quickly toward the more positive charge of the phosphor. Two sets of deflection plates skew the electron's path in horizontal and vertical (x-axis and y-axis, respectively) directions. Hence, each frame's electrons bombard the phosphor surface at a controlled location defined by the voltages on the deflection plates. To prevent the phosphor from being exposed between frames, the image tube is gated off between exposures.
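    The per-frame deflection bookkeeping the FPGA performs can be sketched as a mapping from frame index to plate voltages. A minimal sketch assuming a uniform 5 × 5 grid spanning the ±400 V clamp range (the actual voltage presets are not given in the abstract):

```python
def frame_voltages(frame_index, v_max=400.0):
    """Illustrative FPGA bookkeeping: map a frame number (0-24) to (x, y)
    deflection-plate voltages on a uniform 5 x 5 grid of frame positions,
    spanning the +/-400 V clamp range quoted in the abstract."""
    if not 0 <= frame_index < 25:
        raise ValueError("frame index must be in 0..24")
    col, row = frame_index % 5, frame_index // 5
    step = v_max / 2          # positions at -400, -200, 0, +200, +400 V
    return (col - 2) * step, (row - 2) * step
```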

  13. Engineering performance of IRIS2 infrared imaging camera and spectrograph

    NASA Astrophysics Data System (ADS)

    Churilov, Vladimir; Dawson, John; Smith, Greg A.; Waller, Lew; Whittard, John D.; Haynes, Roger; Lankshear, Allan; Ryder, Stuart D.; Tinney, Chris G.

    2004-09-01

    IRIS2, the infrared imager and spectrograph for the Cassegrain focus of the Anglo-Australian Telescope, has been in service since October 2001. IRIS2 incorporated many novel features, including multiple cryogenic multislit masks, a dual chambered vacuum vessel (the smaller chamber used to reduce the thermal cycle time required to change sets of multislit masks), encoded cryogenic wheel drives with controlled backlash, a deflection compensating structure, and use of Teflon-impregnated hard anodizing for gear lubrication at low temperatures. Other noteworthy features were: swaged foil thermal link terminations, the pupil imager, the detector focus mechanism, phased getter cycling to prevent detector contamination, and a flow-through LN2 precooling system. The instrument control electronics was designed to allow accurate positioning of the internal mechanisms with minimal generation of heat. The detector controller was based on the AAO2 CCD controller, adapted for use on the HAWAII1 detector (1024 x 1024 pixels) and is achieving low noise and high performance. We describe features of the instrument design, the problems encountered and the development work required to bring them into operation, and their performance in service.

  14. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
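    For two orthogonally aligned cameras, the closed-form combination of the two views is straightforward. A minimal sketch, assuming camera 1 views along z (reporting x, y) and camera 2 views along x (reporting z, y), with calibration to world units already applied (axis assignments are an assumption for illustration):

```python
def reconstruct_3d(cam1_xy, cam2_zy):
    """Closed-form combination for two orthogonal views: camera 1 looks
    along z and reports (x, y); camera 2 looks along x and reports (z, y).
    The y coordinate seen by both cameras is averaged.  Assumes pixel
    coordinates have already been mapped to world units by calibration."""
    x, y1 = cam1_xy
    z, y2 = cam2_zy
    return x, (y1 + y2) / 2.0, z
```

    The redundant y measurement gives a simple consistency check: a large disagreement between y1 and y2 flags a mismatched particle pair from the stereo-matching phase.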

  15. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
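    The heterodyne trick relies on the superlinear response mixing two high frequencies down to a slow beat at |f1 − f2|. A minimal numerical sketch (the exponent 1.7 follows the abstract; the frequencies and sampling parameters are purely illustrative):

```python
import numpy as np

# Two excitation tones 40 Hz apart driving a power-law (superlinear) response.
fs, f1, f2, p = 100_000, 4000.0, 4040.0, 1.7
t = np.arange(fs) / fs                                   # 1 s at 1 Hz resolution
drive = 2.0 + np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
pl = drive ** p                                          # superlinear PL response
spectrum = np.abs(np.fft.rfft(pl - pl.mean()))
beat_bin = int(round(abs(f1 - f2)))                      # 1 Hz bins -> index 40
```

    Because the response is nonlinear, the cross term between the two tones appears at the difference frequency, 40 Hz here, which is slow enough for a camera frame rate to follow even though the excitation itself is far above it.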

  16. Measuring the image quality of digital-camera sensors by a ping-pong ball

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio M.; Rubiño, Manuel; Castro, José J.; Salas, Carlos; Pérez-Ocón, Francisco

    2014-07-01

    In this work, we present a low-cost experimental setup to evaluate the image quality of digital-camera sensors, which can be implemented in undergraduate and postgraduate teaching. The method consists of evaluating the modulation transfer function (MTF) of digital-camera sensors by speckle patterns using a ping-pong ball as a diffuser, with two handmade circular apertures acting as input and output ports, respectively. To specify the spatial-frequency content of the speckle pattern, it is necessary to use an aperture; for this, we made a slit in a piece of black cardboard. First, the MTF of a digital-camera sensor was calculated using the ping-pong ball and the handmade slit, and then the MTF was calculated using an integrating sphere and a high-quality steel slit. Finally, the results achieved with both experimental setups were compared, showing a similar MTF in both cases.

  17. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images, separately measured by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e., normalized) to relative reflectance, subsampled, and spatially registered to match counterpart pixels in the hyperspectral images, which were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) had previously been developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially at higher polynomial orders. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.

  18. Strategies for registering range images from unknown camera positions

    NASA Astrophysics Data System (ADS)

    Bernardini, Fausto; Rushmeier, Holly E.

    2000-03-01

    We describe a project to construct a 3D numerical model of Michelangelo's Florentine Pieta to be used in a study of the sculpture. Here we focus on the registration of the range images used to construct the model. The major challenge was the range of length scales involved. A resolution of 1 mm or less was required for the 2.25 m tall piece. To achieve this resolution, we could only acquire an area of 20 by 20 cm per scan. A total of approximately 700 images were required. Ideally, a tracker would be attached to the scanner to record position and pose, but the use of a tracker was not possible in the field. Instead, we used a crude-to-fine approach to registering the meshes to one another. The crudest level consisted of pairwise manual registration, aided by texture maps containing laser dots that were projected onto the sculpture. This crude alignment was refined by an automatic registration of laser dot centers; in this phase, we found that consistency constraints on dot matches were essential to obtaining accurate results. The laser dot alignment was further refined using a variation of the ICP algorithm developed by Besl and McKay. In applying ICP to global registration, we developed a method to avoid one class of local minima by finding a set of points, rather than the single point, that matches each candidate point.
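    The Besl-McKay ICP inner step, the closed-form rigid transform that best aligns matched point pairs, can be sketched with the SVD-based Kabsch solution (this shows only the alignment step, not the correspondence search or the authors' local-minima avoidance):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """The closed-form least-squares step inside Besl-McKay ICP: given
    matched point sets P and Q (n x 3), find rotation R and translation t
    minimising ||R P + t - Q|| via SVD of the cross-covariance (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3 x 3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```

    Full ICP alternates this solve with re-finding nearest-neighbor correspondences until the alignment error stops decreasing.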

  19. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging

    SciTech Connect

    Andreozzi, Jacqueline M. Glaser, Adam K.; Zhang, Rongxiao; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2015-02-15

    Purpose: To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Methods: Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron-multiplying intensified charge-coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image under room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate allowing adequate Cherenkov detection. Adequate detection was defined as an average electron count, in the background-subtracted Cherenkov image region of interest, in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare the clinical advantages and limitations of each system. Results: Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface under ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and a pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution.
An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs the previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. Conclusions: The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.
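The adequate-detection criterion above is simple arithmetic on the 16-bit dynamic range; the quoted 327-count threshold follows from truncating 0.5% of full scale:

```python
# The study's "adequate detection" criterion: mean background-subtracted
# signal above 0.5 % of the 16-bit full-scale electron count.
full_scale = 2 ** 16 - 1                 # 65535 counts
threshold = int(0.005 * full_scale)      # truncates 327.675 to 327 counts
print(threshold)
```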

  20. Single camera system for multi-wavelength fluorescent imaging in the heart.

    PubMed

    Yamanaka, Takeshi; Arafune, Tatsuhiko; Shibata, Nitaro; Honjo, Haruo; Kamiya, Kaichiro; Kodama, Itsuo; Sakuma, Ichiro

    2012-01-01

    Optical mapping has been a powerful method for measuring cardiac electrophysiological phenomena such as membrane potential (V(m)), intracellular calcium (Ca(2+)), and other electrophysiological parameters. To measure two parameters simultaneously, a dual mapping system using two cameras is often used; however, no method exists for measuring three or more parameters. To exploit the full potential of fluorescence imaging, an innovative method to measure multiple (three or more) parameters is needed. In this study, we present a new optical mapping system that records multiple parameters using a single camera. Our system consists of one camera, custom-made optical lens units, and a custom-made filter wheel. The optical lens units are designed to focus the fluorescence light at the filter position and form an image on the camera's sensor. To obtain optical signals of high quality, the efficiency of light collection was carefully considered in designing the optical system. The developed optical system has an object-space numerical aperture (NA) of 0.1 and an image-space NA of 0.23. The filter wheel was rotated by a motor, allowing filter switching to the fluorescence wavelength needed. The camera exposure and filter switching were synchronized by a phase-locked loop, which allows this system to record multiple fluorescent signals frame by frame, alternately. To validate the performance of this system, we performed experiments observing V(m) and Ca(2+) dynamics simultaneously (frame rate: 125 fps) in a Langendorff-perfused rabbit heart. First, we applied basic stimuli to the heart base (cycle length: 500 ms) and observed a planar wave. The waveforms of V(m) and Ca(2+) show the same upstroke, synchronized with the pacing cycle length. In addition, we recorded V(m) and Ca(2+) signals during ventricular fibrillation induced by burst pacing. These experiments demonstrate the efficacy and utility of our method for cardiac electrophysiological research.
PMID:23366735
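Because the filter wheel and camera exposures are phase-locked, consecutive frames cycle through the dyes, and the per-dye movies can be recovered by simple stride indexing. A toy sketch (two channels on a hypothetical frame stack; the real system's channel count and frame layout may differ):

```python
import numpy as np

def demultiplex(frames, n_channels):
    """Split an interleaved frame stack into per-channel stacks.

    Assumes the filter wheel advances one position per exposure, so frame i
    belongs to channel i % n_channels (an illustrative layout).
    """
    return [frames[c::n_channels] for c in range(n_channels)]

# Toy stack of 8 frames alternating between a Vm filter and a Ca2+ filter
frames = np.arange(8, dtype=float)[:, None, None] * np.ones((1, 4, 4))
vm, ca = demultiplex(frames, 2)
print(vm.shape, ca.shape)   # (4, 4, 4) each: vm holds frames 0,2,4,6; ca holds 1,3,5,7
```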

  1. Real-time analysis of laser beams by simultaneous imaging on a single camera chip

    NASA Astrophysics Data System (ADS)

    Piehler, S.; Boley, M.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    The fundamental parameters of a laser beam, such as the exact position and size of the focus or the beam quality factor M², yield vital information both for laser developers and end-users. However, each of these parameters can change significantly on a short time scale due to thermally induced effects in the processing optics or in the laser source itself, leading to process instabilities and non-reproducible results. In order to monitor the transient behavior of these effects, we have developed a camera-based measurement system that enables full laser beam characterization online. A novel monolithic beam splitter has been designed that generates a 2D array of images on a single camera chip, each of which corresponds to an intensity cross-section of the beam along the propagation axis, separated by a well-defined spacing. Thus, using the full area of the camera chip, a large number of measurement planes is achieved, yielding a measurement range sufficient for full beam characterization conforming to ISO 11146 over a broad range of incoming beam parameters. The exact beam diameters in each plane are derived by calculating the 2nd-order intensity moments of the individual intensity slices. The processing time needed to carry out both the background filtering and the image-processing operations for the full analysis of a single camera image is in the range of a few milliseconds. Hence, the measurement frequency of our system is mainly limited by the frame rate of the camera.
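The 2nd-order-moment beam-width calculation (the D4σ definition used by ISO 11146) can be sketched for a single intensity slice; for an ideal Gaussian the result equals four times the Gaussian σ:

```python
import numpy as np

def d4sigma(img, x, y):
    """Beam widths from 2nd-order intensity moments (the ISO 11146 D4-sigma)."""
    p = img.sum()
    cx = (img * x).sum() / p                  # centroid (1st-order moments)
    cy = (img * y).sum() / p
    sx2 = (img * (x - cx) ** 2).sum() / p     # variances (2nd-order moments)
    sy2 = (img * (y - cy) ** 2).sum() / p
    return 4.0 * np.sqrt(sx2), 4.0 * np.sqrt(sy2)

# Synthetic slice: for a Gaussian beam, D4-sigma equals exactly 4 * sigma
x, y = np.meshgrid(np.linspace(-5, 5, 401), np.linspace(-5, 5, 401))
sigma = 1.2
slice_img = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
dx, dy = d4sigma(slice_img, x, y)
print(dx, dy)   # both close to 4 * 1.2 = 4.8
```

On real camera data, the background subtraction the authors mention is essential, since the second moment is dominated by noise far from the centroid.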

  2. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered. A touch-screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes, including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
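Hadamard multiplexing, as used for the hyperspectral mode, measures weighted sums of channels and recovers the individual channels by inverting the orthogonal Hadamard matrix. A minimal sketch with ±1 weights (how TRACER's DMD physically realizes the masks is not reproduced here; this only shows the encode/decode algebra):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
H = hadamard(n)                       # each row is one +/-1 encoding mask
spectrum = np.array([3.0, 1.0, 0.0, 2.5, 4.0, 0.5, 1.5, 2.0])
measurements = H @ spectrum           # multiplexed detector readings
decoded = (H.T @ measurements) / n    # H @ H.T = n * I, so this inverts exactly
print(decoded)
```

The multiplex (Fellgett) advantage comes from every measurement collecting light from roughly half the channels at once, which is what improves SNR over scanning one channel at a time.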

  3. Initial Observations by the MRO Mars Color Imager and Context Camera

    NASA Astrophysics Data System (ADS)

    Malin, M. C.

    2006-12-01

    The Mars Color Imager (MARCI) on MRO is a copy of the wide-angle instrument flown on the unsuccessful Mars Climate Orbiter. It consists of two optical systems (visible and ultraviolet) projecting images onto a single CCD detector. The field of view of the optics is 180 degrees cross-track, sufficient to image limb-to-limb even when the MRO spacecraft is pointing off-nadir by 20 degrees. MARCI can image in two UV (260 and 320 ± 15 nm) and five visible (425, 550, 600, 650, and 750 nm, ± 25 nm) channels. The visible channels have a nadir scale of about 900 m and a limb scale of just under 5 km; the UV channels are summed to 7-8 km nadir scale. Daily global observations, consisting of 12 terminator-to-terminator, limb-to-limb swaths, are used to monitor meteorological conditions, clouds, dust storms, and ozone concentration (a surrogate for water). During high-data-rate periods, MARCI can reproduce the Mariner 9 global image mosaic every day. The Context Camera (CTX) acquires 30 km wide, 6 m/pixel images and is a new camera derived from the MCO medium-angle MARCI. Its primary purpose is to provide spatial context for MRO instruments with higher resolution or more limited fields of view. Scientifically, CTX acquires images that are nearly as high in spatial resolution as those of the Mars Orbiter Camera aboard MGS. CTX can cover about 9 percent of Mars, but stereoscopic coverage, overlap for mosaics, and re-imaging of locations to search for changes will reduce this coverage significantly.

  4. High-resolution image digitizing through 12x3-bit RGB-filtered CCD camera

    NASA Astrophysics Data System (ADS)

    Cheng, Andrew Y. S.; Pau, Michael C. Y.

    1996-09-01

    A high-resolution computer-controlled CCD image capturing system was developed using a 12-bit 1024 × 1024 pixel CCD camera and motorized RGB filters to capture images with color depth up to 36 bits. The filters separate the major color components and collect them individually, while the CCD camera maintains the spatial resolution and detector filling factor; the color separation is thus performed optically rather than electronically. Operation is simple: objects such as color photos, slides, and even x-ray transparencies are placed under the camera system, and the necessary parameters such as integration time, mixing level, and light intensity are automatically adjusted by an on-line expert system. This greatly reduces restrictions on what can be captured. This unique approach saves considerable time in adjusting image quality and gives much more flexibility in manipulating the captured object, even a 3D object, with minimal setup fixtures. In addition, the cross-sectional dimensions of a 3D object can be analyzed by adapting a fiber-optic ring light source, which is particularly useful in non-contact metrology of 3D structures. The digitized information can be stored in an easily transferable format, and users can also perform a special LUT mapping automatically or manually. Applications of the system include medical image archiving, printing quality control, and 3D machine vision.

  5. First responder thermal imaging cameras: establishment of representative performance testing conditions

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Hamins, Anthony; Rowe, Justin

    2006-04-01

    Thermal imaging cameras are rapidly becoming integral equipment for first responders for use in structure fires and other emergencies. Currently there are no standardized performance metrics or test methods available to the users and manufacturers of these instruments. The Building and Fire Research Laboratory (BFRL) at the National Institute of Standards and Technology is conducting research to establish test conditions that best represent the environment in which these cameras are used. First responders may use thermal imagers for field operations ranging from fire attack and search/rescue in burning structures, to hot-spot detection in overhaul activities, to detecting the location of hazardous materials. In order to develop standardized performance metrics and test methods that capture the harsh environment in which these cameras may be used, information has been collected from the literature and from full-scale tests conducted at BFRL. Initial experimental work has focused on temperature extremes and the presence of obscuring media such as smoke. In full-scale tests, thermal imagers viewed a target through smoke, dust, and steam, with and without flames in the field of view. The fuels tested were hydrocarbons (methanol, heptane, propylene, toluene), wood, upholstered cushions, and carpeting with padding. Gas temperatures; CO, CO2, and O2 volume fractions; emission spectra; and smoke concentrations were measured. Simple thermal bar targets and a heated mannequin fitted in firefighter gear were used as targets. The imagers were placed at three distances from the targets, ranging from 3 m to 12 m.

  6. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body with a CMOS sensor of 14.8 million pixels. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is a Philips LED with a linear emitting area that is transformed, using a light pipe, to the optimal shape at the eye pupil: an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  7. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    NASA Astrophysics Data System (ADS)

    Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas

    2011-03-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma, and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4° × 4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging the raw images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed observation of the cone mosaic between drusen. In glaucoma cases, the visual field was correlated with changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, the other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.

  8. Improving estimates of leaf area index by processing RAW images in upward-pointing-digital cameras

    NASA Astrophysics Data System (ADS)

    Jeon, S.; Ryu, Y.

    2013-12-01

    Leaf area index (LAI) measurement using an upward-pointing digital camera on the forest floor has gained great attention owing to the feasibility of measuring LAI continuously at high accuracy. However, an upward-pointing digital camera can underestimate LAI when photos are exposed to excessive light, which makes leaves against the bright sky disappear from the photo. Processing RAW images can reduce the possibility of LAI underestimation. This study aims to develop a RAW-image processing workflow and to compare RAW-derived LAI with JPEG-derived LAI. Digital photos were taken automatically three times per day (0.5, 1, and 1.5 hours before sunset) in both RAW and JPEG formats at the Gwangreung deciduous and evergreen forests in South Korea. We used the blue channel of the RAW images to quantify gap fraction, and then LAI. LAI estimates from JPEG and RAW images do not show substantial differences in the deciduous forest. However, LAI derived from RAW images in the evergreen forest, where the forest floor is fairly dark even in daytime, shows substantially less noise and greater values than JPEG-derived LAI. This study concludes that LAI estimates should be derived from RAW images for more accurate measurement of LAI.
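The gap-fraction-to-LAI step is commonly an inversion of Beer's law; the sketch below assumes that convention (the extinction coefficient k is a placeholder value, not one taken from the study). It also illustrates why an inflated gap fraction from blown-out JPEG highlights deflates the LAI estimate:

```python
import numpy as np

def lai_from_gap_fraction(gap_fraction, k=0.5):
    """Invert Beer's law, P = exp(-k * LAI), to get LAI = -ln(P) / k.

    k is an assumed extinction coefficient (~0.5 for a spherical leaf-angle
    distribution); a real analysis would set it for the actual view zenith angle.
    """
    return -np.log(gap_fraction) / k

# A blown-out JPEG inflates the apparent gap fraction and so deflates LAI:
print(lai_from_gap_fraction(0.10))   # true gap fraction
print(lai_from_gap_fraction(0.15))   # overexposed estimate gives a lower LAI
```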

  9. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages.
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  10. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages.
The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.

  11. Boston University Computer Science Technical Report No. BUCS-TR-2011-007 Camera Canvas: Image Editing Software for People with Disabilities

    E-print Network

    Kwan, Christopher; Betke, Margrit ({ckwan, betke}@cs.bu.edu), Image and Video Computing Group

    Abstract: We developed Camera Canvas, photo editing and picture drawing software for people with disabilities. Ongoing work includes additional user studies and improving the software based on feedback. Keywords: Assistive Technology, Camera

  12. Diffuse reflection imaging of sub-epidermal tissue haematocrit using a simple RGB camera

    NASA Astrophysics Data System (ADS)

    Leahy, Martin J.; O'Doherty, Jim; McNamara, Paul; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Sjoberg, Folke

    2007-05-01

    This paper describes the design and evaluation of a novel, easy-to-use tissue viability imaging system (TiVi). The system is based on the methods of diffuse reflectance spectroscopy and polarization spectroscopy. The technique has been developed as an alternative to current imaging technology in the area of microcirculation imaging, most notably optical coherence tomography (OCT) and laser Doppler perfusion imaging (LDPI). The system is based on standard digital camera technology and is sensitive to red blood cells (RBCs) in the microcirculation. The lack of clinical acceptance of both OCT and LDPI fuels the need for an objective, simple, reproducible, and portable imaging method that can provide accurate measurements related to stimulus vasoactivity in the microvasculature. The limitations of these technologies are discussed in this paper. Uses of the tissue viability system include skin-care product and drug development studies, and the assessment of spatial and temporal aspects of vasodilation (erythema) and vasoconstriction (blanching).
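As a rough illustration of how a polarization-gated RGB image can be mapped to red-blood-cell concentration: haemoglobin absorbs green light much more strongly than red, so a red-minus-green index rises with local blood content. The formula below is a simplified assumption for illustration, not the calibrated TiVi algorithm:

```python
import numpy as np

def rbc_index(rgb, gain=1.0):
    """Toy red-minus-green index as a proxy for RBC concentration.

    Haemoglobin absorbs green far more strongly than red, so (R - G) / R
    rises with blood content. This is a simplified stand-in for the
    calibrated TiVi algorithm, not its actual formula.
    """
    rgb = rgb.astype(float)
    r, g = rgb[..., 0], rgb[..., 1]
    return np.where(r > 0, gain * (r - g) / r, 0.0)

# 2x2 toy patch: the second pixel's depressed green mimics local erythema
patch = np.array([[[200, 150, 140], [200, 100, 120]],
                  [[200, 148, 138], [200, 152, 141]]], dtype=np.uint8)
print(rbc_index(patch))
```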

  13. Advances In The Image Sensor: The Critical Element In The Performance Of Cameras

    NASA Astrophysics Data System (ADS)

    Narabu, Tadakuni

    2011-01-01

    Digital imaging technology and digital imaging products are advancing at a rapid pace. The progress of digital cameras has been particularly impressive. Image sensors now have smaller pixel size, a greater number of pixels, higher sensitivity, lower noise, and a higher frame rate. Picture resolution is a function of the number of pixels of the image sensor: the more pixels there are, the smaller each pixel becomes, but the sensitivity and the charge-handling capability of each pixel can be maintained or even increased by raising the quantum efficiency and the saturation capacity of the pixel per unit area. Sony's many technologies can be successfully applied to CMOS image sensor manufacturing toward sub-2.0 µm pixel pitch and beyond.

  14. Camera model and calibration process for high-accuracy digital image metrology of inspection planes

    NASA Astrophysics Data System (ADS)

    Correia, Bento A. B.; Dinis, Joao

    1998-10-01

    High-accuracy digital-image-based metrology must rely on an integrated model of image generation that simultaneously considers the geometry of the camera-vs-object positioning and the conversion of the optical image on the sensor into an electronic digital format. In applications of automated visual inspection involving the analysis of approximately planar objects, these models are generally simplified in order to facilitate the process of camera calibration. In this context, the lack of rigor in the determination of the intrinsic parameters in such models is particularly relevant. Aiming at high-accuracy metrology of the contours of objects lying on an analysis plane, and involving sub-pixel measurements, this paper presents a three-stage camera model that includes an extrinsic component of perspective distortion and the intrinsic components of radial lens distortion and sensor misalignment. The latter two factors are crucial in applications of machine vision that rely on low-cost optical components. A polynomial model for the negative radial lens distortion of wide-field-of-view CCTV lenses is also established.
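The radial-distortion component of such a model is conventionally an even-order polynomial in the radial distance (Brown's model); the sketch below assumes that form, with placeholder coefficients rather than values from the paper:

```python
import numpy as np

def radial_distort(xy, k1, k2=0.0):
    """Brown's polynomial model: x_d = x_u * (1 + k1*r^2 + k2*r^4).

    A negative k1 gives the barrel (negative radial) distortion typical of
    short-focal-length CCTV lenses: points are pulled toward the centre.
    Coordinates are normalized, with the distortion centre at the origin.
    """
    r2 = (xy ** 2).sum(axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

pt = np.array([[0.5, 0.5]])
print(radial_distort(pt, k1=-0.2))   # [[0.45, 0.45]]: pulled inward by 10%
```

Calibration estimates k1 (and k2) from images of known targets; sub-pixel contour metrology then applies the inverse mapping before measuring.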

  15. Intercomparison of SO2 camera systems for imaging volcanic gas plumes

    USGS Publications Warehouse

    Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred

    2015-01-01

    SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.

  16. Can We Trust the Use of Smartphone Cameras in Clinical Practice? Laypeople Assessment of Their Image Quality

    PubMed Central

    Boissin, Constance; Fleming, Julian; Wallis, Lee; Hasselberg, Marie

    2015-01-01

    Abstract Background: Smartphone cameras are rapidly being introduced in medical practice, among other devices for image-based teleconsultation. Little is known, however, about the actual quality of the images taken, which is the object of this study. Materials and Methods: A series of nonclinical objects (from three broad categories) were photographed by a professional photographer using three smartphones (iPhone® 4 [Apple, Cupertino, CA], Samsung [Suwon, Korea] Galaxy S2, and BlackBerry® 9800 [BlackBerry Ltd., Waterloo, ON, Canada]) and a digital camera (Canon [Tokyo, Japan] Mark II). In a Web survey, a convenience sample of 60 laypeople "blind" to the camera types rated the quality of each photograph individually and selected the best photograph overall. We then measured how each camera scored, by object category and as a whole, and whether one camera ranked best, using Mann–Whitney U tests for 2×2 comparisons. Results: There were wide variations between and within categories in the quality assessments for all four cameras. The iPhone had the highest proportion of images individually evaluated as good, and it also ranked best for more objects than the other cameras, including the digital one. The ratings of the Samsung and BlackBerry smartphones did not significantly differ from those of the digital camera. Conclusions: Whereas one smartphone camera ranked best more often, all three smartphones obtained results at least as good as those of the digital camera. Smartphone cameras can be a substitute for digital cameras for the purposes of medical teleconsultation. PMID:26076033

  17. Intraoperative Imaging Guidance for Sentinel Node Biopsy in Melanoma Using a Mobile Gamma Camera

    SciTech Connect

    Dengel, Lynn T; Judy, Patricia G; Petroni, Gina R; Smolkin, Mark E; Rehm, Patrice K; Majewski, Stan; Williams, Mark B

    2011-04-01

    The objective is to evaluate the sensitivity and clinical utility of intraoperative mobile gamma camera (MGC) imaging in sentinel lymph node biopsy (SLNB) in melanoma. The false-negative rate for SLNB for melanoma is approximately 17%, for which failure to identify the sentinel lymph node (SLN) is a major cause. Intraoperative imaging may aid in detection of SLN near the primary site, in ambiguous locations, and after excision of each SLN. The present pilot study reports outcomes with a prototype MGC designed for rapid intraoperative image acquisition. We hypothesized that intraoperative use of the MGC would be feasible and that sensitivity would be at least 90%. From April to September 2008, 20 patients underwent Tc-99m sulfur colloid lymphoscintigraphy, and SLNB was performed with use of a conventional fixed gamma camera (FGC) and gamma probe, followed by intraoperative MGC imaging. Sensitivity was calculated for each detection method. Intraoperative logistical challenges were scored. Cases in which the MGC provided clinical benefit were recorded. Sensitivity for detecting SLN basins was 97% for the FGC and 90% for the MGC. A total of 46 SLN were identified: 32 (70%) were identified as distinct hot spots by preoperative FGC imaging, 31 (67%) by preoperative MGC imaging, and 43 (93%) by MGC imaging pre- or intraoperatively. The gamma probe identified 44 (96%) independent of MGC imaging. The MGC provided defined clinical benefit as an addition to standard practice in 5 (25%) of 20 patients. The mean score for MGC logistic feasibility was 2 on a scale of 1-9 (1 = best). Intraoperative MGC imaging provides additional information when standard techniques fail or are ambiguous. Sensitivity is 90% and can be increased. This pilot study has identified ways to improve the usefulness of an MGC for intraoperative imaging, which holds promise for reducing the false negatives of SLNB for melanoma.

  18. Experiences Supporting the Lunar Reconnaissance Orbiter Camera: the DevOps Model

    NASA Astrophysics Data System (ADS)

    Licht, A.; Estes, N. M.; Bowman-Cisneros, E.; Hanger, C. D.

    2013-12-01

    Introduction: The Lunar Reconnaissance Orbiter Camera (LROC) Science Operations Center (SOC) is responsible for instrument targeting, product processing, and archiving [1]. The LROC SOC maintains over 1,000,000 observations with over 300 TB of released data. Processing challenges compound with the acquisition of over 400 Gbits of observations daily, creating the need for a robust, efficient, and reliable suite of specialized software. Development Environment: The LROC SOC's software development methodology has evolved over time. Today, the development team operates in close cooperation with the systems administration team in a model known in the IT industry as DevOps. The DevOps model enables a highly productive development environment that facilitates accomplishment of key goals within tight schedules [2]. The LROC SOC DevOps model incorporates industry best practices including prototyping, continuous integration, unit testing, code coverage analysis, version control, and utilizing existing open source software. Scientists and researchers at LROC often prototype algorithms and scripts in a high-level language such as MATLAB or IDL. After the prototype is functionally complete, the solution is implemented as production-ready software by the developers. Following this process ensures that all controls and requirements set by the LROC SOC DevOps team are met. The LROC SOC also strives to enhance the efficiency of the operations staff through weekly presentations and informal mentoring. Many small scripting tasks are assigned to the cognizant operations personnel (end users), allowing the DevOps team to focus on more complex and mission-critical tasks. In addition to leveraging open source software, the LROC SOC has also contributed to the open source community by releasing Lunaserv [3]. Findings: The DevOps software model very efficiently provides smooth software releases and maintains team momentum. 
Having scientists prototype their own work has also proven very efficient, as developers do not need to spend time iterating over small changes; instead, these changes are realized in early prototypes and implemented before the task reaches the developers. The development practices followed by the LROC SOC DevOps team help maintain the high level of software quality that LROC SOC operations require. Application to the Scientific Community: There is no replacement for having software developed by professional developers. While it is beneficial for scientists to write software, this activity should be seen as prototyping, which is then made production-ready by professional developers. When constructed properly, even a small development team can increase the rate of software development for a research group while creating more efficient, reliable, and maintainable products. This strategy allows scientists to accomplish more, focusing on their work rather than on software development, which may not be their primary focus. [1] Robinson et al. (2010) Space Sci. Rev. 150, 81-124. [2] DeGrandis (2011) Cutter IT Journal, Vol. 24, No. 8, 34-39. [3] Estes, N.M.; Hanger, C.D.; Licht, A.A.; Bowman-Cisneros, E., Lunaserv Web Map Service: History, Implementation Details, Development, and Uses, http://adsabs.harvard.edu/abs/2013LPICo1719.2609E.
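    The prototype-then-productionize workflow described above can be sketched in miniature. In this hypothetical example (the function names and the normalization task are invented for illustration, not LROC SOC code), a scientist's straightforward prototype is reimplemented by a developer and pinned down by a unit test of the kind a continuous-integration run would execute:

```python
def normalize_prototype(values):
    # Scientist's prototype: scale samples to [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def normalize_production(values):
    # Developer version: same contract, but it also handles the
    # degenerate constant-signal input that the prototype divides by
    # zero on.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    scale = hi - lo
    return [(v - lo) / scale for v in values]

def test_production_matches_prototype():
    # The unit test locks in the prototype's behaviour so later
    # refactoring cannot silently change it.
    samples = [2.0, 4.0, 6.0, 10.0]
    assert normalize_production(samples) == normalize_prototype(samples)
    # The production version must also survive inputs the prototype cannot.
    assert normalize_production([5.0, 5.0]) == [0.0, 0.0]

test_production_matches_prototype()
```

    The division of labor is the point: the prototype fixes the intended behavior, the production rewrite adds robustness, and the test keeps the two in agreement.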

  19. Decision Strategies that Maximize the Area Under the LROC Curve

    E-print Network

    For the 2-class detection problem (signal absent or present), decision strategies can be designed to maximize AUC, the area under the ROC curve; this AUC-optimizing property makes such strategies a valuable tool in imaging systems and data processing methods, where an important task is the detection of a lesion or defect.

  20. The facsimile camera - Its potential as a planetary lander imaging system

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Katzberg, S. J.; Kelly, W. L.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which is an attractive candidate for planetary lander imaging systems and has been selected for the Viking/Mars mission because of its light weight, small size, and low power requirement. Other advantages are that it can provide good radiometric and photogrammetric accuracy because the complete field of view is scanned with a single photodetector located on or near the optical axis of the objective lens. In addition, this device has the potential capability of multispectral imaging and spectrometric measurements.

  1. Electron-tracking Compton gamma-ray camera for small animal and phantom imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etuso; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru

    2010-11-01

    We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered with F-18-FDG. We also imaged the head and thyroid gland of mice using double tracers of F-18-FDG and I-131 ions.

  2. The core of the nearby S0 galaxy NGC 7457 imaged with the HST planetary camera

    SciTech Connect

    Lauer, T.R.; Faber, S.M.; Holtzman, J.A.; Baum, W.A.; Currie, D.G.; Ewald, S.P.; Groth, E.J.; Hester, J.J.; Kelsall, T. (Lick Observatory, Santa Cruz, CA; Lowell Observatory, Flagstaff, AZ; Washington Univ., Seattle; Maryland Univ., College Park; Space Telescope Science Institute, Baltimore, MD; Princeton Univ., NJ; California Institute of Technology, Pasadena; NASA Goddard Space Flight Center, Greenbelt, MD)

    1991-03-01

    A brief analysis is presented of images of the nearby S0 galaxy NGC 7457 obtained with the HST Planetary Camera. While the galaxy remains unresolved with the HST, the images reveal that any core most likely has r(c) less than 0.052 arcsec. The light distribution is consistent with a gamma = -1.0 power law inward to the resolution limit, with a possible stellar nucleus with a luminosity of 10 million solar luminosities. This result represents the first observation outside the Local Group of a galaxy nucleus at this spatial resolution, and it suggests that such small, high surface brightness cores may be common. 20 refs.

  3. Preliminary results from a single-photon imaging X-ray charge coupled device /CCD/ camera

    NASA Technical Reports Server (NTRS)

    Griffiths, R. E.; Polucci, G.; Mak, A.; Murray, S. S.; Schwartz, D. A.; Zombeck, M. V.

    1981-01-01

    A CCD camera is described which has been designed for single-photon X-ray imaging in the 1-10 keV energy range. Preliminary results are presented from the front-side illuminated Fairchild CCD 211, which has been shown to image well at 3 keV. The problem of charge-spreading above 4 keV is discussed by analogy with a similar problem at infrared wavelengths. The total system noise is discussed and compared with values obtained by other CCD users.

  4. EPR-based ghost imaging using a single-photon-sensitive camera

    E-print Network

    Reuben S. Aspden; Daniel S. Tasca; Robert W. Boyd; Miles J. Padgett

    2013-08-05

    Correlated-photon imaging, popularly known as ghost imaging, is a technique whereby an image is formed from light that has never interacted with the object. In ghost imaging experiments two correlated light fields are produced. One of these fields illuminates the object, and the other field is measured by a spatially resolving detector. In the quantum regime, these correlated light fields are produced by entangled photons created by spontaneous parametric down-conversion. To date, all correlated-photon ghost-imaging experiments have scanned a single-pixel detector through the field of view to obtain the spatial information. However, scanning leads to a poor sampling efficiency, which scales inversely with the number of pixels, N, in the image. In this work we overcome this limitation by using a time-gated camera to record the single-photon events across the full scene. We obtain high-contrast (90%) images in either the image plane or the far-field of the photon pair source, taking advantage of the EPR-like correlations in position and momentum of the photon pairs. Our images contain a large number of modes, >500, creating opportunities in low-light-level imaging and in quantum information processing.

  5. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
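    The synthetic aperture refocusing step referenced above can be sketched as shift-and-average: each camera's image is shifted so that a chosen focal plane aligns across views, and the aligned views are averaged, so features on that plane reinforce while occluders blur away. A minimal pure-Python illustration (integer shifts only; real implementations use calibrated, sub-pixel mappings):

```python
def sa_refocus(images, shifts):
    """Shift-and-average synthetic aperture refocusing.

    images: list of 2D grayscale images (lists of lists), one per camera.
    shifts: per-camera (dy, dx) integer shifts that align a chosen focal
            plane across all views.  Pixels shifted outside the frame are
            ignored rather than wrapped.
    """
    h, w = len(images[0]), len(images[0][0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for img, (dy, dx) in zip(images, shifts):
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    acc[y][x] += img[sy][sx]
                    cnt[y][x] += 1
    # Average the aligned views: on the focal plane features reinforce,
    # while off-plane occluders are spread out (partially "seen through").
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(w)] for y in range(h)]
```

    Evaluating sa_refocus over a sequence of depth-dependent shift sets yields the 3D focal stack from which the quantitative measurements are then extracted.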

  6. Development of a portable 3CCD camera system for multispectral imaging of biological samples.

    PubMed

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  7. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  8. Evaluating the capability of time-of-flight cameras for accurately imaging a cyclically loaded beam

    NASA Astrophysics Data System (ADS)

    Lahamy, Hervé; Lichti, Derek; El-Badry, Mamdouh; Qi, Xiaojuan; Detchev, Ivan; Steward, Jeremy; Moravvej, Mohammad

    2015-05-01

    Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for the entire top surface of the beam and at the witness plates attached to it. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
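    The image-differencing step mentioned above exploits the fact that a static systematic error, such as a range bias from internal scattering, appears identically in every frame, so frame-to-frame differences cancel it while preserving the beam's periodic motion. A minimal sketch of the idea (not the authors' processing chain):

```python
def temporal_difference(frames):
    """Difference each range image against the previous one.

    A static systematic error (e.g., a range offset from internal
    scattering) appears identically in every frame, so frame-to-frame
    differencing cancels it and leaves only the surface motion.
    frames: list of 2D range images (lists of lists of floats).
    """
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        diffs.append([[c - p for p, c in zip(prow, crow)]
                      for prow, crow in zip(prev, cur)])
    return diffs
```

    With a constant 5 mm bias added to every synthetic frame, the differences recover only the simulated deflection increment, illustrating why the bias never reaches the deflection estimate.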

  9. Improvement of a snapshot spectroscopic retinal multi-aperture imaging camera

    NASA Astrophysics Data System (ADS)

    Lemaillet, Paul; Lompado, Art; Ramella-Roman, Jessica C.

    2009-02-01

    Measurement of oxygen saturation has proved to give important information about eye health and the onset of eye pathologies such as diabetic retinopathy. Recently, we presented a multi-aperture system enabling snapshot acquisition of human fundus images at six different wavelengths. In our setup a commercial fundus ophthalmoscope was interfaced with the multi-aperture system to acquire spectroscopically sensitive images of the retinal vessels, thus enabling assessment of the oxygen saturation in the retina. Snapshot spectroscopic acquisition is meant to minimize the effects of eye movements. Higher measurement accuracy can be achieved by increasing the number of wavelengths at which the fundus images are taken. In this study we present an improvement of our setup that introduces another multi-aperture camera, enabling us to take snapshot images of the fundus at nine different wavelengths. Careful consideration is given to improving image transfer by measuring the optical properties of the fundus camera used in the setup and modeling the optical train in Zemax.

  10. Performance of CID camera X-ray imagers at NIF in a harsh neutron environment

    SciTech Connect

    Palmer, N. E.; Schneider, M. B.; Bell, P. M.; Piston, K. W.; Moody, J. D.; James, D. L.; Ness, R. A.; Haugh, M. J.; Lee, J. J.; Romano, E. D.

    2013-09-01

    Charge-injection devices (CIDs) are solid-state 2D imaging sensors similar to CCDs, but their distinct architecture makes CIDs more resistant to ionizing radiation [1-3]. CID cameras have been used extensively for X-ray imaging at the OMEGA Laser Facility [4,5] with neutron fluences at the sensor approaching 10^9 n/cm^2 (DT, 14 MeV). A CID Camera X-ray Imager (CCXI) system has been designed and implemented at NIF that can be used as a rad-hard electronic-readout alternative for time-integrated X-ray imaging. This paper describes the design and implementation of the system, calibration of the sensor for X-rays in the 3-14 keV energy range, and preliminary data acquired on NIF shots over a range of neutron yields. The upper limit of neutron fluence at which CCXI can acquire usable images is ~10^8 n/cm^2, and there are noise problems that need further improvement, but the sensor has proven to be very robust in surviving high-yield shots (~10^14 DT neutrons) with minimal damage.

  11. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed and used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, free of structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in a gradient from 0 N to 500 N, simulating the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with Image J, digital image processing software that can be freely downloaded from the National Institutes of Health. The procedure includes recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in the experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. 
The load-displacement curves obtained from the optical measurement system matched the clinical results. Digital image measurement of specimen deformation based on CCD cameras and Image J software has good prospects for application in biomechanical research, with the advantages of a simple optical setup, no contact, high precision, and no special requirements on the test environment.
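    The center-of-mass step in the procedure above reduces to an intensity-weighted average of pixel coordinates, which is what delivers the sub-pixel precision reported. A minimal sketch (assuming a bright marker on a dark background; invert first for the black-dot-on-white markers used in the study):

```python
def weighted_centroid(image):
    """Sub-pixel marker location via an intensity-weighted center of mass.

    image: 2D list of grayscale values for a region containing one marker,
    assumed brighter than its background.
    Returns (row, col) with sub-pixel precision.
    """
    total = sum(sum(row) for row in image)
    # Weight each pixel's row/column index by its gray value.
    r = sum(y * v for y, row in enumerate(image) for v in row)
    c = sum(x * v for row in image for x, v in enumerate(row))
    return (r / total, c / total)
```

    Tracking the centroid of each marker region from frame to frame then gives the displacement field directly in (fractional) pixels, which the study converts to millimeters via the 0.68 mm pixel size.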

  12. 200 ps FWHM and 100 MHz repetition rate ultrafast gated camera for optical medical functional imaging

    NASA Astrophysics Data System (ADS)

    Uhring, Wilfried; Poulet, Patrick; Hanselmann, Walter; Glazenborg, René; Zint, Virginie; Nouizi, Farouk; Dubois, Benoit; Hirschi, Werner

    2012-04-01

    The paper describes the realization of a complete optical imaging device for clinical applications such as brain functional imaging by time-resolved, spectroscopic diffuse optical tomography. The entire instrument is assembled in a single setup that includes a light source, an ultrafast time-gated intensified camera, and all the electronic control units. The light source is composed of four near-infrared laser diodes driven by a nanosecond electrical pulse generator working in a sequential mode at a repetition rate of 100 MHz. The resulting light pulses, at four wavelengths, are less than 80 ps FWHM. They are injected into a four-furcated optical fiber ended with a frontal light distributor to obtain a uniform illumination spot directed towards the head of the patient. Photons back-scattered by the subject are detected by the intensified CCD camera; they are resolved according to their time of flight inside the head. The very core of the intensified camera system is the image intensifier tube and its associated electrical pulse generator. The ultrafast generator produces 50 V pulses at a repetition rate of 100 MHz with a width corresponding to the requested 200 ps gate. The photocathode and the micro-channel plate of the intensifier have been specially designed to enhance the electromagnetic wave propagation and reduce the power loss and heat that are prejudicial to the quality of the image. The whole instrumentation system is controlled by an FPGA-based module. The timing of the light pulses and the photocathode gating is precisely adjustable with a step of 9 ps. All the acquisition parameters are configurable via software through a USB connection, and the image data are transferred to a PC via an Ethernet link. The compactness of the device makes it well suited for bedside clinical applications.

  13. Design and realization of an image mosaic system on the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Wang, Peng; Zhu, Hai bin; Li, Yan; Zhang, Shao jun

    2015-08-01

    It has long been difficult in aerial photography to stitch multi-route images into a panoramic image in real time for a multi-route flight framing CCD camera, given the very large amount of data and high accuracy requirements. An automatic aerial image mosaic system based on a GPU development platform is described in this paper. Parallel computing of the SIFT feature extraction and matching algorithm module is achieved by using CUDA technology for motion model parameter estimation on the platform, which makes it possible to stitch multiple CCD images in real time. Aerial tests proved that the mosaic system meets the user's requirements with 99% accuracy and a 30- to 50-fold speed improvement over the normal mosaic system.
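    The motion-model parameter estimation at the heart of such a mosaic pipeline can be illustrated with a deliberately simplified, pure-Python toy: a translation-only model fitted by exhaustive search stands in for the CUDA-parallelized SIFT extraction and matching of the real system:

```python
def best_shift(ref, new, max_shift=8):
    """Translation-only motion model estimation: find the column overlap
    between two image strips that minimizes the sum of squared
    differences.  (The real system estimates a richer motion model from
    SIFT matches in CUDA; this toy shows only the parameter-fitting idea.)
    """
    h, w = len(ref), len(ref[0])
    max_shift = min(max_shift, w, len(new[0]))
    best = (float("inf"), 0)
    for dx in range(1, max_shift + 1):
        # Compare ref's last dx columns against new's first dx columns.
        err = sum((ref[y][w - dx + i] - new[y][i]) ** 2
                  for y in range(h) for i in range(dx))
        best = min(best, (err, dx))
    return best[1]

def stitch(ref, new, dx):
    """Append the non-overlapping part of `new` to `ref`, row by row."""
    return [r + n[dx:] for r, n in zip(ref, new)]
```

    Once the overlap is estimated, stitching is a matter of compositing the non-overlapping columns; a full mosaic repeats this pairwise along the flight route.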

  14. High performance gel imaging with a commercial single lens reflex camera

    NASA Astrophysics Data System (ADS)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  15. A Gaseous Compton Camera using a 2D-sensitive gaseous photomultiplier for Nuclear Medical Imaging

    NASA Astrophysics Data System (ADS)

    Azevedo, C. D. R.; Pereira, F. A.; Lopes, T.; Correia, P. M. M.; Silva, A. L. M.; Carramate, L. F. N. D.; Covita, D. S.; Veloso, J. F. C. A.

    2013-12-01

    A new Compton Camera (CC) concept based on a High Pressure Scintillation Chamber coupled to a position-sensitive Gaseous PhotoMultiplier for Nuclear Medical Imaging applications is proposed. The main goal of this work is to describe the development of a Ø25 × 12 cm^3 cylindrical prototype, which will be suitable for scintimammography and for small-animal imaging applications. The possibility of scaling it to a useful human-size device is also under study. The idea is to develop a device capable of competing with the standard Anger Camera. Despite the large success of the Anger Camera, it still presents some limitations, such as low position resolution and only fair energy resolution at 140 keV. The CC offers a different solution, as it provides information about the incoming photon direction, avoiding the use of a collimator, which is responsible for a huge reduction (10^-4) of the sensitivity. The main problem of CCs is related to Doppler broadening, which is responsible for the loss of angular resolution. In this work, calculations of the Doppler broadening in Xe, Ar, Ne, and their mixtures are presented. Simulations of the detector performance, together with a discussion of the gas choice, are also included.

  16. Algorithm design for automated transportation photo enforcement camera image and video quality diagnostic check modules

    NASA Astrophysics Data System (ADS)

    Raghavan, Ajay; Saha, Bhaskar

    2013-03-01

    Photo enforcement devices for traffic rules such as red lights, toll, stops, and speed limits are increasingly being deployed in cities and counties around the world to ensure smooth traffic flow and public safety. These are typically unattended fielded systems, and so it is important to periodically check them for potential image/video quality problems that might interfere with their intended functionality. There is interest in automating such checks to reduce the operational overhead and human error involved in manually checking large camera device fleets. Examples of problems affecting such camera devices include exposure issues, focus drifts, obstructions, misalignment, download errors, and motion blur. Furthermore, in some cases, in addition to the sub-algorithms for individual problems, one also has to carefully design the overall algorithm and logic to check for and accurately classify these individual problems. Some of these issues can occur in tandem or have the potential to be confused for each other by automated algorithms. Examples include camera misalignment that can cause some scene elements to go out of focus for wide-area scenes or download errors that can be misinterpreted as an obstruction. Therefore, the sequence in which the sub-algorithms are utilized is also important. This paper presents an overview of these problems along with no-reference and reduced reference image and video quality solutions to detect and classify such faults.
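    A minimal sketch of such a diagnostic chain follows (the thresholds and the choice of a Laplacian-variance sharpness score are illustrative assumptions, not the paper's algorithms). Note how the exposure check runs before the focus check, since a badly exposed frame would otherwise be misclassified as blurred, which is exactly the kind of sequencing concern the abstract raises:

```python
def mean_brightness(img):
    # img: 2D list of grayscale values in [0, 255].
    pix = [v for row in img for v in row]
    return sum(pix) / len(pix)

def laplacian_variance(img):
    """No-reference sharpness score: variance of a 4-neighbour Laplacian.
    Defocused or motion-blurred frames have weak edges, hence a low score.
    Expects images at least 3 x 3."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def diagnose(img, dark=30, bright=225, blur_threshold=1.0):
    """Chain the sub-checks in a deliberate order: exposure faults are
    ruled out before the sharpness score is interpreted."""
    b = mean_brightness(img)
    if b < dark:
        return "underexposed"
    if b > bright:
        return "overexposed"
    if laplacian_variance(img) < blur_threshold:
        return "possible focus drift or blur"
    return "ok"
```

    A fleet-monitoring job would run such a chain on periodic sample frames from each device and flag any non-"ok" result for review.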

  17. Visible Light Digital Camera --Up to 2.3MP resolution with LED lamps provides sharp images

    E-print Network

    Short, Daniel

    · Visible Light Digital Camera -- Up to 2.3 MP resolution with LED lamps provides sharp images regardless of lighting conditions
    · Fusion Picture in Picture (PIP) -- Displays a thermal image superimposed over a digital image
    · 0.1°C Thermal Sensitivity -- Provides the resolution needed to find problems

  18. Indium gallium arsenide imaging with smaller cameras, higher-resolution arrays, and greater material sensitivity

    NASA Astrophysics Data System (ADS)

    Ettenberg, Martin H.; Cohen, Marshall J.; Brubaker, Robert M.; Lange, Michael J.; O'Grady, Matthew T.; Olsen, Gregory H.

    2002-08-01

    Indium Gallium Arsenide (InGaAs) photodiode arrays have numerous commercial, industrial, and military applications. During the past 10 years, great strides have been made in the development of these devices, starting with simple 256-element linear photodiode arrays and progressing to the large 640 x 512 element area arrays now readily available. Linear arrays are offered with 512 elements on a 25 micron pitch with no defective pixels, and are used in spectroscopic monitors for wavelength division multiplexing (WDM) systems as well as in machine vision applications. A 320 x 240 solid-state array operates at room temperature, which allows development of a camera which is smaller than 25 cm^3 in volume, weighs less than 100 g, and uses less than 750 mW of power. Two-dimensional focal plane arrays and cameras have been manufactured with detectivity, D*, greater than 10^14 cm·Hz^(1/2)/W at room temperature and have demonstrated the ability to image at night. Cameras are also critical tools for the assembly and performance monitoring of optical switches and add-drop multiplexers in the telecommunications industry. These same cameras are used for the inspection of silicon wafers and fine art, laser beam profiling, and metals manufacturing. By varying the indium content, InGaAs photodiode arrays can be tailored to cover the entire short-wave infrared spectrum from 1.0 micron to 2.5 microns. InGaAs focal plane arrays and cameras sensitive to 2.0 micron wavelength light are now available in 320 x 240 formats.

  19. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
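    The iteration described above can be illustrated on a toy overdetermined system. In this single-parameter sketch the pseudo-inverse of the one-column Jacobian reduces to a closed form (the general multi-parameter case would use an SVD routine, as in the paper), and the model being fitted is invented for illustration, not the paper's camera model:

```python
import math

def gauss_newton_1p(residual, jacobian, p0, tol=1e-12, max_iter=50):
    """Newton-Raphson-style iteration for an overdetermined nonlinear
    least-squares problem with a single unknown.  For a one-column
    Jacobian J the pseudo-inverse reduces to J^T / (J^T J); the paper's
    multi-parameter case computes it by singular value decomposition.
    """
    p = float(p0)
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        jtj = sum(j * j for j in J)
        # step = pinv(J) @ r, specialized to a single column.
        step = sum(j * ri for j, ri in zip(J, r)) / jtj
        p -= step
        if abs(step) < tol:
            break
    return p

# Toy stand-in for the calibration equations: recover the rate k from
# five observations u_i = exp(k * x_i), i.e. five equations in one unknown.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
us = [math.exp(0.3 * x) for x in xs]
residual = lambda p: [math.exp(p * x) - u for x, u in zip(xs, us)]
jacobian = lambda p: [x * math.exp(p * x) for x in xs]
k = gauss_newton_1p(residual, jacobian, 0.0)
```

    As in the paper, the quality of the initial guess matters: the direct linear transform supplies that guess in the calibration setting, whereas this toy simply starts from zero.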

  20. Optimization of camera exposure durations for multi-exposure speckle imaging of the microcirculation.

    PubMed

    Kazmi, S M Shams; Balial, Satyajit; Dunn, Andrew K

    2014-07-01

    Improved Laser Speckle Contrast Imaging (LSCI) blood flow analyses that incorporate inverse models of the underlying laser-tissue interaction have been used to develop more quantitative implementations of speckle flowmetry such as Multi-Exposure Speckle Imaging (MESI). In this paper, we determine the optimal camera exposure durations required for obtaining flow information with comparable accuracy with the prevailing MESI implementation utilized in recent in vivo rodent studies. A looping leave-one-out (LOO) algorithm was used to identify exposure subsets which were analyzed for accuracy against flows obtained from analysis with the original full exposure set over 9 animals comprising n = 314 regional flow measurements. From the 15 original exposures, 6 exposures were found using the LOO process to provide comparable accuracy, defined as being no more than 10% deviant, with the original flow measurements. The optimal subset of exposures provides a basis set of camera durations for speckle flowmetry studies of the microcirculation and confers a two-fold faster acquisition rate and a 28% reduction in processing time without sacrificing accuracy. Additionally, the optimization process can be used to identify further reductions in the exposure subsets for tailoring imaging over less expansive flow distributions to enable even faster imaging. PMID:25071956
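    The leave-one-out pruning idea can be sketched independently of the speckle model. In this toy version (the greedy stopping rule and the stand-in estimator are illustrative assumptions, not the paper's MESI analysis), exposures are removed one at a time while the estimate stays within 10% of the full-set value:

```python
def loo_prune(exposures, estimate, max_dev=0.10):
    """Greedy leave-one-out pruning of an exposure set.

    exposures: list of camera exposure durations.
    estimate:  function mapping an exposure subset to a flow estimate
               (here a stand-in for the MESI model fit).
    Keeps removing the least-informative exposure while the estimate
    stays within max_dev (10%) of the full-set value.
    """
    full = estimate(exposures)
    kept = list(exposures)
    while len(kept) > 1:
        best = None
        for i in range(len(kept)):
            subset = kept[:i] + kept[i + 1:]
            dev = abs(estimate(subset) - full) / abs(full)
            if dev <= max_dev and (best is None or dev < best[0]):
                best = (dev, i)
        if best is None:      # any further removal would exceed 10%
            break
        kept.pop(best[1])
    return kept
```

    With a simple mean standing in for the model fit, redundant exposures are pruned first, and pruning stops once any further removal would push the estimate past the 10% bound, mirroring how the study shrank 15 exposures to 6.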

  1. Contactless multiple wavelength photoplethysmographic imaging: a first step toward "SpO2 camera" technology.

    PubMed

    Wieringa, F P; Mastik, F; van der Steen, A F W

    2005-08-01

    We describe a route toward contactless imaging of arterial oxygen saturation (SpO2) distribution within tissue, based upon detection of a two-dimensional matrix of spatially resolved optical plethysmographic signals at different wavelengths. As a first step toward SpO2 imaging we built a monochrome CMOS camera with an apochromatic lens and a 3λ LED ring light (λ1 = 660 nm, λ2 = 810 nm, λ3 = 940 nm; 100 LEDs per wavelength). We acquired movies at the three wavelengths while simultaneously recording ECG and respiration for seven volunteers. We repeated this experiment for one volunteer at an increased frame rate, additionally recording the pulse wave of a pulse oximeter. Movies were processed by dividing each image frame into discrete Regions of Interest (ROIs), averaging 10 × 10 raw pixels each. For each ROI, pulsatile variation over time was assigned to a matrix of ROI-pixel time traces with individual Fourier spectra. Photoplethysmograms correlated well with respiration reference traces at all three wavelengths. Increased frame rates revealed weaker pulsations (main frequency components 0.95 and 1.9 Hz) superimposed upon the respiration-correlated photoplethysmograms, which were heartbeat-related at all three wavelengths. We acquired spatially resolved heartbeat-related photoplethysmograms at multiple wavelengths using a remote camera. This feasibility study shows potential for non-contact 2-D imaging reflection-mode pulse oximetry. Clinical devices, however, require further development. PMID:16133912
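
    The ROI processing described — averaging 10 × 10 pixel blocks and taking a per-ROI Fourier spectrum of the time trace — can be sketched as follows. Function names and the synthetic frame rate are illustrative, not the paper's implementation:

```python
import numpy as np

def roi_means(frames, block=10):
    # frames: (t, H, W) -> (t, H//block, W//block) by averaging each block
    t, H, W = frames.shape
    H2, W2 = H // block, W // block
    f = frames[:, :H2 * block, :W2 * block]
    return f.reshape(t, H2, block, W2, block).mean(axis=(2, 4))

def dominant_freq(traces, fs):
    # traces: (t, h, w); return the dominant frequency (Hz) per ROI
    t = traces.shape[0]
    spec = np.abs(np.fft.rfft(traces - traces.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(t, d=1 / fs)
    idx = spec[1:].argmax(axis=0) + 1  # skip the DC bin
    return freqs[idx]
```

    On a synthetic stack with a 1 Hz pulsatile component, every ROI's spectrum peaks at 1 Hz, mirroring the heartbeat-related components reported above.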

  2. Loop closure detection by algorithmic information theory: implemented on range and camera image data.

    PubMed

    Ravari, Alireza Norouzzadeh; Taghirad, Hamid D

    2014-10-01

    In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image acquired as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurements' representations. Although loop closure detection amounts to a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods. PMID:24968363
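
    The normalized compression distance (NCD) that approximates the Kolmogorov-complexity metric has a compact standard form: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length. A minimal sketch with zlib as the compressor (the paper's choice of compressor and its serialization of the sparse image models are not specified here):

```python
import zlib

def C(x: bytes) -> int:
    # Compressed length as a computable stand-in for Kolmogorov complexity
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: ~0 for near-identical inputs,
    # ~1 for unrelated inputs
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    Loop closure candidates whose NCD to a stored observation falls below a threshold would be flagged as revisited places; the rest are rejected.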

  3. Full 3-D cluster-based iterative image reconstruction tool for a small animal PET camera

    NASA Astrophysics Data System (ADS)

    Valastyán, I.; Imrek, J.; Molnár, J.; Novák, D.; Balkay, L.; Emri, M.; Trón, L.; Bükki, T.; Kerek, A.

    2007-02-01

    Iterative reconstruction methods are commonly used to obtain images with high resolution and good signal-to-noise ratio in nuclear imaging. The aim of this work was to develop a scalable, fast, cluster-based, fully 3-D iterative image reconstruction package for our small animal PET camera, the miniPET. The reconstruction package is developed to determine the 3-D radioactivity distribution from list-mode data sets, and it can also simulate noise-free projections of digital phantoms. We separated the system matrix generation from the fully 3-D iterative reconstruction process. As the detector geometry is fixed for a given camera, the system matrix describing this geometry is calculated only once and used for every image reconstruction, making the process much faster. The Poisson and random noise sensitivity of the ML-EM iterative algorithm were studied for our small animal PET system with the help of the simulation and reconstruction tool. The reconstruction tool has also been tested with data collected by the miniPET from line- and cylinder-shaped phantoms, as well as from a rat.
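
    The ML-EM algorithm studied above has a standard multiplicative update: x ← x · (Aᵀ(y / Ax)) / (Aᵀ1). A minimal dense-matrix sketch (real PET system matrices are sparse and list-mode data much larger; all names here are illustrative):

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    # A: system matrix (n_bins, n_voxels); counts: measured projection data
    x = np.ones(A.shape[1])          # uniform initial estimate
    sens = A.sum(axis=0)             # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        ratio = np.where(proj > 0, counts / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

    Because the update is multiplicative, the estimate stays nonnegative at every iteration, which is one reason ML-EM is favored for Poisson-distributed count data.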

  4. Modeling of three-dimensional camera imaging in a tokamak torus

    SciTech Connect

    Edmonds, P.H.; Medley, S.S.

    1997-01-01

    A procedure is described for precision modeling of the views for imaging diagnostics monitoring tokamak internal components, particularly high heat flux divertor components. Such modeling is required to enable predictions of resolution and viewing angle for the available viewing locations. Since oblique views are typically expected for tokamak divertors, fully three-dimensional (3D) perspective imaging is required. A suite of matched 3D CAD, graphics and animation applications are used to provide a fast and flexible technique for reproducing these views. An analytic calculation of the resolution and viewing incidence angle is developed to validate the results of the modeling procedures. The tokamak physics experiment (TPX) diagnostics for infrared viewing are used as an example to demonstrate the implementation of the tools. As is generally the case in tokamak experiments, the available diagnostic locations for TPX are severely constrained by access limitations and the resulting images can be marginal in both resolution and viewing incidence angle. The procedures described here provide a complete design tool for in-vessel viewing, both for camera location and for identification of viewed surfaces. Additionally, these same tools can be used for the interpretation of the actual images obtained by the diagnostic cameras. © 1997 American Institute of Physics.

  5. Megapixel imaging camera for expanded H{sup {minus}} beam measurements

    SciTech Connect

    Simmons, J.E.; Lillberg, J.W.; McKee, R.J.; Slice, R.W.; Torrez, J.H.; McCurnin, T.W.; Sanchez, P.G.

    1994-02-01

    A charge coupled device (CCD) imaging camera system has been developed as part of the Ground Test Accelerator project at the Los Alamos National Laboratory to measure the properties of a large diameter, neutral particle beam. The camera is designed to operate in the accelerator vacuum system for extended periods of time. It would normally be cooled to reduce dark current. The CCD contains 1024 × 1024 pixels with a pixel size of 19 × 19 μm² and with four-phase parallel clocking and two-phase serial clocking. The serial clock rate is 2.5×10⁵ pixels per second. Clock sequence and timing are controlled by an external logic-word generator. The DC bias voltages are likewise located externally. The camera contains circuitry to generate the analog clocks for the CCD and also contains the output video signal amplifier. Reset switching noise is removed by an external signal processor that employs delay elements to provide noise suppression by the method of double-correlated sampling. The video signal is digitized to 12 bits in an analog to digital converter (ADC) module controlled by a central processor module. Both modules are located in a VME-type computer crate that communicates via ethernet with a separate workstation where overall control is exercised and image processing occurs. Under cooled conditions the camera shows good linearity with a dynamic range of 2000 and with dark noise fluctuations of about ±1/2 ADC count. Full well capacity is about 5×10⁵ electron charges.

  6. Superresolution imaging system on innovative localization microscopy technique with commonly using dyes and CMOS camera

    NASA Astrophysics Data System (ADS)

    Dudenkova, V.; Zakharov, Yu.

    2015-05-01

    Optical methods for studying biological tissue and cells at the micro- and nanoscale now step beyond the diffraction limit; it is single-molecule localization techniques that achieve the highest spatial resolution. One of those techniques, called bleaching/blinking assisted localization microscopy (BaLM), relies on the intrinsic bleaching and blinking behavior characteristic of commonly used fluorescent probes. This feature is the basis of BaLM image series acquisition and data analysis. In our work, the blinking of a single fluorescent spot against a background of others is revealed by subtracting successive frames of the time series. Digital estimation then gives the center of the spot as the position of the fluorescent molecule, which is transferred to another image at higher resolution according to the accuracy of the center localization, building up part of an image with improved resolution. This approach tolerates overlapping fluorophores and does not require single-photon sensitivity, so we use an 8.8-megapixel CMOS camera with the smallest (1.55 μm) pixel size. This instrumentation, on the base of a Zeiss Axioscope 2 FS MOT, allows image transmission from the object plane to the sensor at a scale of less than 100 nm/pixel using a 20x objective, giving the same resolution and 5 times more field of view as compared to an EMCCD camera with 6 μm pixel size. To optimize the excitation light power, frame rate, and gain of the camera we have made appropriate estimations taking into account the behavior of the fluorophores and the equipment characteristics. Finally, we obtain clearly distinguishable details of the sample in the processed field of view.
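
    The frame-subtraction and centroid steps described — isolating a blinking spot by differencing successive frames, then localizing it with sub-pixel accuracy — can be sketched as below. The threshold and spot parameters are illustrative, not the authors' values:

```python
import numpy as np

def localize_blink(prev, curr, thresh):
    # Differencing successive frames isolates a newly blinking spot
    diff = curr.astype(float) - prev.astype(float)
    diff[diff < thresh] = 0.0
    if diff.sum() == 0:
        return None  # no blink detected between these frames
    ys, xs = np.indices(diff.shape)
    # Intensity-weighted centroid gives a sub-pixel molecule position
    cy = (ys * diff).sum() / diff.sum()
    cx = (xs * diff).sum() / diff.sum()
    return cy, cx
```

    Each localized center would then be accumulated into a finer-grid image, which is how the super-resolved result is assembled.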

  7. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging.

    PubMed

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S; Gould, Robert G

    2011-10-10

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and ranged from -40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a (99m)Tc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Comparison of image quality was made at -25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of (99m)Tc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C degraded in comparison to that at -25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that does not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  8. Quantigraphic Imaging: Estimating the camera response and exposures from differently exposed images

    E-print Network

    Mann, Richard

    naturally whenever a video camera having automatic exposure captures multiple frames of video with the same subject matter appearing in regions of overlap between at least some of the successive video out (overexposed). However, the inscriptions on the wall (names of soldiers killed in the war) now

  9. Noise modeling and estimation in image sequences from thermal infrared cameras

    NASA Astrophysics Data System (ADS)

    Alparone, Luciano; Corsini, Giovanni; Diani, Marco

    2004-11-01

    In this paper we present an automated procedure devised to measure noise variance and correlation from a sequence, either temporal or spectral, of digitized images acquired by an incoherent imaging detector. The fundamental assumption is that the noise is signal-independent and stationary in each frame, but may be non-stationary across the sequence of frames. The idea is to detect areas within bivariate scatterplots of local statistics corresponding to statistically homogeneous pixels. After that, the noise PDF, modeled as a parametric generalized Gaussian function, is estimated from the homogeneous pixels. Results obtained by applying the noise model to images taken by an IR camera operated in different environmental conditions are presented and discussed. They demonstrate that the noise is heavy-tailed (tails longer than those of a Gaussian PDF) and spatially autocorrelated. Temporal correlation has been investigated as well and found to depend on the frame rate and, to a small extent, on the wavelength of the thermal radiation.
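
    One standard way to fit the shape parameter of a parametric generalized Gaussian to samples from the homogeneous pixels is moment matching on the ratio E|x|/σ, which is monotonic in the shape p (p = 2 is Gaussian, p = 1 is Laplacian; p < 2 gives the heavy tails reported above). This is a sketch of that generic estimator, not necessarily the paper's exact procedure:

```python
import numpy as np
from math import gamma

def gg_ratio(p):
    # E|x| / sigma for a zero-mean generalized Gaussian with shape p
    return gamma(2 / p) / np.sqrt(gamma(1 / p) * gamma(3 / p))

def estimate_shape(samples):
    # Match the sample mean-absolute-deviation-to-std ratio against the
    # theoretical curve over a grid of shape values
    s = samples - samples.mean()
    r = np.mean(np.abs(s)) / s.std()
    grid = np.linspace(0.3, 4.0, 371)
    ratios = np.array([gg_ratio(p) for p in grid])
    return grid[np.argmin(np.abs(ratios - r))]
```

    An estimated shape well below 2 on real IR frames would quantify the heavy-tailed behavior the authors observe.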

  10. Toward Real-time quantum imaging with a single pixel camera

    SciTech Connect

    Lawrie, Benjamin J; Pooser, Raphael C

    2013-01-01

    We present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively transmit macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. In low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  11. Airborne imaging for heritage documentation using the Fotokite tethered flying camera

    NASA Astrophysics Data System (ADS)

    Verhoeven, Geert; Lupashin, Sergei; Briese, Christian; Doneus, Michael

    2014-05-01

    Since the beginning of aerial photography, researchers have used all kinds of devices (from pigeons, kites, poles, and balloons to rockets) to take still cameras aloft and remotely gather aerial imagery. To date, many of these unmanned devices are still used for what has been referred to as Low-Altitude Aerial Photography or LAAP. In addition to these more traditional camera platforms, radio-controlled (multi-)copter platforms have recently added a new aspect to LAAP. Although model airplanes have been around for several decades, the decreasing cost and increasing functionality and stability of ready-to-fly multi-copter systems have proliferated their use among non-hobbyists. As such, they have become a very popular tool for aerial imaging. The overwhelming number of currently available brands and types (heli-, dual-, tri-, quad-, hexa-, octo-, dodeca-, deca-hexa and deca-octocopters), together with the wide variety of navigation options (e.g. altitude and position hold, waypoint flight) and camera mounts, indicates that these platforms are here to stay for some time. Given the multitude of still camera types and the image quality they are currently capable of, endless combinations of low- and high-cost LAAP solutions are available. In addition, LAAP allows for the exploitation of new imaging techniques, as it is often only a matter of lifting the appropriate device (e.g. video cameras, thermal frame imagers, hyperspectral line sensors). Archaeologists were among the first to adopt this technology, as it provided them with a means to easily acquire essential data from a unique point of view, whether for simple illustration purposes of standing historic structures or to compute three-dimensional (3D) models and orthophotographs of excavation areas. However, even very cheap multi-copter models require certain skills to pilot them safely. Additionally, malfunction or overconfidence might lift these devices to altitudes where they can interfere with manned aircraft.
As such, the safe operation of these devices is still an issue, certainly when flying over locations which can be crowded (such as students on excavations or tourists walking around historic places). As the future of UAS regulation remains unclear, this talk presents an alternative approach to aerial imaging: the Fotokite. Developed at ETH Zürich, the Fotokite is a tethered flying camera that is essentially a multi-copter connected to the ground with a taut tether to achieve controlled flight. Crucially, it relies solely on onboard IMU (Inertial Measurement Unit) measurements to fly, launches in seconds, and is not classified as a UAS (Unmanned Aerial System), e.g. in the latest FAA (Federal Aviation Administration) UAS proposal. As a result it may be used for imaging cultural heritage in a variety of environments and settings with minimal training by non-experienced pilots. Furthermore, it is subject to less extensive certification, regulation, and import/export restrictions, making it a viable solution for use at a greater range of sites than traditional methods. Unlike a balloon or a kite it is not subject to particular weather conditions and, thanks to active stabilization, is capable of a variety of intelligent flight modes. Finally, it is compact and lightweight, making it easy to transport and deploy, and its lack of reliance on GNSS (Global Navigation Satellite System) makes it possible to use in urban, overbuilt areas. After outlining its operating principles, the talk will present some archaeological case studies in which the Fotokite was used, thereby assessing its capabilities compared to the conventional UASs on the market.

  12. Static laser speckle contrast analysis for noninvasive burn diagnosis using a camera-phone imager.

    PubMed

    Ragol, Sigal; Remer, Itay; Shoham, Yaron; Hazan, Sivan; Willenz, Udi; Sinelnikov, Igor; Dronov, Vladimir; Rosenberg, Lior; Bilenca, Alberto

    2015-08-01

    Laser speckle contrast analysis (LASCA) is an established optical technique for accurate widefield visualization of relative blood perfusion when no or minimal scattering from static tissue elements is present, as demonstrated, for example, in LASCA imaging of the exposed cortex. However, when LASCA is applied to diagnosis of burn wounds, light is backscattered from both moving blood and static burn scatterers, and thus the spatial speckle contrast includes both perfusion and nonperfusion components and cannot be straightforwardly associated to blood flow. We extract from speckle contrast images of burn wounds the nonperfusion (static) component and discover that it conveys useful information on the ratio of static-to-dynamic scattering composition of the wound, enabling identification of burns of different depth in a porcine model in vivo within the first 48 h postburn. Our findings suggest that relative changes in the static-to-dynamic scattering composition of burns can dominate relative changes in blood flow for burns of different severity. Unlike conventional LASCA systems that employ scientific or industrial-grade cameras, our LASCA system is realized here using a camera phone, showing the potential to enable LASCA-based burn diagnosis with a simple imager. PMID:26271055
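
    The underlying LASCA quantity is the local speckle contrast K = σ/μ of the intensity, which drops as motion blurs the speckle pattern. A minimal per-block sketch (LASCA proper typically uses a sliding window; the window size and synthetic data here are illustrative):

```python
import numpy as np

def speckle_contrast(img, win=7):
    # Local speckle contrast K = sigma / mean, computed over
    # non-overlapping win x win blocks for brevity
    img = img.astype(float)
    H, W = img.shape
    h, w = H // win, W // win
    b = img[:h * win, :w * win].reshape(h, win, w, win).swapaxes(1, 2)
    mean = b.mean(axis=(2, 3))
    std = b.std(axis=(2, 3))
    return std / np.maximum(mean, 1e-12)
```

    Fully developed static speckle has K near 1; averaging over scatterer motion during the exposure lowers K, which is the contrast-to-flow link that the static (nonperfusion) component complicates in burn tissue.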

  13. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle avoidance stages are essential for CE-3's safe soft landing, so a precise spacecraft trajectory in these stages is of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last just 25 s, it is difficult to capture the spacecraft's subtle movement with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3, thanks to technical advantages such as independence from a lunar-gravity-field spacecraft kinetic model, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that CE-3's subtle movements during the hovering stage can be revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.

  14. Development of wide-field, multi-imaging x-ray streak camera technique with increased image-sampling arrays

    NASA Astrophysics Data System (ADS)

    Heya, M.; Fujioka, S.; Shiraga, H.; Miyanaga, N.; Yamanaka, T.

    2001-01-01

    In order to enlarge the field of view of a multi-imaging x-ray streak (MIXS) camera technique [H. Shiraga et al., Rev. Sci. Instrum. 66, 722 (1995)], which provides two-dimensionally space-resolved x-ray imaging with a high temporal resolution of ~10 ps, we have proposed and designed a wide-field MIXS (W-MIXS) by increasing the number of image-sampling arrays. In this method, multiple cathode slits were used on the photocathode of an x-ray streak camera. The field of view of the W-MIXS can be enlarged up to 150-200 μm instead of ~70 μm for a typical MIXS with a spatial resolution of ~15 μm. A proof-of-principle experiment with the W-MIXS was carried out at the Gekko-XII laser system. A cross-wire target was irradiated by four beams of the Gekko-XII laser. The data streaked with the W-MIXS system were reconstructed as a series of time-resolved, two-dimensional x-ray images. The W-MIXS system has been established as an improved two-dimensionally space-resolved and sequentially time-resolved technique.

  15. Intensified array camera imaging of solid surface combustion aboard the NASA Learjet

    NASA Technical Reports Server (NTRS)

    Weiland, Karen J.

    1992-01-01

    An intensified array camera was used to image weakly luminous flames spreading over thermally thin paper samples in a low gravity environment aboard the NASA-Lewis Learjet. The aircraft offers 10 to 20 sec of reduced gravity during execution of a Keplerian trajectory and allows the use of instrumentation that is delicate or requires higher electrical power than is available in drop towers. The intensified array camera is a charge intensified device type that responds to light between 400 and 900 nm and has a minimum sensitivity of 10⁶ footcandles. The paper sample, either ashless filter paper or a lab wiper, burns inside a sealed chamber which is filled with 21, 18, or 15% oxygen in nitrogen at one atmosphere. The camera views the edge of the paper and its output is recorded on videotape. Flame positions are measured every 0.1 sec to calculate flame spread rates. Comparisons with drop tower data indicate that the flame shapes and spread rates are affected by the residual g level in the aircraft.

  16. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with the existing methods such as the Good Features detector and the Speed-up Robust Feature (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing a camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), Naive Bayesian and Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142
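
    Of the classifiers tested above, the K-Nearest Neighbor vote on motion-statistics feature vectors is the simplest to sketch. The two-dimensional features, labels, and parameters below are illustrative stand-ins for the paper's per-frame motion features:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    # Euclidean K-Nearest-Neighbor majority vote on feature vectors
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```

    With well-designed motion features the classes separate cleanly, which is consistent with the paper's finding that several different classifiers perform similarly on the same inputs.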

  18. New Mars Camera's First Image of Mars from Mapping Orbit (Full Frame)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The high resolution camera on NASA's Mars Reconnaissance Orbiter captured its first image of Mars in the mapping orbit, demonstrating the full resolution capability, on Sept. 29, 2006. The High Resolution Imaging Science Experiment (HiRISE) acquired this first image at 8:16 AM (Pacific Time). With the spacecraft at an altitude of 280 kilometers (174 miles), the image scale is 25 centimeters per pixel (10 inches per pixel). If a person were located on this part of Mars, he or she would just barely be visible in this image.

    The image covers a small portion of the floor of Ius Chasma, one branch of the giant Valles Marineris system of canyons. The image illustrates a variety of processes that have shaped the Martian surface. There are bedrock exposures of layered materials, which could be sedimentary rocks deposited in water or from the air. Some of the bedrock has been faulted and folded, perhaps the result of large-scale forces in the crust or from a giant landslide. The image resolves rocks as small as 90 centimeters (3 feet) in diameter. It includes many dunes or ridges of windblown sand.

    This image (TRA_000823_1720) was taken by the High Resolution Imaging Science Experiment camera onboard the Mars Reconnaissance Orbiter spacecraft on Sept. 29, 2006. Shown here is the full image, centered at minus 7.8 degrees latitude, 279.5 degrees east longitude. The image is oriented such that north is to the top. The range to the target site was 297 kilometers (185.6 miles). At this distance the image scale is 25 centimeters (10 inches) per pixel (with one-by-one binning) so objects about 75 centimeters (30 inches) across are resolved. The image was taken at a local Mars time of 3:30 PM and the scene is illuminated from the west with a solar incidence angle of 59.7 degrees, thus the sun was about 30.3 degrees above the horizon. The season on Mars is northern winter, southern summer.

    [Photojournal note: Due to the large sizes of the high-resolution TIFF and JPEG files, some systems may experience extremely slow downlink time while viewing or downloading these images; some systems may be incapable of handling the download entirely.]

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The HiRISE camera was built by Ball Aerospace & Technologies Corporation, Boulder, Colo., and is operated by the University of Arizona, Tucson.

  19. 2000 Earth days around Venus: imaging with Venus Monitoring Camera on Venus Express

    NASA Astrophysics Data System (ADS)

    Markiewicz, W. J.; Titov, D.; Ignatiev, N.; Petrova, E.; Khatunatsev, I.; Limaye, S.; Shalygina, O.; Patsaeva, M.; Almeida, M.

    2011-10-01

    By the time of this meeting the Venus Express spacecraft (VEX) should have completed more than 2000 24-hour orbits around Venus. The Venus Monitoring Camera (VMC) on board has been observing the upper cloud layer in four filters in the visible spectral range. On average VMC takes nearly two hundred images per day. VEX has a highly elliptical orbit allowing for global as well as close-up views with resolution down to 200 meters per pixel. We will review some of the highlights of the results obtained from this enormous data set.

  20. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
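
    The affine-transformation step of the coordinate matching between consecutive descent frames can be sketched as a least-squares fit of dst ≈ A·src + t from point correspondences. The function name and synthetic correspondences are illustrative, not the project's actual implementation:

```python
import numpy as np

def fit_affine(src, dst):
    # Stack each correspondence as two linear equations in the six
    # affine parameters (a11, a12, a21, a22, tx, ty)
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return A, t
```

    Registering frame N+1 onto frame N with such a fit keeps terrain features in a common coordinate system as the spacecraft descends; a full pipeline would add the projective component mentioned above.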

  1. Planetary Camera imaging of the counter-rotating core galaxy NGC 4365

    NASA Technical Reports Server (NTRS)

    Forbes, Duncan A.

    1994-01-01

    We analyze F555W (V-band) Planetary Camera images of NGC 4365, for which ground-based spectroscopy has revealed a misaligned, counter-rotating core. Line-profile analysis by Surma indicates that the counter-rotating component has a disk structure. After deconvolution and galaxy modeling, we find photometric evidence at small radii to support this claim. There is no indication of a central point source or dust lane. The surface brightness profile reveals a steep outer profile and a shallow, but not flat, inner profile, with the inflection radius occurring at 1.8 arcsec. The inner profile is consistent with a cusp.

  2. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of such instruments allows the determination of cloud occurrence and, for a paired system, cloud base heights. In this study, a fabricated paired sky imaging system, consisting of two commercial digital cameras (Canon PowerShot A2300) enclosed in weatherproof containers, was developed at the Manila Observatory for the purpose of determining cloud base heights in the Manila Observatory area. One camera is placed on the rooftop of the Manila Observatory and the other on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to gather pictures simultaneously every 5 min. Continuous operation of the cameras has been in place since the end of May 2014, although data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of a merit function that measures the overlap of the two pictures: when the two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions where the pictures have the best overlap. Pictures of overcast sky proved difficult to process for cloud base height and were excluded. The figure below shows the initial results of the hourly average of cloud base heights from data collected between November 2013 and July 2014. Measured cloud base heights ranged from 250 m to 1.5 km; these are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection at these times, while the strengthening of atmospheric convection can be deduced from the higher cloud base heights in the afternoon. 
The decrease of cloud base heights after 15:00 follows the trend of decreasing solar energy in the atmosphere after this time. The results show the potential of these instruments to determine cloud base heights over prolonged time intervals. The instruments are operated continuously to capture the seasonal variation of cloud base heights in this part of the world and to add to the much-needed dataset for future climate studies at the Manila Observatory.
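    The merit-function overlap search can be sketched as below. The sum-of-squared-differences merit and the near-zenith small-angle height formula (with an assumed per-pixel angular scale `rad_per_px`) are illustrative simplifications of the full whole-sky geometry in Kassianov et al. (2005):

```python
import numpy as np

def best_column_shift(img_a, img_b, max_shift):
    """Find the column shift minimizing a sum-of-squared-differences merit
    function over the overlap of two simultaneous sky images."""
    merits = []
    for s in range(max_shift + 1):
        overlap_a = img_a[:, s:]
        overlap_b = img_b[:, :img_b.shape[1] - s]
        merits.append(np.mean((overlap_a - overlap_b) ** 2))
    return int(np.argmin(merits))

def cloud_base_height_m(shift_px, baseline_m, rad_per_px):
    # Near-zenith small-angle estimate: parallax angle ~ shift * rad_per_px,
    # so height ~ baseline / parallax (assumed simplification).
    return baseline_m / (shift_px * rad_per_px)

# Synthetic check: img_b is img_a displaced by 8 columns.
rng = np.random.default_rng(1)
scene = rng.random((120, 200))
img_a, img_b = scene[:, :180], scene[:, 8:188]
shift = best_column_shift(img_a, img_b, max_shift=20)
height = cloud_base_height_m(shift, baseline_m=489.0, rad_per_px=0.001)
```

The recovered shift, together with the 489 m baseline between the two rooftops, is what converts the image pair into a height estimate.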

  3. System for photometric calibration of optoelectronic imaging devices especially streak cameras

    DOEpatents

    Boni, Robert; Jaanimagi, Paul

    2003-11-04

    A system for the photometric calibration of streak cameras and similar imaging devices provides a precise knowledge of the camera's flat-field response as well as a mapping of the geometric distortions. The system provides the flat-field response, representing the spatial variations in the sensitivity of the recorded output, with a signal-to-noise ratio (SNR) greater than can be achieved in a single submicrosecond streak record. The measurement of the flat-field response is carried out by illuminating the input slit of the streak camera with a signal that is uniform in space and constant in time. This signal is generated by passing a continuous-wave source through an optical homogenizer made up of a light pipe or pipes in which the illumination typically makes several bounces before exiting as a spatially uniform source field. The rectangular cross-section of the homogenizer is matched to the usable photocathode area of the streak tube. The flat-field data set is obtained by using a slow streak ramp that may have a period from one millisecond (ms) to ten seconds (s), but is nominally one second in duration. The system also provides a mapping of the geometric distortions by spatially and temporally modulating the output of the homogenizer and obtaining a data set using the slow streak ramps. All data sets are acquired using a CCD camera and stored on a computer, which is used to calculate all relevant corrections to the signal data sets. The signal and flat-field data sets are both corrected for geometric distortions prior to applying the flat-field correction. Absolute photometric calibration is obtained by measuring the output fluence of the homogenizer with a "standard-traceable" meter and relating that to the CCD pixel values for a self-corrected flat-field data set.
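    The final flat-field correction step amounts to dividing the (geometry-corrected) signal record by the normalized flat-field response. A minimal sketch, with a synthetic sensitivity map standing in for the measured flat field:

```python
import numpy as np

def flat_field_correct(signal, flat):
    """Divide a geometry-corrected signal record by the normalized
    flat-field response to remove pixel-to-pixel sensitivity variations."""
    flat_norm = flat / flat.mean()
    return signal / flat_norm

# A uniform scene viewed through a non-uniform sensitivity map: the raw
# record varies pixel to pixel, the corrected record is flat again.
rng = np.random.default_rng(0)
sensitivity = 1.0 + 0.05 * rng.standard_normal((64, 64))
true_scene = np.full((64, 64), 100.0)
raw = true_scene * sensitivity
recovered = flat_field_correct(raw, flat=sensitivity)
```

Normalizing the flat to unit mean preserves the overall signal level, so absolute photometry can still be tied to the fluence meter measurement described above.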

  4. SIMULTANEOUS DUAL-RADIONUCLIDE MYOCARDIAL PERFUSION IMAGING WITH A SOLID-STATE DEDICATED CARDIAC CAMERA

    PubMed Central

    Ben-Haim, S.; Kacperski, K.; Hain, S.; Van Gramberg, D.; Hutton, B.F.; Waddington, W.A.; Sharir, T.; Roth, N.; Berman, D.S.; Ell, P.J.

    2011-01-01

    We compared simultaneous dual-radionuclide stress and rest myocardial perfusion imaging (MPI) using a novel solid-state cardiac camera with a conventional SPECT camera using separate stress and rest acquisitions. Methods: 24 consecutive patients (64.5 ± 11.8 years, 16 men) were injected with 74 MBq of 201Tl (rest) and 250 MBq of 99mTc-MIBI (stress). Conventional MPI acquisition times for stress and rest were 21 min and 16 min, respectively. A simultaneous dual-radionuclide (DR) 15-minute list-mode gated acquisition was performed on D-SPECT (Spectrum Dynamics, Caesarea, Israel). The DR D-SPECT data were processed using a spillover and scatter correction method. We compared DR D-SPECT images with conventional SPECT images by visual analysis employing the 17-segment model and a 5-point scale (0=normal, 4=absent) to calculate the summed stress and rest scores (SSS and SRS, respectively) and the % visual perfusion defect (TPD) at stress and rest, obtained by dividing the stress and rest scores, respectively, by 68 and multiplying by 100. TPD <5% was considered normal. Image quality was assessed on a 4-point scale (1=poor, 4=very good) and gut activity on a 4-point scale (0=none, 3=high). Results: Conventional MPI was abnormal at stress in 17 patients and at rest in 9 patients. In the 17 abnormal stress studies, DR D-SPECT MPI identified 113 abnormal segments vs. 93 by conventional MPI. In the 9 abnormal rest studies, DR D-SPECT identified 45 abnormal segments vs. 48 by conventional MPI. SSS, SRS, stress TPD, and rest TPD on conventional SPECT and DR D-SPECT were highly correlated (r=0.9790, 0.9694, 0.9784, and 0.9710, respectively; p<0.0001 for all). In addition, 6 patients had significantly larger perfusion defects on DR D-SPECT stress images, including five of 11 patients who were imaged earlier on D-SPECT than on conventional SPECT. 
Conclusion: D-SPECT enables fast, high-quality simultaneous DR MPI in a single imaging session with diagnostic performance and image quality comparable to conventional SPECT. Modifications of the injected doses and of the imaging protocol with DR D-SPECT may enable shorter imaging times, reduced radiation exposure, and a significantly shorter patient stay in the department. PMID:20383705
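    The scoring arithmetic stated in the abstract (summed score over 68, times 100, with <5% considered normal) can be written out directly:

```python
def summed_score(segment_scores):
    """Sum of the 17-segment visual scores (0 = normal ... 4 = absent)."""
    assert len(segment_scores) == 17
    return sum(segment_scores)

def tpd_percent(segment_scores):
    # As described: divide the summed score by 68 (= 17 segments x max
    # score 4) and multiply by 100; TPD < 5% is considered normal.
    return summed_score(segment_scores) / 68 * 100
```

Note that a single segment scored 4 already yields a TPD of about 5.9%, i.e. just above the 5% normality threshold.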

  5. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  6. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2-megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices; these microprocessors are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor. This in turn runs a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to remotely visualize and configure system operation and captured and/or processed images. PMID:22163739

  7. Reconstruction of refocusing and all-in-focus images based on forward simulation model of plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhang, Rumin; Liu, Peng; Liu, Dijun; Su, Guobin

    2015-12-01

    In this paper, we establish a forward simulation model of a plenoptic camera, which is implemented by inserting a micro-lens array into a conventional camera. The simulation model emulates how objects at different depths are imaged by the main lens, remapped by the micro-lenses, and finally captured on the 2D sensor. We can easily modify the parameters of the simulation model, such as the focal lengths and diameters of the main lens and micro-lenses and the number of micro-lenses. Employing spatial integration, refocused images and all-in-focus images are rendered from the plenoptic images produced by the model. The forward simulation model can be used to determine the trade-offs between different configurations and to test new research related to plenoptic cameras without the need for a prototype.
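    The spatial-integration rendering step is commonly realized as shift-and-sum over the angular (sub-aperture) views of the light field; the sketch below uses that standard formulation, not the paper's specific implementation:

```python
import numpy as np

def refocus(subapertures, slope):
    """Render a refocused image by shift-and-sum: shift each sub-aperture
    view in proportion to its angular index, then average.  Varying
    `slope` moves the synthetic focal plane in depth."""
    n = len(subapertures)
    acc = np.zeros_like(subapertures[0], dtype=float)
    for u, view in enumerate(subapertures):
        shift = int(round(slope * (u - n // 2)))
        acc += np.roll(view, shift, axis=1)
    return acc / n

# Synthetic light field: a point feature with a disparity of 1 column
# per angular view.  Refocusing with slope = -1 re-aligns the feature.
n_views, width = 5, 32
views = []
for u in range(n_views):
    v = np.zeros((1, width))
    v[0, 10 + u] = 1.0
    views.append(v)
in_focus = refocus(views, slope=-1.0)
out_focus = refocus(views, slope=0.0)
```

When the slope matches a feature's disparity, the shifted copies pile up (sharp peak); otherwise the energy is spread, which is exactly the refocusing behavior the simulation model is used to study.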

  8. 1024 × 1024 HgCdTe CMOS Camera for Infrared Imaging Magnetograph of Big Bear Solar Observatory

    E-print Network

    1024 × 1024 HgCdTe CMOS Camera for Infrared Imaging Magnetograph of Big Bear Solar Observatory. Solar-Terrestrial Research, 323 Martin Luther King Blvd., Newark, NJ 07102; Big Bear Solar Observatory, 40386 North Shore Lane, Big Bear City, CA 92314. ABSTRACT: The InfraRed Imaging Magnetograph (IRIM) is a two

  9. Image Set-based Hand Shape Recognition Using Camera Selection Driven by Multi-class AdaBoosting

    E-print Network

    Fukui, Kazuhiro

    Image Set-based Hand Shape Recognition Using Camera Selection Driven by Multi-class AdaBoosting. We propose a method for image set-based hand shape recognition that uses the multi-class AdaBoost framework. The recognition of hand shape is a difficult problem, as a hand's appearance depends greatly

  10. Global stratigraphy of the dwarf planet Ceres from RC2 imaging data of the Dawn FC camera

    NASA Astrophysics Data System (ADS)

    Wagner, R. J.; Schmedemann, N.; Kneissl, T.; Stephan, K.; Otto, K.; Krohn, K.; Schröder, S.; Kersten, E.; Roatsch, T.; Jaumann, R.; Williams, D. A.; Yingst, R. A.; Crown, D.; Mest, S. C.; Russell, C. T.

    2015-10-01

    On March 6, 2015, the Dawn spacecraft was captured into orbit around Ceres. During the approach phase, which began on Dec. 1, 2014, imaging data returned by the framing camera (FC) increased in spatial resolution, exceeding that of the Hubble Space Telescope. In this paper, we use these first images to identify and map global geologic units and to establish a stratigraphic sequence.

  11. A camera for imaging hard x-rays from suprathermal electrons during lower hybrid current drive on PBX-M

    SciTech Connect

    von Goeler, S.; Kaita, R.; Bernabei, S.; Davis, W.; Fishman, H.; Gettelfinger, G.; Ignat, D.; Roney, P.; Stevens, J.; Stodiek, W.; Jones, S.; Paoletti, F.; Petravich, G.; Rimini, F.

    1993-05-01

    During lower hybrid current drive (LHCD), suprathermal electrons are generated that emit hard X-ray bremsstrahlung. A pinhole camera has been installed on the PBX-M tokamak that records 128 × 128 pixel images of the bremsstrahlung with a 3 ms time resolution. This camera has identified hollow radiation profiles on PBX-M, indicating off-axis current drive. The detector is a 9 in. diameter intensifier. A detailed account of the construction of the Hard X-ray Camera, its operation, and its performance is given.

  13. Two Years of Digital Terrain Model Production Using the Lunar Reconnaissance Orbiter Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Burns, K.; Robinson, M. S.; Speyerer, E.; LROC Science Team

    2011-12-01

    One of the primary objectives of the Lunar Reconnaissance Orbiter Camera (LROC) is to gather stereo observations with the Narrow Angle Camera (NAC). These stereo observations are used to generate digital terrain models (DTMs). The NAC has a pixel scale of 0.5 to 2.0 meters but was not designed for stereo observations, and thus requires the spacecraft to roll off-nadir to acquire these images. Slews interfere with the data collection of the other instruments, so opportunities are currently limited to four per day. Arizona State University has produced DTMs from 95 stereo pairs for 11 Constellation Project (CxP) sites (Aristarchus, Copernicus crater, Gruithuisen domes, Hortensius domes, Ina D-caldera, Lichtenberg crater, Mare Ingenii, Marius hills, Reiner Gamma, South Pole-Aitken Rim, Sulpicius Gallus) as well as 30 other regions of scientific interest (including Bhabha crater, the highest and lowest elevation points, Highland Ponds, Kugler Anuchin, Linne Crater, Planck Crater, Slipher crater, Sears Crater, Mandel'shtam Crater, Virtanen Graben, Compton/Belkovich, Rumker Domes, King Crater, the Luna 16/20/23/24 landing sites, the Ranger 6 landing site, Wiener F Crater, Apollo 11/14/15/17, fresh craters, impact melt flows, Larmor Q crater, the Mare Tranquillitatis pit, Hansteen Alpha, Moore F Crater, and Lassell Massif). To generate DTMs, the USGS ISIS software and SOCET SET from BAE Systems are used. To increase the absolute accuracy of the DTMs, data obtained from the Lunar Orbiter Laser Altimeter (LOLA) are used to coregister the NAC images and define the geodetic reference frame. NAC DTMs have been used in the examination of several sites, e.g. Compton-Belkovich, Marius Hills, and Ina D-caldera [1-3]. LROC will continue to acquire high-resolution stereo images throughout the science phase of the mission and any extended mission opportunities, thus providing a vital dataset for scientific research as well as future human and robotic exploration. [1] B.L. Jolliff (2011) Nature Geoscience, in press. [2] Lawrence et al. (2011) LPSC XLII, Abst. 2228. [3] Garry et al. (2011) LPSC XLII, Abst. 2605.

  14. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80K to above 160K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton UK. Firefly represents a fundamental redesign of the infrared signal chain reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose built to minimize power consumption in the proximity electronics, GPU based image processing of infrared video, and a software customisable infrared core which can communicate wirelessly with other Battlespace systems.

  15. A JPEG-like algorithm for compression of single-sensor camera image

    NASA Astrophysics Data System (ADS)

    Benahmed Daho, Omar; Larabi, Mohamed-Chaker; Mukhopadhyay, Jayanta

    2011-01-01

    This paper presents a JPEG-like coder for compression of single-sensor camera images using a Bayer Color Filter Array (CFA). The originality of the method is a joint compression/demosaicking scheme in the DCT domain. In this method, the captured CFA raw data is first separated into four distinct components and then converted to YCbCr. A JPEG compression scheme is then applied. At the decoding level, the bitstream is decompressed until reaching the DCT coefficients, which are then used for the interpolation stage. The obtained results are better than those of conventional JPEG in terms of CPSNR, ΔE2000, and SSIM. The obtained JPEG-like scheme is also less complex.
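    The first stage, separating the CFA raw data into four components, can be sketched as below. An RGGB mosaic layout is assumed here for illustration; the abstract does not fix the specific Bayer pattern:

```python
import numpy as np

def split_bayer_rggb(cfa):
    """Split a Bayer RGGB mosaic into four half-resolution planes, the
    step performed before color conversion and JPEG-style coding."""
    r  = cfa[0::2, 0::2]   # red sites
    g1 = cfa[0::2, 1::2]   # green sites on red rows
    g2 = cfa[1::2, 0::2]   # green sites on blue rows
    b  = cfa[1::2, 1::2]   # blue sites
    return r, g1, g2, b

# Tiny 4x4 mosaic whose values encode their own subsampled plane.
cfa = np.array([
    [1, 2, 1, 2],
    [3, 4, 3, 4],
    [1, 2, 1, 2],
    [3, 4, 3, 4],
])
r, g1, g2, b = split_bayer_rggb(cfa)
```

Each plane is then a dense single-channel image that a conventional DCT pipeline can process; the interpolation back to full resolution happens at the decoder, as the abstract describes.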

  16. Automatic camera-based identification and 3-D reconstruction of electrode positions in electrocardiographic imaging.

    PubMed

    Schulze, Walther H W; Mackens, Patrick; Potyagaylo, Danila; Rhode, Kawal; Tülümen, Erol; Schimpf, Rainer; Papavassiliu, Theano; Borggrefe, Martin; Dössel, Olaf

    2014-12-01

    Electrocardiographic imaging (ECG imaging) is a method to depict electrophysiological processes in the heart. It is an emerging technology with the potential of making the therapy of cardiac arrhythmia less invasive, less expensive, and more precise. A major challenge for integrating the method into clinical workflow is the seamless and correct identification and localization of electrodes on the thorax and their assignment to recorded channels. This work proposes a camera-based system, which can localize all electrode positions at once and to an accuracy of approximately 1 ± 1 mm. A system for automatic identification of individual electrodes is implemented that overcomes the need of manual annotation. For this purpose, a system of markers is suggested, which facilitates a precise localization to subpixel accuracy and robust identification using an error-correcting code. The accuracy of the presented system in identifying and localizing electrodes is validated in a phantom study. Its overall capability is demonstrated in a clinical scenario. PMID:25229412

  17. Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror.

    PubMed

    Li, Weiming; Li, Y F

    2011-03-28

    This paper presents a panoramic stereo imaging system which uses a single camera coaxially combined with a fisheye lens and a convex mirror. It provides the design methodology, trade analysis, and experimental results using commercially available components. The trade study shows the design equations and the various tradeoffs that must be made during design. The system's novelty is that it provides stereo vision over a full 360-degree horizontal field-of-view (FOV). Meanwhile, the entire vertical FOV is enlarged compared to the existing systems. The system is calibrated with a computational model that can accommodate the non-single viewpoint imaging cases to conduct 3D reconstruction in Euclidean space. PMID:21451610

  18. Method for searching the mapping relationship between space points and their image points in CCD camera

    NASA Astrophysics Data System (ADS)

    Sun, Yuchen; Ge, Baozhen; Lu, Qieni; Zou, Jin; Zhang, Yimo

    2005-01-01

    A BP Neural Network Method and a Linear Partition Method are proposed to search for the mapping relationship between space points and their image points in CCD cameras, which can be adopted to calibrate three-dimensional digitization systems based on optical methods. Both methods require only the coordinates of the calibration points and of their corresponding image points as parameters. The principle of the calibration techniques, including the formulas and the solution procedure, is deduced in detail. Calibration experiments indicate that applying the Linear Partition Method to coplanar points yields a mean relative measurement error of 0.44 percent, and applying the BP Neural Network Method to non-coplanar points yields an accuracy of 0.5-0.6 pixels.
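    The core of such a calibration is fitting a mapping from calibration-point coordinates to image-point coordinates. The global least-squares affine fit below is a simplified stand-in for the Linear Partition Method, which fits mappings of roughly this form per partition:

```python
import numpy as np

def fit_linear_mapping(world_pts, image_pts):
    """Least-squares fit of an affine 3D -> 2D mapping from calibration
    points (illustrative simplification of a per-partition linear fit)."""
    A = np.hstack([world_pts, np.ones((len(world_pts), 1))])   # rows [x y z 1]
    coeffs, *_ = np.linalg.lstsq(A, image_pts, rcond=None)     # shape (4, 2)
    return coeffs

def apply_mapping(coeffs, world_pts):
    A = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    return A @ coeffs

# Synthetic calibration: generate points under a known mapping, then fit.
rng = np.random.default_rng(2)
true_coeffs = rng.standard_normal((4, 2))
world = rng.standard_normal((20, 3))
image = np.hstack([world, np.ones((20, 1))]) @ true_coeffs
fit = fit_linear_mapping(world, image)
```

Only the calibration points and their image coordinates enter the fit, matching the abstract's claim that no other parameters are needed.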

  19. The postcollapse core of M15 imaged with the HST planetary camera

    SciTech Connect

    Lauer, T.R.; Holtzman, J.A.; Faber, S.M.; Baum, W.A.; Currie, D.G.; Ewald, S.P.; Groth, E.J.; Hester, J.J.; Kelsall, T.

    1991-03-01

    It is shown here that, despite the severe spherical aberration present in the HST, the Wide Field/Planetary Camera (WFPC) images still present useful high-resolution information on M15, the classic candidate for a cluster with a collapsed core. The stars in M15 have been resolved down to the main-sequence turnoff and have been subtracted from the images. The remaining faint, unresolved stars form a diffuse background with a surprisingly large core with r(c) = 0.13 pc. The existence of a large core interior to the power-law cusp may imply that M15 has evolved well past maximum core collapse and may rule out the presence of a massive central black hole as well. 26 refs.

  20. Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera

    DOEpatents

    Majewski, Stanislaw (Morgantown, VA); Umeno, Marc M. (Woodinville, WA)

    2011-09-13

    A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees, operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is applied pixel by corresponding pixel to increase the signal for any cardiac region viewed in the two images obtained from the opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies; blood-pool studies including planar, gated, and non-gated EKG studies; planar EKG perfusion studies; and planar hot-spot imaging.
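    The pixel-by-corresponding-pixel fusion could take the following form. Sensitivity normalization followed by summation is one plausible rule chosen here for illustration; the patent text quoted above does not spell out the exact formula:

```python
import numpy as np

def fuse_opposed_views(head_a, head_b, sens_a, sens_b):
    """Fuse two co-registered opposed detector views pixel by
    corresponding pixel: correct each head by its per-pixel sensitivity
    map, then sum the corrected counts to increase signal."""
    return head_a / sens_a + head_b / sens_b

# Uniform activity seen through two different per-pixel sensitivity maps.
activity = np.full((4, 4), 10.0)
sens_a = np.full((4, 4), 2.0)
sens_b = np.full((4, 4), 0.5)
fused = fuse_opposed_views(activity * sens_a, activity * sens_b,
                           sens_a, sens_b)
```

After normalization the two views contribute equally, so the fused image doubles the signal for a region visible to both heads in the same time bin.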

  1. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.
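    A supersky flat of the kind described is typically built by per-pixel combination of many nearly object-free exposures; the median combine below is a hedged sketch of that idea, not the survey's exact pipeline:

```python
import numpy as np

def supersky_flat(exposures):
    """Per-pixel median of many relatively object-free exposures: the
    median rejects the occasional faint source, leaving the sensitivity
    pattern, which is then normalized to unit mean."""
    flat = np.median(np.stack(exposures), axis=0)
    return flat / flat.mean()

# Five exposures of blank sky through a fixed sensitivity pattern; one
# exposure contains a bright "object" pixel that the median rejects.
sens = np.array([[1.0, 1.0], [1.0, 3.0]])
exposures = [100.0 * sens for _ in range(5)]
exposures[0][0, 0] += 500.0
flat = supersky_flat(exposures)
```

Because the contaminating source appears in only a minority of the stacked frames, it never reaches the per-pixel median, which is why combining many random fields yields a clean flat.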

  2. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.

  3. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, constituted of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matching of interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, which is well suited to this kind of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
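    The triangulation step that recovers 3D points from two projection matrices and matched image points is standard two-view geometry; a linear (DLT) sketch with illustrative camera matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point whose projections
    through camera matrices P1 and P2 are the matched points x1 and x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two illustrative cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

In the incremental pipeline this same operation runs on every newly matched point pair, with the local bundle adjustment then refining the linear estimates.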

  4. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed fully automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged subjects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  5. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed fully automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums. PMID:23112656

  6. Light field sensor and real-time panorama imaging multi-camera system and the design of data acquisition

    NASA Astrophysics Data System (ADS)

    Lu, Yu; Tao, Jiayuan; Wang, Keyi

    2014-09-01

    Advanced image sensors and powerful parallel data-acquisition chips can be used to collect more detailed and comprehensive light field information. By recording light field data with multiple single-aperture, high-resolution sensors and processing the data in real time, we can obtain wide field-of-view (FOV), high-resolution images. Wide-FOV, high-resolution imaging has promising applications in navigation, surveillance and robotics. Quality-enhanced 3D rendering, very high resolution depth-map estimation, high dynamic range and other applications can be obtained by post-processing these large light field data. FOV and resolution are contradictory requirements in a traditional single-aperture optical imaging system and cannot be reconciled well there. We have designed a multi-camera light field data-acquisition system and optimized each sensor's spatial location and relations; it can be used for wide-FOV, high-resolution real-time imaging. The system uses 5-megapixel CMOS sensors and a field-programmable gate array (FPGA) to acquire the light field data, process it in parallel and transmit it to a PC. A common clock signal is distributed to all of the cameras, and a synchronization precision of 40 ns between cameras is achieved. An initial system built with 9 CMOS sensors obtained a high-resolution 360°×60° FOV image. The system is intended to be flexible, modular and scalable, with much visibility and control over the cameras. The high-speed dedicated camera interface Camera Link is used for system data transfer. The details of the hardware architecture, its internal blocks, the algorithms, and the device calibration procedure are presented, along with imaging results.

  7. Accidental Pinhole and Pinspeck Cameras

    E-print Network

    Torralba, Antonio

    We identify and study two types of “accidental” images that can be formed in scenes. The first is an accidental pinhole camera image. The second class of accidental images are “inverse” pinhole camera images, formed by ...

  8. Noise-limited resolution of the advanced helicopter pilotage helmet-mounted display and image-intensified camera

    NASA Astrophysics Data System (ADS)

    Perconti, Philip

    1997-06-01

    The Advanced Helicopter Pilotage (AHP) program is developing a wide field-of-view, night and adverse-weather vision system for helicopter navigation and obstacle avoidance. The AHP hardware consists of a second-generation thermal sensor, a high-definition image-intensified camera, and a helmet-mounted display (HMD). The specifications for next-generation night vision sensors require through-the-system performance measures. As such, particular importance is given to the quantification of limiting resolution at operationally relevant illumination levels rather than measures performed under typical laboratory illumination. In this paper, descriptions of the AHP HMD and image-intensified camera are given, and the measured modulation transfer function (MTF) of the HMD is reported. Also covered are the results of noise-limited resolution testing of the AHP HMD and image-intensified camera. A comparison of MTF and noise-limited resolution measures, made under the appropriate illumination using a Dage HR-2000, is presented.

  9. Imaging early demineralization on tooth occlusional surfaces with a high definition InGaAs camera

    NASA Astrophysics Data System (ADS)

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

    In vivo and in vitro studies have shown that high-contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with that in the visible, in the near-IR at 1300-nm, and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1- and 2-day exposure to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at three wavelengths, 1310, 1460 and 1600-nm, using a high-definition InGaAs camera. Visible-light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm, were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions.

  10. Lymphoscintigraphic imaging study for quantitative evaluation of a small field of view (SFOV) gamma camera

    NASA Astrophysics Data System (ADS)

    Alqahtani, M. S.; Lees, J. E.; Bugby, S. L.; Jambi, L. K.; Perkins, A. C.

    2015-07-01

    The Hybrid Compact Gamma Camera (HCGC) is a portable optical-gamma hybrid imager designed for intraoperative medical imaging, particularly for sentinel lymph node biopsy procedures. To investigate the capability of the HCGC in lymphatic system imaging, two lymphoscintigraphic phantoms have been designed and constructed. These phantoms allowed quantitative assessment and evaluation of the HCGC for lymphatic vessel (LV) and sentinel lymph node (SLN) detection. Fused optical and gamma images showed good alignment of the two modalities, allowing localisation of activity within the LV and the SLN. At an imaging distance of 10 cm, the spatial resolution of the HCGC during detection of the simulated LV was not degraded at separations of more than 1.5 cm (variation <5%) from the injection site (IS). Even in the presence of the IS, the targeted LV was detectable with an acquisition time of less than 2 minutes. The HCGC could detect SLNs containing different radioactivity concentrations (SLN-to-IS activity ratios ranging between 1:20 and 1:100) under various scattering thicknesses (ranging between 5 mm and 30 mm) with high contrast-to-noise ratio (CNR) values (ranging between 11.6 and 110.8). The HCGC can detect the simulated SLNs at various IS-to-SLN distances, different IS-to-SLN activity ratios and through varied scattering-medium thicknesses. The HCGC provided an accurate physical localisation of radiopharmaceutical uptake in the simulated SLN. These characteristics of the HCGC reflect its suitability for lymphatic vessel drainage imaging and SLN imaging in patients in critical clinical situations such as interventional and surgical procedures.
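    As a rough illustration of the contrast-to-noise figures quoted above, a CNR can be computed from region-of-interest statistics. The formula below, (mean ROI − mean background) / background standard deviation, is one common definition; the paper's exact definition is not given in the abstract, and the count values here are synthetic.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: ROI/background mean difference
    normalized by the background standard deviation (one common
    definition; the paper's exact formula may differ)."""
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=10_000)   # counts in a background ROI
node = rng.normal(160.0, 5.0, size=400)            # hotter sentinel-node ROI
value = cnr(node, background)                      # roughly (160-100)/5 = 12
```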

  11. Imaging Early Demineralization on Tooth Occlusal Surfaces with a High Definition InGaAs Camera

    PubMed Central

    Fried, William A.; Fried, Daniel; Chan, Kenneth H.; Darling, Cynthia L.

    2013-01-01

    In vivo and in vitro studies have shown that high-contrast images of tooth demineralization can be acquired in the near-IR due to the high transparency of dental enamel. The purpose of this study is to compare the lesion contrast in reflectance at near-IR wavelengths coincident with high water absorption with that in the visible, in the near-IR at 1300-nm, and with fluorescence measurements for early lesions in occlusal surfaces. Twenty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window in the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the exposed windows after 1- and 2-day exposure to a demineralizing solution at pH 4.5. Lesions were imaged using NIR reflectance at three wavelengths, 1310, 1460 and 1600-nm, using a high-definition InGaAs camera. Visible-light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm, were also used to acquire images for comparison. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. The contrast of both the 24 hr and 48 hr lesions was significantly higher (P<0.05) for NIR reflectance imaging at 1460-nm and 1600-nm than for NIR reflectance imaging at 1300-nm, visible reflectance imaging, and fluorescence. The results of this study suggest that NIR reflectance measurements at longer near-IR wavelengths coincident with higher water absorption are better suited for imaging early caries lesions. PMID:24357911

  12. The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.

    2014-04-01

    The HRSC Experiment: Imagery is the major source of our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; it also significantly supports (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.

  13. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round-looking camera, being well suited to automatic analysis and judgment of the carrier's ambient environment by image-recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image-processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of entering offsets manually both rely on human eyes, are inefficient, and exhibit a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The camera produces a ring-shaped image bounded by two concentric circles: a smaller circle at the center of the image and a bigger circle outside. The technique exploits exactly this characteristic: recognizing the two circles with the Hough transform and calculating the center position yields the accurate image center, i.e., the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane is adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
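    The circle-detection idea above can be sketched with a minimal Hough-style voting scheme: every edge point votes for all possible centres of a circle of known radius passing through it, and the accumulator peak is the centre shared by all points. This is a simplified, fixed-radius stand-in for the full Hough transform used in the paper; the ring coordinates below are synthetic.

```python
import numpy as np

def hough_circle_center(points, radius, shape, n_angles=360):
    """Vote for circle centres: each edge point casts votes on a
    circle of the given radius around itself; the accumulator peak
    is the centre shared by all edge points."""
    acc = np.zeros(shape, dtype=np.int32)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in points:
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)       # accumulate votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic ring: edge points of a circle centred at (120, 90), radius 40.
theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
pts = np.column_stack([120 + 40 * np.sin(theta), 90 + 40 * np.cos(theta)])
center = hough_circle_center(pts, radius=40, shape=(240, 240))
```

    In the real system the inner and outer ring boundaries would both be detected, and the resulting offset written to the sensor's windowing registers over I2C.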

  14. Shot-by-shot imaging of Hong-Ou-Mandel interference with an intensified sCMOS camera

    E-print Network

    Michał Jachura; Radosław Chrapkiewicz

    2015-02-27

    We report the first observation of Hong-Ou-Mandel (HOM) interference of highly indistinguishable photon pairs with spatial resolution. Direct imaging of two-photon coalescence with an intensified sCMOS camera system clearly reveals spatially separated photons appearing pairwise within one of the two modes. With the use of the camera system we quantified the number of pairs and recovered the full HOM dip yielding 96.3% interference visibility, as well as retrieved the number of coalesced pairs. We retrieved the spatial mode structure of both interfering photons by performing a proof-of-principle demonstration of a new, low noise high resolution coincidence imaging scheme.

  15. Shot-by-shot imaging of Hong-Ou-Mandel interference with an intensified sCMOS camera

    NASA Astrophysics Data System (ADS)

    Jachura, Michał; Chrapkiewicz, Radosław

    2015-04-01

    We report the first observation of Hong-Ou-Mandel (HOM) interference of highly indistinguishable photon pairs with spatial resolution. Direct imaging of two-photon coalescence with an intensified sCMOS camera system clearly reveals spatially separated photons appearing pairwise within one of the two modes. With the use of the camera system we quantified the number of pairs and recovered the full HOM dip yielding 96.3% interference visibility, as well as retrieved the number of coalesced pairs. We retrieved the spatial mode structure of both interfering photons by performing a proof-of-principle demonstration of a new, low noise high resolution coincidence imaging scheme.

  16. Space-bandwidth extension in parallel phase-shifting digital holography using a four-channel polarization-imaging camera.

    PubMed

    Tahara, Tatsuki; Ito, Yasunori; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2013-07-15

    We propose a method for extending the space bandwidth (SBW) available for recording an object wave in parallel phase-shifting digital holography using a four-channel polarization-imaging camera. A linear spatial carrier of the reference wave is introduced to an optical setup of parallel four-step phase-shifting interferometry using a commercially available polarization-imaging camera that has four polarization-detection channels. Then a hologram required for parallel two-step phase shifting, which is a technique capable of recording the widest SBW in parallel phase shifting, can be obtained. The effectiveness of the proposed method was numerically and experimentally verified. PMID:23939081
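    For context, the four-step phase-shifting recovery that parallel phase-shifting holography parallelizes across pixels can be written in a few lines. This is the textbook formula, not the paper's spatial-carrier, polarization-multiplexed variant; the interferograms below are synthesized.

```python
import numpy as np

# Generic four-step phase-shifting recovery (textbook version).
x = np.linspace(0.0, 1.0, 256)
phi = 2 * np.pi * x**2                       # "object" phase to recover
a, b = 1.0, 0.5                              # background and modulation depth
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [a + b * np.cos(phi + d) for d in shifts]  # four phase-shifted frames

# I4 - I2 = 2b*sin(phi), I1 - I3 = 2b*cos(phi), so:
phi_rec = np.arctan2(I[3] - I[1], I[0] - I[2])   # phase modulo 2*pi
phi_true_wrapped = np.angle(np.exp(1j * phi))    # wrap ground truth the same way
```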

  17. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full-well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full-well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a stray-light feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood, it can be greatly reduced during ground processing. All three cameras were calibrated in the laboratory under ambient conditions. Future thermal vacuum tests will characterize critical behaviors across the full range of lunar operating temperatures. In-flight tests will check for changes in response after launch and provide key data for meeting the requirements of 1% relative and 10% absolute radiometric calibration.
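    As a sanity check on the NAC figures above, the 50 cm/pixel scale follows from simple pinhole geometry, GSD = h·p/f. The ~7 µm pixel pitch and the 5000-pixel line length used below are assumptions for illustration; the abstract quotes only the focal length, orbit altitude, pixel scale and swath width.

```python
# Ground sample distance from pinhole-camera geometry: GSD = h * p / f.
focal_length = 0.6996        # NAC-L focal length [m] (from the abstract)
altitude = 50_000.0          # mapping-orbit altitude [m] (from the abstract)
pixel_pitch = 7.0e-6         # assumed detector pixel pitch [m] (not stated)

gsd = altitude * pixel_pitch / focal_length   # metres on the ground per pixel
swath_m = gsd * 5000                          # assumed 5000-pixel line -> swath [m]
```

    With these assumptions the numbers land on the quoted figures: about 0.5 m/pixel and a single-NAC swath of about 2.5 km.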

  18. Laser Doppler field sensor for high resolution flow velocity imaging without camera

    SciTech Connect

    Voigt, Andreas; Bayer, Christian; Shirai, Katsuaki; Buettner, Lars; Czarske, Juergen

    2008-09-20

    In this paper we present a laser sensor for highly spatially resolved flow imaging without using a camera. The sensor is an extension of the principle of laser Doppler anemometry (LDA). Instead of a parallel fringe system, diverging and converging fringes are employed. This method facilitates the determination of the tracer particle position within the measurement volume and leads to an increased spatial and velocity resolution compared to conventional LDA. Using a total number of four fringe systems, the flow is resolved in two spatial dimensions and the orthogonal velocity component. Since no camera is used, the resolution of the sensor is not influenced by pixel-size effects. A spatial resolution of 4 µm in the x direction and 16 µm in the y direction and a relative velocity resolution of 1×10⁻³ have been demonstrated up to now. As a first application we present the velocity measurement of an injection nozzle flow. The sensor is also highly suitable for applications in nano- and microfluidics, e.g., for the measurement of flow rates.

  19. ANTS — a simulation package for secondary scintillation Anger-camera type detector in thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Van Esch, P.; Zeitelhack, K.

    2012-08-01

    A custom and fully interactive simulation package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations) has been developed to optimize the design and operation conditions of secondary scintillation Anger-camera type gaseous detectors for thermal neutron imaging. The simulation code accounts for all physical processes related to the neutron capture, energy deposition pattern, drift of electrons of the primary ionization and secondary scintillation. The photons are traced considering the wavelength-resolved refraction and transmission of the output window. Photo-detection accounts for the wavelength-resolved quantum efficiency, angular response, area sensitivity, gain and single-photoelectron spectra of the photomultipliers (PMTs). The package allows for several geometrical shapes of the PMT photocathode (round, hexagonal and square) and offers a flexible PMT array configuration: up to 100 PMTs in a custom arrangement with the square or hexagonal packing. Several read-out patterns of the PMT array are implemented. Reconstruction of the neutron capture position (projection on the plane of the light emission) is performed using the center of gravity, maximum likelihood or weighted least squares algorithm. Simulation results reproduce well the preliminary results obtained with a small-scale detector prototype. ANTS executables can be downloaded from http://coimbra.lip.pt/~andrei/.
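    Of the three position-reconstruction algorithms mentioned above, the centre-of-gravity method is the simplest. A minimal sketch with a hypothetical 3×3 PMT array follows; the array geometry and signal values are invented for illustration.

```python
import numpy as np

def centroid_position(pmt_xy, signals):
    """Centre-of-gravity estimate of the light-emission position from
    per-PMT integrated signals (the simplest of the three algorithms;
    maximum likelihood and weighted least squares perform better)."""
    w = np.asarray(signals, dtype=float)
    return (w[:, None] * pmt_xy).sum(axis=0) / w.sum()

# Hypothetical 3x3 PMT array on a 30 mm pitch, flash over the middle tube.
xs, ys = np.meshgrid([-30.0, 0.0, 30.0], [-30.0, 0.0, 30.0])
pmt_xy = np.column_stack([xs.ravel(), ys.ravel()])
signals = [5, 20, 5, 20, 100, 20, 5, 20, 5]      # symmetric light sharing
pos = centroid_position(pmt_xy, signals)          # estimated (x, y) in mm
```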

  20. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function of ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models used to predict night-sky brightness require information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two professional digital single-lens reflex cameras operating with different lenses that limit the system's field-of-view to either 180° or 167°. The purpose of using two cameras was to identify variances between the two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance to horizontal irradiance. Subsequently, the information on the fraction of the light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear-sky radiance distribution is dominated by a single largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to the potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  1. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    PubMed Central

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible-light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images, and it is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering; additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area; this procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective projection. Experimental results with two types of databases confirm that the proposed method outperforms other methods. PMID:25808774
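    The first detection step (a background image from temporal median filtering, then thresholded difference images) can be sketched as follows. This is a heavily simplified stand-in for the paper's adaptive, multi-stage pipeline, using synthetic thermal frames and a fixed rather than adaptive threshold.

```python
import numpy as np

# Pixel-wise median over a frame stack approximates a person-free
# background; thresholding the difference flags candidate human pixels.
rng = np.random.default_rng(1)
frames = rng.normal(30.0, 1.0, size=(21, 40, 40))   # noisy thermal background
frames[15, 10:20, 5:10] += 25.0                      # warm "person" in one frame

background = np.median(frames, axis=0)               # person vanishes in the median
diff = np.abs(frames[15] - background)               # pixel difference image
mask = diff > 10.0                                   # fixed threshold (adaptive in the paper)
```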

  2. Mars Orbiter Camera Acquires High Resolution Stereoscopic Images of the Viking One Landing Site

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Two MOC images of the vicinity of the Viking Lander 1 (MOC 23503 and 25403), acquired separately on 12 April 1998 at 08:32 PDT and 21 April 1998 at 13:54 PDT (respectively), are combined here in a stereoscopic anaglyph. The more recent, slightly better quality image is in the red channel, while the earlier image is shown in the blue and green channels. Only the overlap portion of the images is included in the composite.
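    The channel assignment described above (the more recent image in red, the earlier image in both green and blue) is straightforward to reproduce; the arrays below are synthetic stand-ins for the co-registered overlap crops.

```python
import numpy as np

def anaglyph(recent, earlier):
    """Red/cyan anaglyph: the more recent frame goes to the red
    channel, the earlier frame to both green and blue, as described
    for the MOC 25403/23503 composite."""
    return np.stack([recent, earlier, earlier], axis=-1)

# Two co-registered 8-bit grayscale frames (synthetic stand-ins).
rng = np.random.default_rng(2)
img_recent = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img_earlier = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
rgb = anaglyph(img_recent, img_earlier)              # (64, 64, 3) RGB composite
```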

    Image 23503 was taken at a viewing angle of 31.6° from vertical; 25403 was taken at an angle of 22.4°, for a difference of 9.4°. Although this is not as large a difference as is typically used in stereo mapping, it is sufficient to provide some indication of relief, at least in locations of high relief.

    The image shows the raised rims and deep interiors of the larger impact craters in the area (the largest crater is about 650 m/2100 feet across). It shows that the relief on the ridges is very subtle and that, in general, the Viking landing site is very flat. This result is, of course, expected: the VL-1 site was chosen specifically because it was likely to have only low to very low slopes, since steeper slopes represented potential hazards to the spacecraft.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  3. Multi-temporal database of High Resolution Stereo Camera (HRSC) images - Alpha version

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2014-04-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), showed that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass-wasting processes [11], changes of the polar caps [12] and impact cratering processes [13]. In addition, comparisons between Mariner, Viking and Mars Global Surveyor images suggest that more than one third of the Martian surface has brightened or darkened by at least 10% [6]. Albedo changes can affect the global heat balance and the circulation of winds, which can result in further surface changes [14-15]. The High Resolution Stereo Camera (HRSC) [16,17] on board Mars Express (MEx) covers large areas at high resolution and is therefore suited to detect the frequency, extent and origin of Martian surface changes. Since 2003, HRSC has acquired high-resolution images of the Martian surface and contributed to Martian research, with focus on the surface morphology, the geology and mineralogy, the role of liquid water on the surface and in the atmosphere, volcanism, and the proposed climate change throughout Martian history, and it has improved our understanding of the evolution of Mars significantly [18-21]. The HRSC data are available at ESA's Planetary Science Archive (PSA) as well as through the NASA Planetary Data System (PDS). Both data platforms are frequently used by the scientific community and provide additional software and environments to further generate map-projected and geometrically calibrated HRSC data. However, while previews of the images are available, there is no possibility to quickly and conveniently see the spatial and temporal availability of HRSC images in a specific region, which is important for detecting the surface changes that occurred between two or more images.

  4. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 µm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV), and its focal distance can be adjusted to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s, an injected activity of 26 MBq 99mTc, and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, under clinical operating conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter distances from the patient's skin.
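As a consistency check on the quoted figures, the standard pinhole-collimator relations reproduce them closely; a minimal sketch (textbook formulas assumed by me, not taken from the paper):

```python
def minification(fov_mm, detector_mm=14.08):
    """Object-to-detector magnification M; the 14.08 mm sensor viewing a
    70 mm field of view gives M ~ 0.2, i.e. the quoted 1:5 minification."""
    return detector_mm / fov_mm

def pinhole_resolution_mm(d_eff_mm, M):
    """Geometric pinhole resolution (FWHM) referred to the object plane:
    R = d_eff * (1 + 1/M)."""
    return d_eff_mm * (1.0 + 1.0 / M)

M = minification(70.0)                      # ~0.20
r_small = pinhole_resolution_mm(0.94, M)    # ~5.6 mm (paper: 5.5 mm system FWHM)
r_large = pinhole_resolution_mm(2.1, M)     # ~12.5 mm (paper: 12.6 mm)
print(round(M, 3), round(r_small, 1), round(r_large, 1))
```

The agreement suggests the reported system resolution is dominated by the pinhole geometry, as the conclusions state.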

  5. Results of shuttle EMU thermal vacuum tests incorporating an infrared imaging camera data acquisition system

    NASA Technical Reports Server (NTRS)

    Anderson, James E.; Tepper, Edward H.; Trevino, Louis A.

    1991-01-01

    Manned tests in Chamber B at NASA JSC were conducted in May and June of 1990 to better quantify the Space Shuttle Extravehicular Mobility Unit's (EMU) thermal performance in the cold environmental extremes of space. Use of an infrared imaging camera with real-time video monitoring of the output significantly added to the scope, quality and interpretation of the test conduct and data acquisition. Results of this test program have been effective in the thermal certification of a new insulation configuration and the '5000 Series' glove. In addition, the acceptable thermal performance of flight garments with visually deteriorated insulation was successfully demonstrated, thereby saving significant inspection and garment replacement cost. This test program also established a new method for collecting data vital to improving crew thermal comfort in a cold environment.

  6. Spatial frequency-domain multiplexed microscopy for simultaneous, single-camera, one-shot, fluorescent, and quantitative-phase imaging.

    PubMed

    Chowdhury, Shwetadwip; Eldridge, Will J; Wax, Adam; Izatt, Joseph A

    2015-11-01

    Multimodal imaging is a crucial tool when imaging biological phenomena that cannot be comprehensively captured by a single modality. Here, we introduce a theoretical framework for spatial-frequency-multiplexed microscopy via off-axis interference as a novel wide-field imaging technique that enables true simultaneous multimodal and multichannel wide-field imaging. We experimentally demonstrate this technique for single-camera, simultaneous two-channel fluorescence and one-channel quantitative-phase imaging for fluorescent microspheres and fixed cells stained for F-actin and nuclear fluorescence. PMID:26512463

  7. In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2010-02-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
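The contrast metric is not defined in the record; a plausible sketch, assuming contrast is the fractional darkening of the lesion ROI relative to sound enamel (lesions scatter more strongly and appear darker in transillumination):

```python
import numpy as np

def lesion_contrast(img, sound_mask, lesion_mask):
    """Contrast between sound and carious regions from ROI means.
    Assumes the lesion ROI is darker; returns a value in [0, 1] when it is."""
    i_sound = img[sound_mask].mean()
    i_lesion = img[lesion_mask].mean()
    return (i_sound - i_lesion) / i_sound

# Toy NIR frame: bright sound enamel with a darker occlusal lesion.
img = np.full((64, 64), 200.0)
img[20:30, 20:30] = 120.0
lesion = np.zeros_like(img, dtype=bool)
lesion[20:30, 20:30] = True
sound = ~lesion
print(lesion_contrast(img, sound, lesion))  # 0.4
```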

  8. Measurement of effective temperature range of fire service thermal imaging cameras

    NASA Astrophysics Data System (ADS)

    Amon, Francine; Bryner, Nelson

    2008-04-01

    The use of thermal imaging cameras (TIC) by the fire service is increasing as fire fighters become more aware of the value of these tools. The National Fire Protection Association (NFPA) is currently developing a consensus standard for design and performance requirements of TIC as used by the fire service. The National Institute of Standards and Technology facilitates this process by providing recommendations for science-based performance metrics and test methods to the NFPA technical committee charged with the development of this standard. A suite of imaging performance metrics and test methods, based on the harsh operating environment and limitations of use particular to the fire service, has been proposed for inclusion in the standard. The Effective Temperature Range (ETR) measures the range of temperatures that a TIC can view while still providing useful information to the user. Specifically, extreme heat in the field of view tends to inhibit a TIC's ability to discern surfaces having intermediate temperatures, such as victims and fire fighters. The ETR measures the contrast of a target having alternating 25 °C and 30 °C bars while an increasing temperature range is imposed on other surfaces in the field of view. The ETR also indicates the thermal conditions that trigger a shift in integration time common to TIC employing microbolometer sensors. The reported values for this imaging performance metric are the hot surface temperature range within which the TIC provides adequate bar contrast, and the hot surface temperature at which the TIC shifts integration time.
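A toy sketch of the ETR idea (the idealized 8-bit TIC with a linear, clipped display window is entirely my assumption; real TIC response and AGC behavior differ):

```python
def tic_response(temp_c, t_min, t_max):
    """Idealized 8-bit TIC output: scene temperature mapped linearly onto
    the display window [t_min, t_max], clipped at the ends."""
    x = (temp_c - t_min) / (t_max - t_min)
    return 255.0 * min(max(x, 0.0), 1.0)

def bar_separation_dn(hot_surface_c):
    """Gray-level separation of the 25/30 deg C bars once the display
    window stretches to cover the hottest surface in the field of view."""
    lo = int(tic_response(25.0, 20.0, hot_surface_c))
    hi = int(tic_response(30.0, 20.0, hot_surface_c))
    return hi - lo

def effective_temp_limit(min_dn=3):
    """Hottest in-scene temperature at which the bars still differ by at
    least min_dn display levels (the ETR upper bound in this toy model)."""
    t = 31.0
    while bar_separation_dn(t) >= min_dn:
        t += 1.0
    return t - 1.0

for hot in (40.0, 100.0, 300.0):
    print(bar_separation_dn(hot))  # 64, 16, 5 — bar contrast collapses as the scene heats up
```

This mimics the qualitative effect the metric is built to catch: extreme heat in the field of view squeezes intermediate-temperature targets into ever fewer display levels.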

  9. Dual-modality imaging in vivo with an NIR and gamma emitter using an intensified CCD camera and a conventional gamma camera

    NASA Astrophysics Data System (ADS)

    Houston, Jessica P.; Ke, Shi; Wang, Wei; Li, Chun; Sevick-Muraca, Eva M.

    2005-04-01

    Fluorescence-enhanced optical imaging measurements and conventional gamma camera images of human M21 melanoma xenografts were acquired for a "dual-modality" molecular imaging study. The αvβ3 integrin cell surface receptors were imaged using a cyclic pentapeptide probe, cyclo(Lys-Arg-Gly-Asp-Phe) [c(KRGDf)], which is known to target the membrane receptor. The probe, dual-labeled with a radiotracer, 111Indium, for gamma scintigraphy as well as with a near-infrared dye, IRDye800, was injected into six nude mice at a dose equivalent to 90 mCi of 111In and 5 nanomoles of near-infrared (NIR) dye. A 15 min gamma scan and an 800 millisecond NIR-sensitive ICCD optical photograph were collected 24 hours after injection of the dual-labeled probe. The image quality of the nuclear and optical data was compared, with the results showing similar target-to-background ratios (TBR) based on the origin of fluorescence and gamma emissions at the targeted tumor site. Furthermore, an analysis of SNR versus contrast showed greater sensitivity of optical over nuclear imaging for the subcutaneous tumor targets measured by surface regions of interest.
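A minimal sketch of the TBR and SNR figures of merit described above (ROI arithmetic only; the formulas are common conventions, not necessarily the authors' exact definitions):

```python
import numpy as np

def tbr(img, target_mask, background_mask):
    """Target-to-background ratio from two regions of interest; applies
    identically to a gamma camera frame or an ICCD fluorescence frame."""
    return img[target_mask].mean() / img[background_mask].mean()

def contrast_snr(img, target_mask, background_mask):
    """Contrast-to-noise style SNR: mean signal excess over background,
    normalized by the background fluctuation."""
    b = img[background_mask]
    return (img[target_mask].mean() - b.mean()) / b.std()

rng = np.random.default_rng(1)
img = rng.poisson(100.0, (64, 64)).astype(float)  # background counts
tumor = np.zeros_like(img, dtype=bool)
tumor[28:36, 28:36] = True
img[tumor] += 150.0                               # added tumor uptake
print(round(tbr(img, tumor, ~tumor), 2))          # ≈ 2.5
```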

  10. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

    The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. To achieve high precision throughout the working range, a two-step method is employed to map the near-infrared intensity image to absolute depth in real time. First, structured light produced by an 808 nm laser diode and a Dammann grating coarsely quantizes the output space of depth values into discrete bins. Then a learning-based classification forest algorithm predicts the depth distribution over these bins for each pixel in the image. Quantitative experimental results show that this depth camera achieves 1% precision over the 20-60 cm range, indicating that it suits resource-limited, low-cost applications.
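The two-step scheme can be sketched as coarse bins plus a soft decode; the bin count, range, and decode rule below are illustrative assumptions, with the classification forest's per-pixel output replaced by a hand-written distribution:

```python
import numpy as np

def depth_bin_edges(near_mm=200.0, far_mm=600.0, n_bins=32):
    """Quantize the 20-60 cm working range into coarse depth bins
    (the bin count is an arbitrary choice for illustration)."""
    return np.linspace(near_mm, far_mm, n_bins + 1)

def expected_depth_mm(bin_probs, edges):
    """Soft decode: expected depth from a predicted per-bin distribution
    (standing in for what the classification forest would emit per pixel)."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = bin_probs / bin_probs.sum(axis=-1, keepdims=True)
    return (p * centers).sum(axis=-1)

edges = depth_bin_edges()
probs = np.zeros(32)
probs[10], probs[11] = 0.7, 0.3       # forest leans on two adjacent bins
print(expected_depth_mm(probs, edges))  # ≈ 335 mm, between the two bin centers
```

Decoding an expectation over bins, rather than picking the argmax bin, is one standard way to get sub-bin depth precision from a coarse quantization.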

  11. Tower Camera Handbook

    SciTech Connect

    Moudry, D

    2005-01-01

    The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for comparison with the albedo that can be calculated from downward-looking radiometers, and to give some indication of present weather. Similarly, during springtime, the camera images show the changes in ground albedo as the snow melts. The tower images are saved at hourly intervals. In addition, two other cameras, the skydeck camera in Barrow and the piling camera in Atqasuk, show the current conditions at those sites.

  12. Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    E-print Network

    Abidi, Mongi A.

  13. IMAGING CONCERT HALL ACOUSTICS USING VISUAL AND AUDIO CAMERAS Adam O'Donovan, Ramani Duraiswami and Dmitry Zotkin

    E-print Network

    Zotkin, Dmitry N.

    Combined visual and audio camera arrays are used to image concert hall acoustics for acoustical scene analysis, with measurements and simulation assessing whether the room acoustics help, rather than hinder, the perception of the performance.

  14. 2950 IEEE TRANSACTIONS ON PLASMA SCIENCE, VOL. 39, NO. 11, NOVEMBER 2011 Fast Camera Imaging of Hall Thruster Ignition

    E-print Network

    Fast camera imaging of Hall thruster ignition shows that the cathode introduces an azimuthal asymmetry, which persists for about 30 µs into the ignition. Plasma thrusters are used on satellites for repositioning and orbit correction. Index terms: plasma devices, plasma diagnostics.

  15. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters.

    PubMed

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
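The registration pipeline named above (feature matches → homography → RANSAC) can be sketched without any imaging library; the DLT fit and RANSAC loop below are generic stand-ins operating on synthetic correspondences rather than real SIFT matches:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: least-squares homography H with dst ~ H @ src
    (src, dst are Nx2 arrays of matched points, N >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def warp(h, pts):
    """Apply a homography to Nx2 points."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ h.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh_px=2.5, seed=0):
    """Fit H from 4 random pairs per iteration, keep the largest inlier
    set, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        pick = rng.choice(len(src), 4, replace=False)
        h = dlt_homography(src[pick], dst[pick])
        inliers = np.linalg.norm(warp(h, src) - dst, axis=1) < thresh_px
        if inliers.sum() > best.sum():
            best = inliers
    return dlt_homography(src[best], dst[best]), best

# Synthetic check: a known homography, 50 matches, the first 8 corrupted.
h_true = np.array([[1.02, 0.01, 5.0], [-0.01, 0.98, -3.0], [1e-5, 2e-5, 1.0]])
pts_rng = np.random.default_rng(42)
src = pts_rng.uniform(0.0, 400.0, (50, 2))
dst = warp(h_true, src)
dst[:8] += 60.0                                   # simulated bad matches
h_est, inliers = ransac_homography(src, dst)
print(inliers.sum())                              # the 42 clean matches survive
```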

  17. Real-time imaging using a 2.8 THz quantum cascade laser and uncooled infrared microbolometer camera.

    PubMed

    Behnken, Barry N; Karunasiri, Gamani; Chamberlin, Danielle R; Robrish, Peter R; Faist, Jérôme

    2008-03-01

    Real-time imaging in the terahertz (THz) spectral range was achieved using a milliwatt-scale, 2.8 THz quantum cascade laser and an uncooled, 160 x 120 pixel microbolometer camera modified with Picarin optics. Noise equivalent temperature difference of the camera in the 1-5 THz frequency range was estimated to be at least 3 K, confirming the need for external THz illumination when imaging in this frequency regime. Despite the appearance of fringe patterns produced by multiple diffraction effects, single-frame and extended video imaging of obscured objects show high-contrast differentiation between metallic and plastic materials, supporting the viability of this imaging approach for use in future security screening applications. PMID:18311285

  18. Rethinking color cameras

    E-print Network

    Chakrabarti, Ayan

    Digital color cameras make sub-sampled measurements of color at alternating pixel locations, and then “demosaick” these measurements to create full color images by up-sampling. This allows traditional cameras with restricted ...

  19. Development of Electron Tracking Compton Camera using micro pixel gas chamber for medical imaging

    NASA Astrophysics Data System (ADS)

    Kabuki, Shigeto; Hattori, Kaori; Kohara, Ryota; Kunieda, Etsuo; Kubo, Atsushi; Kubo, Hidetoshi; Miuchi, Kentaro; Nakahara, Tadaki; Nagayoshi, Tsutomu; Nishimura, Hironobu; Okada, Yoko; Orito, Reiko; Sekiya, Hiroyuki; Shirahata, Takashi; Takada, Atsushi; Tanimori, Toru; Ueno, Kazuki

    2007-10-01

    We have developed the Electron Tracking Compton Camera (ETCC), which reconstructs the 3-D tracks of the recoil electron in the Compton process for both sub-MeV and MeV gamma rays. By measuring the directions and energies of both the scattered gamma ray and the recoil electron, the direction of the incident gamma ray is determined for each individual photon. Furthermore, the measured angle between the recoil electron and the scattered gamma ray is quite powerful for kinematical background rejection. For the 3-D tracking of the electrons, the Micro Time Projection Chamber (µ-TPC) was developed using a new type of micro-pattern gas detector. The ETCC consists of this µ-TPC (10×10×8 cm³) and 6×6×13 mm³ GSO crystal pixel arrays with a flat-panel photomultiplier surrounding the µ-TPC for detecting the scattered gamma rays. The ETCC provided an angular resolution of 6.6° (FWHM) at 364 keV for 131I. A mobile ETCC for medical imaging, fabricated in a 1 m cubic box, has been operated since October 2005. Here, we present imaging results for line sources and a phantom of the human thyroid gland using 364 keV gamma rays from 131I.
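The per-photon reconstruction rests on Compton kinematics; a sketch using the standard formulas (not code from the paper), recovering the scatter angle from the two measured energies:

```python
import math

ME_C2_KEV = 511.0  # electron rest energy, keV

def scattered_gamma_energy(e_kev, theta_rad):
    """Compton formula: energy of the gamma scattered through angle theta."""
    return e_kev / (1.0 + (e_kev / ME_C2_KEV) * (1.0 - math.cos(theta_rad)))

def scatter_angle(e_kev, e_scattered_kev):
    """Invert the Compton formula: scatter angle from the two measured
    energies; the recoil-electron energy is e_kev - e_scattered_kev."""
    cos_t = 1.0 - ME_C2_KEV * (1.0 / e_scattered_kev - 1.0 / e_kev)
    return math.acos(cos_t)

# A 364 keV (I-131) photon scattering at 60 degrees:
e_sc = scattered_gamma_energy(364.0, math.radians(60.0))
print(round(e_sc, 1), round(math.degrees(scatter_angle(364.0, e_sc)), 1))  # 268.4 60.0
```

Tracking the electron direction as well, as the ETCC does, over-constrains this kinematic triangle, which is what enables the background rejection described above.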

  20. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; Li, J.-Y.; Pieters, C. M.; Gaffey, M.; Mittlefehldt, D.; Buratti, B.; Hicks, M.; McCord, T.; Combe, J.-P.; DeSantis, M. C.; Russell, C. T.; Raymond, C. A.; Marques, P. Gutierrez; Maue, T.; Hall, I.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during the Survey orbit (approx. 3000 km) and the High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.

  1. Fire service and first responder thermal imaging camera (TIC) advances and standards

    NASA Astrophysics Data System (ADS)

    Konsin, Lawrence S.; Nixdorff, Stuart

    2007-04-01

    Fire Service and First Responder Thermal Imaging Camera (TIC) applications are growing, saving lives and preventing injury and property damage. Firefighters face a wide range of serious hazards. TICs help mitigate the risks by protecting Firefighters and preventing injury, while reducing time spent fighting the fire and resources needed to do so. Most fire safety equipment is covered by performance standards. Fire TICs, however, are not covered by such standards and are also subject to inadequate operational performance and insufficient user training. Meanwhile, advancements in Fire TICs and lower costs are driving product demand. The need for a Fire TIC Standard was spurred in late 2004 through a Government sponsored Workshop where experts from the First Responder community, component manufacturers, firefighter training, and those doing research on TICs discussed strategies, technologies, procedures, best practices and R&D that could improve Fire TICs. The workshop identified pressing image quality, performance metrics, and standards issues. Durability and ruggedness metrics and standard testing methods were also seen as important, as was TIC training and certification of end-users. A progress report on several efforts in these areas and their impact on the IR sensor industry will be given. This paper is a follow up to the SPIE Orlando 2004 paper on Fire TIC usage (entitled Emergency Responders' Critical Infrared) which explored the technological development of this IR industry segment from the viewpoint of the end user, in light of the studies and reports that had established TICs as a mission critical tool for firefighters.

  2. The JANUS camera onboard JUICE mission for Jupiter system optical imaging

    NASA Astrophysics Data System (ADS)

    Della Corte, Vincenzo; Schmitz, Nicole; Zusi, Michele; Castro, José Maria; Leese, Mark; Debei, Stefano; Magrin, Demetrio; Michalik, Harald; Palumbo, Pasquale; Jaumann, Ralf; Cremonese, Gabriele; Hoffmann, Harald; Holland, Andrew; Lara, Luisa Maria; Fiethe, Björn; Friso, Enrico; Greggio, Davide; Herranz, Miguel; Koncz, Alexander; Lichopoj, Alexander; Martinez-Navajas, Ignacio; Mazzotta Epifani, Elena; Michaelis, Harald; Ragazzoni, Roberto; Roatsch, Thomas; Rodrigo, Julio; Rodriguez, Emilio; Schipani, Pietro; Soman, Matthew; Zaccariotto, Mirco

    2014-08-01

    JANUS (Jovis, Amorum ac Natorum Undique Scrutator) is the visible camera selected for the ESA JUICE mission to the Jupiter system. Resource constraints, spacecraft characteristics, mission design, environment, and the great variability of observing conditions for several targets put stringent constraints on the instrument architecture. In addition to the usual requirements for a planetary mission, the problem of mass and power consumption is particularly stringent due to the long cruise and operations at large distance from the Sun. The JANUS design must cope with a wide range of targets, from Jupiter's atmosphere to solid satellite surfaces, exospheres, rings, and lightning, all to be observed in several color and narrow-band filters. All targets shall be tracked during the mission, and in some specific cases DTMs will be derived from stereo imaging. The mission design allows a quite long time range for observations in the Jupiter system, with orbits around Jupiter and multiple fly-bys of satellites for 2.5 years, followed by about 6 months in orbit around Ganymede at distances from the surface varying from 10⁴ km down to a few hundred km. Our concept is based on a single optical channel, fine-tuned to cover all scientific objectives from low- to high-resolution imaging. A catoptric telescope with excellent optical quality is coupled with a rectangular detector, avoiding any scanning mechanism. In this paper the present JANUS design and its foreseen scientific capabilities are discussed.

  3. Factors affecting the repeatability of gamma camera calibration for quantitative imaging applications using a sealed source

    NASA Astrophysics Data System (ADS)

    Anizan, N.; Wang, H.; Zhou, X. C.; Wahl, R. L.; Frey, E. C.

    2015-02-01

    Several applications in nuclear medicine require absolute activity quantification of single photon emission computed tomography images. Obtaining a repeatable calibration factor that converts voxel values to activity units is essential for these applications. Because source preparation and measurement of the source activity using a radionuclide activity meter are potential sources of variability, this work investigated instrumentation and acquisition factors affecting repeatability using planar acquisition of sealed sources. The calibration factor was calculated for different acquisition and geometry conditions to evaluate the effect of the source size, lateral position of the source in the camera field-of-view (FOV), source-to-camera distance (SCD), and variability over time using sealed Ba-133 sources. A small region of interest (ROI) based on the source dimensions and collimator resolution was investigated to decrease the background effect. A statistical analysis with a mixed-effects model was used to evaluate quantitatively the effect of each variable on the global calibration factor variability. A variation of 1 cm in the measurement of the SCD from the assumed distance of 17 cm led to a variation of 1-2% in the calibration factor measurement using a small disc source (0.4 cm diameter) and less than 1% with a larger rod source (2.9 cm diameter). The lateral position of the source in the FOV and the variability over time had small impacts on calibration factor variability. The residual error component was well estimated by Poisson noise. Repeatability of better than 1% in a calibration factor measurement using a planar acquisition of a sealed source can be reasonably achieved. The best reproducibility was obtained with the largest source with a count rate much higher than the average background in the ROI, and when the SCD was positioned within 5 mm of the desired position. In this case, calibration source variability was limited by the quantum noise.
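The calibration factor itself is simple count arithmetic; a sketch with hypothetical numbers (my variable names and units), including the Poisson quantum-noise term the study found to dominate the residual, with the background assumed well characterized:

```python
import math

def calibration_factor(roi_counts, bkg_counts, activity_mbq, t_s):
    """Counts-per-second per MBq from a planar acquisition of a sealed source."""
    net_cps = (roi_counts - bkg_counts) / t_s
    return net_cps / activity_mbq

def poisson_rel_error(roi_counts, bkg_counts):
    """Relative quantum-noise error on the net counts (background variance
    neglected, i.e. background much smaller than the ROI counts)."""
    return math.sqrt(roi_counts) / (roi_counts - bkg_counts)

# Hypothetical acquisition: 200k ROI counts, 2k background, 30 MBq, 10 min.
cf = calibration_factor(200_000, 2_000, 30.0, 600.0)
print(round(cf, 2), round(100 * poisson_rel_error(200_000, 2_000), 2))  # 11.0 cps/MBq, 0.23 %
```

With counts this far above background, the quantum-noise term sits well under the 1% repeatability target quoted above.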

  4. Factors affecting the repeatability of gamma camera calibration for quantitative imaging applications using a sealed source.

    PubMed

    Anizan, N; Wang, H; Zhou, X C; Wahl, R L; Frey, E C

    2015-02-01

    Several applications in nuclear medicine require absolute activity quantification of single photon emission computed tomography images. Obtaining a repeatable calibration factor that converts voxel values to activity units is essential for these applications. Because source preparation and measurement of the source activity using a radionuclide activity meter are potential sources of variability, this work investigated instrumentation and acquisition factors affecting repeatability using planar acquisition of sealed sources. The calibration factor was calculated for different acquisition and geometry conditions to evaluate the effect of the source size, lateral position of the source in the camera field-of-view (FOV), source-to-camera distance (SCD), and variability over time using sealed Ba-133 sources. A small region of interest (ROI) based on the source dimensions and collimator resolution was investigated to decrease the background effect. A statistical analysis with a mixed-effects model was used to evaluate quantitatively the effect of each variable on the global calibration factor variability. A variation of 1 cm in the measurement of the SCD from the assumed distance of 17 cm led to a variation of 1-2% in the calibration factor measurement using a small disc source (0.4 cm diameter) and less than 1% with a larger rod source (2.9 cm diameter). The lateral position of the source in the FOV and the variability over time had small impacts on calibration factor variability. The residual error component was well estimated by Poisson noise. Repeatability of better than 1% in a calibration factor measurement using a planar acquisition of a sealed source can be reasonably achieved. The best reproducibility was obtained with the largest source with a count rate much higher than the average background in the ROI, and when the SCD was positioned within 5 mm of the desired position. In this case, calibration source variability was limited by the quantum noise. PMID:25592130

  5. A new solution for camera calibration and real-time image distortion correction in medical endoscopy-initial technical evaluation.

    PubMed

    Melo, Rui; Barreto, João P; Falcão, Gabriel

    2012-03-01

    Medical endoscopy is used in a wide variety of diagnostic and surgical procedures. These procedures are renowned for the difficulty of orienting the camera and instruments inside the human body cavities. The small size of the lens causes radial distortion of the image, which hinders the navigation process and leads to errors in depth perception and object morphology. This article presents a complete software-based system to calibrate and correct the radial distortion in clinical endoscopy in real time. Our system can be used with any type of medical endoscopic technology, including oblique-viewing endoscopes and HD image acquisition. The initial camera calibration is performed in an unsupervised manner from a single checkerboard pattern image. For oblique-viewing endoscopes the changes in calibration during operation are handled by a new adaptive camera projection model and an algorithm that infer the rotation of the probe lens using only image information. The workload is distributed across the CPU and GPU through an optimized CPU+GPU hybrid solution. This enables real-time performance, even for HD video inputs. The system is evaluated for different technical aspects, including accuracy of modeling and calibration, overall robustness, and runtime profile. The contributions are highly relevant for applications in computer-aided surgery and image-guided intervention such as improved visualization by image warping, 3-D modeling, and visual SLAM. PMID:22127990
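The core correction can be sketched with a one-parameter radial model inverted by fixed-point iteration; the article's adaptive projection model and CPU+GPU pipeline are far richer than this illustrative minimum:

```python
import numpy as np

def distort(pts, k1, center):
    """Forward radial model: x_d = c + (x_u - c) * (1 + k1 * r_u^2)."""
    d = pts - center
    r2 = (d ** 2).sum(axis=-1, keepdims=True)
    return center + d * (1.0 + k1 * r2)

def undistort(pts, k1, center, iters=25):
    """Invert the radial model by fixed-point iteration (there is no closed
    form): repeatedly divide by the factor evaluated at the current estimate."""
    d = pts - center
    u = d.copy()
    for _ in range(iters):
        r2 = (u ** 2).sum(axis=-1, keepdims=True)
        u = d / (1.0 + k1 * r2)
    return center + u

center = np.array([320.0, 240.0])
pts = np.array([[100.0, 80.0], [500.0, 400.0]])
round_trip = undistort(distort(pts, 2e-7, center), 2e-7, center)
print(np.allclose(round_trip, pts, atol=1e-6))  # True
```

In practice the per-pixel inverse mapping is computed once into a lookup table and applied per frame, which is the kind of workload the article offloads to the GPU.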

  6. Streak camera time calibration procedures

    NASA Technical Reports Server (NTRS)

    Long, J.; Jackson, I.

    1978-01-01

    Time calibration procedures for streak cameras utilizing a modulated laser beam are described. The time calibration established a writing-rate accuracy of 0.15% with a rotating-mirror camera and 0.3% with an image-converter camera.

  7. Development of a pixelated GSO gamma camera system with tungsten parallel hole collimator for single photon imaging

    SciTech Connect

    Yamamoto, S.; Watabe, H.; Kanai, Y.; Shimosegawa, E.; Hatazawa, J.

    2012-02-15

    Purpose: In small animal imaging using a single photon emitting radionuclide, a high resolution gamma camera is required. Recently, position sensitive photomultiplier tubes (PSPMTs) with high quantum efficiency have been developed. By combining these with nonhygroscopic scintillators with a relatively low light output, a high resolution gamma camera can become useful for low energy gamma photons. Therefore, the authors developed a gamma camera by combining a pixelated Ce-doped Gd₂SiO₅ (GSO) block with a high quantum efficiency PSPMT. Methods: GSO was selected for the scintillator because it is not hygroscopic and does not contain any natural radioactivity. An array of 1.9 mm × 1.9 mm × 7 mm individual GSO crystal elements was constructed. These GSOs were combined with a 0.1-mm thick reflector to form a 22×22 matrix and optically coupled to a high quantum efficiency PSPMT (H8500C-100 MOD8). The GSO gamma camera was encased in a tungsten gamma-ray shield with a tungsten pixelated parallel hole collimator, and the basic performance was measured for Co-57 gamma photons (122 keV). Results: In a two-dimensional position histogram, all pixels were clearly resolved. The energy resolution was ~15% FWHM. With the 20-mm thick tungsten pixelated collimator, the spatial resolution was 4.4-mm FWHM at 40 mm from the collimator surface, and the sensitivity was ~0.05%. Phantom and small animal images were successfully obtained with our developed gamma camera. Conclusions: These results confirmed that the developed pixelated GSO gamma camera has potential as an effective instrument for low energy gamma photon imaging.

  8. On-Orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R.; Robinson, M. S.

    2013-12-01

    Lunar Reconnaissance Orbiter (LRO) is equipped with a single Wide Angle Camera (WAC) [1] designed to collect monochromatic and multispectral observations of the lunar surface. Cartographically accurate image mosaics and stereo-image-based terrain models require that the position of each pixel in a given image be known relative to a corresponding point on the lunar surface with a high degree of accuracy and precision. The Lunar Reconnaissance Orbiter Camera (LROC) team initially characterized the WAC geometry prior to launch at the Malin Space Science Systems calibration facility. After lunar orbit insertion, the LROC team recognized spatially varying geometric offsets between color bands. These misregistrations made analysis of the color data problematic and showed that refinements to the pre-launch geometric analysis were necessary. The geometric parameters that define the WAC optical system were characterized from statistics gathered from co-registering over 84,000 image pairs. For each pair, we registered all five visible WAC bands to a precisely rectified Narrow Angle Camera (NAC) image (accuracy <15 m) [2] to compute key geometric parameters. In total, we registered 2,896 monochrome and 1,079 color WAC observations to nearly 34,000 NAC observations and collected over 13.7 million data points across the visible portion of the WAC CCD. Using the collected statistics, we refined the relative pointing (yaw, pitch, and roll), effective focal length, principal point coordinates, and radial distortion coefficients. This large dataset also revealed spatial offsets between bands after orthorectification due to chromatic aberrations in the optical system. As white light enters the optical system, the light bends at different magnitudes as a function of wavelength, causing a single incident ray to disperse in a spectral spread of color [3,4]. This lateral chromatic aberration effect, also known as 'chromatic difference in magnification' [5], introduces variation in the effective focal length for each WAC band. Secondly, tangential distortions caused by minor decentering in the optical system altered the derived exterior orientation parameters for each 14-line WAC band. We computed the geometric parameter sets separately for each band to characterize the lateral chromatic aberrations and the decentering components in the WAC optical system. With this approach, we negated the need for additional tangential terms in the distortion model, thus reducing the number of computations during image orthorectification and thereby expediting the orthorectification process. We undertook a similar process to refine the geometry for the UV bands (321 and 360 nm), except that we registered each UV band to the orthorectified visible bands of the same WAC observation (the visible bands have four times the resolution of the UV). The resulting 7-band camera model with refined geometric parameters enables map projection with sub-pixel accuracy. References: [1] Robinson et al. (2010) Space Sci. Rev. 150, 81-124 [2] Wagner et al. (2013) Lunar Sci Forum [3] Mahajan, V.N. (1998) Optical Imaging and Aberrations [4] Fiete, R.D. (2013), Manual of Photogrammetry, pp. 359-450 [5] Brown, D.C. (1966) Photogrammetric Eng. 32, 444-462.
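To first order, the 'chromatic difference in magnification' reduces to a per-band focal-length scale about the principal point; the focal lengths below are hypothetical illustrations, not the WAC's actual calibration values:

```python
def band_registration_offset_px(x_px, f_ref_mm, f_band_mm):
    """Lateral chromatic aberration as a magnification difference: a feature
    at x_px pixels from the principal point in the reference band appears at
    x_px * f_band / f_ref in another band, so the misregistration grows
    linearly with field position."""
    return x_px * (f_band_mm / f_ref_mm - 1.0)

# Hypothetical numbers: a 0.1% focal-length difference between two bands
# shifts a feature 500 px off-axis by half a pixel.
print(round(band_registration_offset_px(500.0, 6.000, 6.006), 2))  # 0.5
```

This is why fitting an effective focal length per band, as described above, can absorb the band-to-band offsets without extra distortion terms.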

  9. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  10. CCD Camera

    DOEpatents

    Roth, Roger R. (Minnetonka, MN)

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  11. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna.

    PubMed

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig

    2015-01-01

    Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km² in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and by 2013 had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final 'consensus' dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743
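The aggregation step can be sketched as a plurality vote with an agreement score. This is a simplified stand-in for the paper's algorithm, assuming each volunteer submits one species label per image:

```python
from collections import Counter

def consensus(answers):
    """Reduce one image's volunteer answers to a plurality label plus the
    fraction of volunteers who agreed with it (a simplified stand-in for
    the paper's aggregation algorithm)."""
    counts = Counter(answers)
    species, votes = counts.most_common(1)[0]
    return species, votes / len(answers)
```

The agreement fraction plays the role of the paper's "measure of agreement among individual answers": a value near 1.0 marks an easy image, while a low value flags an image that may need expert review.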

  12. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna

    PubMed Central

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Simpson, Robert; Smith, Arfon; Packer, Craig

    2015-01-01

    Camera traps can be used to address large-scale questions in community ecology by providing systematic data on an array of wide-ranging species. We deployed 225 camera traps across 1,125 km² in Serengeti National Park, Tanzania, to evaluate spatial and temporal inter-species dynamics. The cameras have operated continuously since 2010 and by 2013 had accumulated 99,241 camera-trap days and produced 1.2 million sets of pictures. Members of the general public classified the images via the citizen-science website www.snapshotserengeti.org. Multiple users viewed each image and recorded the species, number of individuals, associated behaviours, and presence of young. Over 28,000 registered users contributed 10.8 million classifications. We applied a simple algorithm to aggregate these individual classifications into a final ‘consensus’ dataset, yielding a final classification for each image and a measure of agreement among individual answers. The consensus classifications and raw imagery provide an unparalleled opportunity to investigate multi-species dynamics in an intact ecosystem and a valuable resource for machine-learning and computer-vision research. PMID:26097743

  13. Evaluating intensified camera systems

    SciTech Connect

    S. A. Baker

    2000-07-01

    This paper describes image evaluation techniques used to standardize camera system characterizations. Key areas of performance include resolution, noise, and sensitivity. This team has developed a set of analysis tools, in the form of image processing software used to evaluate camera calibration data, to aid an experimenter in measuring a set of camera performance metrics. These performance metrics identify capabilities and limitations of the camera system, while establishing a means for comparing camera systems. Analysis software is used to evaluate digital camera images recorded with charge-coupled device (CCD) cameras. Several types of intensified camera systems are used in the high-speed imaging field. Electro-optical components are used to provide precise shuttering or optical gain for a camera system. These components, including microchannel-plate or proximity-focused diode image intensifiers, electrostatic image tubes, and electron-bombarded CCDs, affect system performance. It is important to quantify camera system performance in order to qualify a system as meeting experimental requirements. The camera evaluation tool is designed to provide side-by-side camera comparison and system modeling information.
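To give a flavor of the kind of metric such analysis software computes, a signal-to-noise estimate over a uniformly illuminated calibration patch might look like the following sketch (the paper's exact metric definitions are not given in the abstract):

```python
import statistics

def patch_snr(pixels):
    """Signal-to-noise ratio of a uniformly illuminated calibration patch:
    mean signal divided by its standard deviation. One common camera
    metric; the paper's exact definitions are not given in the abstract."""
    return statistics.fmean(pixels) / statistics.stdev(pixels)
```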

  14. Morphology of the Venus clouds from the imaging by Venus Monitoring Camera onboard Venus Express

    NASA Astrophysics Data System (ADS)

    Titov, D. V.; Markiewicz, W. J.; Moissl, R.; Ignatiev, N.; Limaye, S.; Khatuntsev, I.; Roatsch, Th.; Almeida, M.

    2008-09-01

    For more than 2 years, the Venus Monitoring Camera (VMC) onboard ESA's Venus Express has collected images of Venus ranging from global views at a resolution of ~50 km to close-up snapshots resolving features of a few hundred meters. The UV filter is centered at a characteristic wavelength of the unknown UV absorber (365 nm) and allows one to study the morphology of the cloud tops, which bears information about dynamical processes and the distribution of the UV absorber. Low latitudes (< 40 deg) are dominated by relatively dark clouds with a mottled and fragmented appearance, clearly indicating convective activity in the sub-solar region. At ~50 degrees latitude this pattern gives way to streaky clouds, suggesting that horizontal flow prevails there. Poleward of ~60 degrees the planet is covered by an almost featureless bright polar hood, sometimes crossed by dark thin (~300 km) spiral or circular structures. This global cloud pattern changes on time scales of a few days, resulting in so-called "brightening events" in which the bright haze can extend far into low latitudes. The cloud pattern shows remarkable diurnal variability. The afternoon sector of the planet shows strongly developed traces of turbulence, in contrast to the atmosphere in the morning. Also, the bright hood extends further toward low latitudes in the morning than in the evening. We will present latitudinal, diurnal, and temporal variations based on two years of VMC observations. Imaging of streaky clouds in the middle and high latitudes provides a tool to study the wind pattern. We will also present preliminary results on cloud streak orientations derived from the VMC images.

  15. Biophysical control of intertidal benthic macroalgae revealed by high-frequency multispectral camera images

    NASA Astrophysics Data System (ADS)

    van der Wal, Daphne; van Dalen, Jeroen; Wielemaker-van den Dool, Annette; Dijkstra, Jasper T.; Ysebaert, Tom

    2014-07-01

    Intertidal benthic macroalgae are a biological quality indicator in estuaries and coasts. While remote sensing has been applied to quantify the spatial distribution of such macroalgae, it is generally not used for their monitoring. We examined the day-to-day and seasonal dynamics of macroalgal cover on a sandy intertidal flat using visible and near-infrared images from a time-lapse camera mounted on a tower. Benthic algae were identified using supervised, semi-supervised and unsupervised classification techniques, validated with monthly ground-truthing over one year. A supervised classification (based on maximum likelihood, using training areas identified in the field) performed best in discriminating between sediment, benthic diatom films and macroalgae, with the highest spectral separability between macroalgae and diatoms in spring/summer. An automated unsupervised classification (based on the Normalised Difference Vegetation Index, NDVI) allowed detection of daily changes in macroalgal coverage without the need for calibration. This method showed a bloom of macroalgae (filamentous green algae, Ulva sp.) in summer with >60% cover, but with pronounced superimposed day-to-day variation in cover. Waves were a major factor in regulating macroalgal cover, but regrowth of the thalli after a summer storm was fast (2 weeks). Images and in situ data demonstrated that the protruding tubes of the polychaete Lanice conchilega facilitated both settlement (anchorage) and survival (resistance to waves) of the macroalgae. Thus, high-frequency, high-resolution images revealed the mechanisms regulating the dynamics of macroalgal cover and its spatial structuring. Ramifications for the mode, timing, frequency and evaluation of monitoring macroalgae by field and remote sensing surveys are discussed.
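The NDVI-based unsupervised classification can be sketched per pixel as follows; the 0.3 cutoff is illustrative, not the study's calibrated value:

```python
def classify_ndvi(red, nir, veg_threshold=0.3):
    """Label one pixel from its red and near-infrared reflectance using
    NDVI = (NIR - R) / (NIR + R). The 0.3 cutoff is illustrative, not
    the study's calibrated value."""
    ndvi = (nir - red) / (nir + red)
    return "vegetation" if ndvi > veg_threshold else "sediment"
```

Because NDVI is a ratio of band differences, it is largely insensitive to overall illumination, which is what lets this approach track daily changes without recalibration.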

  16. Retrieval of sulphur dioxide from a ground-based thermal infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.; Bernardo, C.

    2014-02-01

    Recent advances in uncooled detector technology now offer the possibility of using relatively inexpensive thermal (7 to 14 μm) imaging devices as tools for studying and quantifying the behaviour of hazardous gases and particulates in atmospheric plumes. An experimental fast-sampling (60 Hz) ground-based uncooled thermal imager (Cyclops), operating with four spectral channels at central wavelengths of 8.6, 10, 11, and 12 μm and one broadband channel (7-14 μm), has been tested at several volcanoes and at two industrial sites, where SO2 was a major constituent of the plumes. This paper presents new algorithms, which include atmospheric corrections to the data and better calibrations, to show that SO2 slant column density can be reliably detected and quantified. Our results indicate that it is relatively easy to identify and discriminate SO2 in plumes, but more challenging to quantify the column densities. A full description of the retrieval algorithms, illustrative results and a detailed error analysis are provided. The noise-equivalent temperature difference (NEΔT) of the spectral channels, a fundamental measure of the quality of the measurements, lies between 0.4 and 0.8 K, resulting in slant column density errors of 20%. Frame averaging and improved NEΔTs can reduce this error to less than 10%, making stand-off, day or night operation of an instrument of this type very practical both for monitoring industrial SO2 emissions and for SO2 column density and emission measurements at active volcanoes. The imaging camera system may also be used to study thermal radiation from meteorological clouds and from the atmosphere.

  17. Retrieval of sulfur dioxide from a ground-based thermal infrared imaging camera

    NASA Astrophysics Data System (ADS)

    Prata, A. J.; Bernardo, C.

    2014-09-01

    Recent advances in uncooled detector technology now offer the possibility of using relatively inexpensive thermal (7 to 14 μm) imaging devices as tools for studying and quantifying the behaviour of hazardous gases and particulates in atmospheric plumes. An experimental fast-sampling (60 Hz) ground-based uncooled thermal imager (Cyclops), operating with four spectral channels at central wavelengths of 8.6, 10, 11 and 12 μm and one broadband channel (7-14 μm), has been tested at several volcanoes and at an industrial site, where SO2 was a major constituent of the plumes. This paper presents new algorithms, which include atmospheric corrections to the data and better calibrations, to show that SO2 slant column density can be reliably detected and quantified. Our results indicate that it is relatively easy to identify and discriminate SO2 in plumes, but more challenging to quantify the column densities. A full description of the retrieval algorithms, illustrative results and a detailed error analysis are provided. The noise-equivalent temperature difference (NEΔT) of the spectral channels, a fundamental measure of the quality of the measurements, lies between 0.4 and 0.8 K, resulting in slant column density errors of 20%. Frame averaging and improved NEΔTs can reduce this error to less than 10%, making stand-off, day or night operation of an instrument of this type very practical both for monitoring industrial SO2 emissions and for SO2 column density and emission measurements at active volcanoes. The imaging camera system may also be used to study thermal radiation from meteorological clouds and the atmosphere.
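The error reduction from frame averaging follows the usual 1/√N scaling of uncorrelated noise, sketched here under the assumption that noise is independent between frames:

```python
import math

def nedt_after_averaging(nedt_single_frame, n_frames):
    """Effective noise-equivalent temperature difference after averaging
    n_frames frames: uncorrelated random noise falls as 1/sqrt(N).
    (Fixed-pattern noise, which does not average down, is ignored.)"""
    return nedt_single_frame / math.sqrt(n_frames)

# Averaging 16 frames reduces a 0.8 K NEdT to 0.2 K.
```

At the instrument's 60 Hz sampling rate, even a fraction of a second of frames is enough for a substantial reduction, consistent with the abstract's claim that averaging can halve the column density error.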

  18. Temporal resolved x-ray penumbral imaging technique using heuristic image reconstruction procedure and wide dynamic range x-ray streak camera

    SciTech Connect

    Fujioka, Shinsuke; Shiraga, Hiroyuki; Azechi, Hiroshi; Nishimura, Hiroaki; Izawa, Yasukazu; Nozaki, Shinya; Chen, Yen-wei

    2004-10-01

    Temporally resolved x-ray penumbral imaging has been developed using an image reconstruction procedure based on the heuristic method and a wide dynamic range x-ray streak camera (XSC). The reconstruction procedure of penumbral imaging is inherently intolerant of noise: a reconstructed image is strongly distorted by artifacts caused by noise in the penumbral image. Statistical fluctuation in the number of detected photons is the dominant source of noise in an x-ray image; however, the acceptable brightness of an image is limited by the dynamic range of an XSC. The wide dynamic range XSC was used to obtain penumbral images bright enough to be reconstructed. Additionally, the heuristic method was introduced into the penumbral image reconstruction procedure. Distortion of the reconstructed images is sufficiently suppressed by these improvements. Density profiles of laser-driven brominated plastic and tin plasmas were measured with this technique.

  19. Test of Compton camera components for prompt gamma imaging at the ELBE bremsstrahlung beam

    NASA Astrophysics Data System (ADS)

    Hueso-González, F.; Golnik, C.; Berthel, M.; Dreyer, A.; Enghardt, W.; Fiedler, F.; Heidel, K.; Kormoll, T.; Rohling, H.; Schöne, S.; Schwengner, R.; Wagner, A.; Pausch, G.

    2014-05-01

    In the context of ion beam therapy, particle range verification is a major challenge for the quality assurance of the treatment. One approach is the measurement of the prompt gamma rays resulting from the tissue irradiation. A Compton camera based on several position-sensitive gamma ray detectors, together with an imaging algorithm, is expected to reconstruct the prompt gamma ray emission density map, which is correlated with the dose distribution. At OncoRay and Helmholtz-Zentrum Dresden-Rossendorf (HZDR), a Compton camera setup is being developed consisting of two scatter planes (two CdZnTe (CZT) cross-strip detectors) and an absorber (one Lu2SiO5 (LSO) block detector). The data acquisition is based on VME electronics and handled by software developed on the ROOT framework. The setup has been tested at the linear electron accelerator ELBE at HZDR, which is used in this experiment to produce bunched bremsstrahlung photons with energies up to 12.5 MeV and a repetition rate of 13 MHz. Their spectrum has similarities with the shape expected from prompt gamma rays in the clinical environment, and the flux is also bunched with the accelerator frequency. The charge sharing effect of the CZT detector is studied qualitatively for different energy ranges. The LSO detector pixel discrimination resolution is analyzed and shows a trend toward improvement at high energy depositions. The time correlation between the pulsed prompt photons and the measured detector signals, to be used for background suppression, exhibits a time resolution of 3 ns FWHM for the CZT detector and of 2 ns for the LSO detector. A time walk correction and pixel-wise calibration is applied for the LSO detector, improving its resolution to 630 ps. In conclusion, the detector setup is suitable for time-resolved background suppression in pulsed clinical particle accelerators. Ongoing tasks are the quantitative comparison with simulations and the test of imaging algorithms.
Experiments at proton accelerators have also been performed and are currently under analysis.

  20. A high resolution Small Field Of View (SFOV) gamma camera: a columnar scintillator coated CCD imager for medical applications

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bassford, D. J.; Blake, O. E.; Blackshaw, P. E.; Perkins, A. C.

    2011-12-01

    We describe a high resolution, small field of view (SFOV), Charge Coupled Device (CCD) based camera for imaging small volumes of radionuclide uptake in tissues. The Mini Gamma Ray Camera (MGRC) is a collimated, scintillator-coated, low cost, high performance imager using low noise CCDs. The prototype MGRC has a 600 μm thick layer of columnar CsI(Tl) and operates in photon counting mode, using a thermoelectric cooler to achieve an operating temperature of -10°C. Collimation was performed using a pinhole collimator. We have measured the spatial resolution, energy resolution and efficiency using a number of radioisotope sources, including 140 keV gamma-rays from 99mTc in a specially designed phantom. We also describe our first imaging of a volunteer patient.

  1. Removing cosmic-ray hits from multiorbit HST Wide Field Camera images

    NASA Technical Reports Server (NTRS)

    Windhorst, Rogier A.; Franklin, Barbara E.; Neuschaefer, Lyman W.

    1994-01-01

    We present an optimized algorithm that removes cosmic rays ('CRs') from multiorbit Hubble Space Telescope (HST) Wide Field/Planetary Camera ('WF/PC') images. It computes the image noise in every iteration from the WF/PC CCD equation, which includes all known sources of random and systematic calibration errors. We test this algorithm on WF/PC stacks of 2-12 orbits as a function of the number of available orbits and the formal Poissonian sigma-clipping level. We find that the algorithm needs ≥4 WF/PC exposures to locate the minimal sky signal (which is noticeably affected by CRs), with an optimal clipping level at 2-2.5 × σ(Poisson). We analyze the CR flux detected on multiorbit 'CR stacks,' which are constructed by subtracting the best CR-filtered images from the unfiltered 8-12 orbit average. We use an automated object finder to determine the surface density of CRs as a function of the apparent magnitude (or ADU flux) they would have generated in the images had they not been removed. The power-law slope of the CR 'counts' (γ ≈ 0.6 for N(m) ∝ m^γ) is steeper than that of the faint galaxy counts down to V ≈ 28 mag. The CR counts show a drop-off between V ≈ 28 and V ≈ 30 mag (the latter is our formal 2σ point-source sensitivity without spherical aberration). This prevents the CR sky integral from diverging, and is likely due to a real cutoff in the CR energy distribution below ≈11 ADU per orbit. The integral CR surface density is ≲10^8 per sq. deg, and their sky signal is V ≈ 25.5-27.0 mag/sq. arcsec, or 3%-13% of our NEP sky background (V = 23.3 mag/sq. arcsec), and well above the EBL integral of the deepest galaxy counts (B_J ≈ 28.0 mag/sq. arcsec). We conclude that faint CRs will always contribute to the sky signal in the deepest WF/PC images. Since WFPC2 has approximately 2.7× lower read noise and a thicker CCD, this will result in more CR detections than in WF/PC, potentially affecting approximately 10%-20% of the pixels in multiorbit WFPC2 data cubes.
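The per-pixel sigma-clipping stack at the heart of such CR-removal algorithms can be sketched as follows; this simplified version estimates the noise robustly from the median absolute deviation rather than from the full WF/PC CCD equation the paper describes:

```python
import statistics

def sigma_clip_stack(exposures, nsigma=2.5):
    """Combine co-aligned exposures pixel by pixel, rejecting values that
    lie more than nsigma * sigma above the per-pixel median (cosmic-ray
    hits only ever add signal). Sigma is estimated robustly from the
    median absolute deviation, a stand-in for the paper's CCD equation."""
    stacked = []
    for pixel_values in zip(*exposures):
        med = statistics.median(pixel_values)
        mad = statistics.median(abs(v - med) for v in pixel_values)
        sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma
        kept = [v for v in pixel_values if v - med <= nsigma * sigma]
        stacked.append(statistics.fmean(kept))
    return stacked
```

Consistent with the abstract's finding, at least four exposures are needed before the per-pixel median and deviation estimates become meaningful.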

  2. Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera

    NASA Technical Reports Server (NTRS)

    Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.

    1988-01-01

    The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 Å wavelength range.

  3. Miniature, vacuum compatible 1024 × 1024 CCD camera for x-ray, ultra-violet, or optical imaging

    SciTech Connect

    Conder, A.D.; Dunn, J.; Young, B.K.F.

    1994-05-01

    We have developed a very compact (60 × 60 × 75 mm³), vacuum-compatible, large-format (25 × 25 mm², 1024 × 1024 pixels) CCD camera for digital imaging of visible and ultraviolet radiation, soft to penetrating x-rays (≤20 keV), and charged particles. This camera provides a suitable replacement for film, with a linear response, dynamic range, and intrinsic signal-to-noise response superior to current x-ray film, and provides real-time access to the data. The spatial resolution of the camera (<25 μm) is similar to typical digitization slit or step sizes used in processing film data. This new large-format CCD camera has immediate applications as the recording device for streak cameras or gated microchannel plate diagnostics, or when used directly as the detector for x-ray, XUV, or optical signals. This is especially important in studying high-energy plasmas produced in pulsed-power, ICF, and high-powered laser-plasma experiments, as well as in other medical and industrial applications.

  4. Determining Sala mango qualities with the use of RGB images captured by a mobile phone camera

    NASA Astrophysics Data System (ADS)

    Yahaya, Ommi Kalsom Mardziah; Jafri, Mohd Zubir Mat; Aziz, Azlan Abdul; Omar, Ahmad Fairuz

    2015-04-01

    Sala mango (Mangifera indica) is one of Malaysia's most popular tropical fruits and is widely marketed within the country. The degree of ripeness of mangoes has conventionally been evaluated manually on the basis of color parameters, but a simple non-destructive technique using a Samsung Galaxy Note 1 mobile phone camera is introduced here. In this research, color parameters in terms of RGB values, acquired using the ENVI software system, were used to estimate Sala mango quality parameters. Features of the mango were extracted from the acquired images and then used to classify fruit skin color, which relates to the stage of ripening. A multivariate analysis method, multiple linear regression, was employed to estimate pH, soluble solids content (SSC), and firmness from the RGB color parameters. The relationship between these quality parameters of Sala mango and the mean pixel values in the RGB system is analyzed. Findings show that pH yields the highest accuracy, with a correlation coefficient R = 0.913 and root mean square error RMSE = 0.166 pH units. Meanwhile, firmness has R = 0.875 and RMSE = 1.392 kgf, whereas soluble solids content has the lowest accuracy, with R = 0.814 and RMSE = 1.218 °Brix. Therefore, this non-invasive method can be used to determine the quality attributes of mangoes.

  5. A double photomultiplier Compton camera and its readout system for mice imaging

    SciTech Connect

    Fontana, Cristiano Lino; Atroshchenko, Kostiantyn; Baldazzi, Giuseppe; Uzunov, Nikolay; Di Domenico, Giovanni

    2013-04-19

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs 'electronic collimation', i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second one (the calorimeter), in which the scattered gamma is completely absorbed. From these measurements, the polar angle and hence a 'cone' of possible incident directions are obtained (an event with 'incomplete geometry'). Different solutions for the two detectors are proposed in the literature: our design foresees two similar Position Sensitive Photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge multiplexing net of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Assets are the low cost and the simplicity of design and operation, having just one type of device; among the drawbacks is a lower resolution with respect to more sophisticated trackers and a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that its spatial and energy accuracy is suitable for the inspection of radioactivity in mice.

  6. Measurement of time varying temperature fields using visible imaging CCD cameras

    SciTech Connect

    Keanini, R.G.; Allgood, C.L.

    1996-12-31

    A method for measuring time-varying surface temperature distributions using high frame rate visible imaging CCD cameras is described. The technique is based on an ad hoc model relating measured radiance to local surface temperature. This approach is based on the fairly non-restrictive assumptions that atmospheric scattering and absorption, and secondary emission and reflection are negligible. In order to assess performance, both concurrent and non-concurrent calibration and measurement, performed under dynamic thermal conditions, are examined. It is found that measurement accuracy is comparable to the theoretical accuracy predicted for infrared-based systems. In addition, performance tests indicate that in the experimental system, real-time calibration can be achieved while real-time whole-field temperature measurements require relatively coarse spatial resolution. The principal advantages of the proposed method are its simplicity and low cost. In addition, since independent temperature measurements are used for calibration, emissivity remains unspecified, so that a potentially significant source of error is eliminated.

  7. A double photomultiplier Compton camera and its readout system for mice imaging

    NASA Astrophysics Data System (ADS)

    Fontana, Cristiano Lino; Atroshchenko, Kostiantyn; Baldazzi, Giuseppe; Bello, Michele; Uzunov, Nikolay; Di Domenico, Giovanni

    2013-04-01

    We have designed a Compton Camera (CC) to image the bio-distribution of gamma-emitting radiopharmaceuticals in mice. A CC employs "electronic collimation", i.e. a technique that traces the gamma-rays instead of selecting them with physical lead or tungsten collimators. To perform such a task, a CC measures the parameters of the Compton interaction that occurs in the device itself. At least two detectors are required: one (the tracker), where the primary gamma undergoes a Compton interaction, and a second one (the calorimeter), in which the scattered gamma is completely absorbed. From these measurements, the polar angle and hence a "cone" of possible incident directions are obtained (an event with "incomplete geometry"). Different solutions for the two detectors are proposed in the literature: our design foresees two similar Position Sensitive Photomultipliers (PMT, Hamamatsu H8500). Each PMT has 64 output channels that are reduced to 4 using a charge-multiplexed readout system, i.e. a series charge multiplexing net of resistors. Triggering of the system is provided by the coincidence of fast signals extracted at the last dynode of the PMTs. Assets are the low cost and the simplicity of design and operation, having just one type of device; among the drawbacks is a lower resolution with respect to more sophisticated trackers and a full 64-channel readout. This paper compares our two-Hamamatsu CC design to other solutions and shows that its spatial and energy accuracy is suitable for the inspection of radioactivity in mice.
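The "cone" reconstruction this abstract describes follows from the Compton scattering formula; below is a minimal sketch, assuming full absorption of the scattered gamma in the calorimeter and energies given in keV:

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0

def compton_cone_angle(e_tracker_kev, e_calorimeter_kev):
    """Opening angle (radians) of the cone of possible incident directions,
    from the energy deposited in the tracker (Compton scatter) and in the
    calorimeter (full absorption of the scattered gamma):
        cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_incident).
    Valid only for physically consistent energy pairs; a sketch, not a
    full event-reconstruction routine."""
    e_incident = e_tracker_kev + e_calorimeter_kev
    e_scattered = e_calorimeter_kev
    cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (
        1.0 / e_scattered - 1.0 / e_incident)
    return math.acos(cos_theta)
```

The axis of the cone is the line joining the two interaction points, so position-sensitive detectors plus these two energies are exactly what is needed per event.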

  8. UVUDF: Ultraviolet imaging of the Hubble ultra deep field with wide-field camera 3

    SciTech Connect

    Teplitz, Harry I.; Rafelski, Marc; Colbert, James W.; Hanish, Daniel J.; Kurczynski, Peter; Gawiser, Eric; Bond, Nicholas A.; Gardner, Jonathan P.; De Mello, Duilia F.; Grogin, Norman; Koekemoer, Anton M.; Brown, Thomas M.; Coe, Dan; Ferguson, Henry C.; Atek, Hakim; Finkelstein, Steven L.; Giavalisco, Mauro; Gronwall, Caryl; Lee, Kyoung-Soo; Ravindranath, Swara; and others

    2013-12-01

    We present an overview of a 90 orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z < 2.5; (2) probe the evolution of massive galaxies by resolving sub-galactic units (clumps); (3) examine the escape fraction of ionizing radiation from galaxies at z ≈ 2-3; (4) greatly improve the reliability of photometric redshift estimates; and (5) measure the star formation rate efficiency of neutral atomic-dominated hydrogen gas at z ≈ 1-3. In this overview paper, we describe the survey details and data reduction challenges, including both the necessity of specialized calibrations and the effects of charge transfer inefficiency. We provide a stark demonstration of the effects of charge transfer inefficiency on resultant data products, which, when uncorrected, result in uncertain photometry, elongation of morphology in the readout direction, and loss of faint sources far from the readout. We agree with the STScI recommendation that future UVIS observations that require very sensitive measurements use the instrument's capability to add background light through a 'post-flash'. Preliminary results on number counts of UV-selected galaxies and morphology of galaxies at z ≈ 1 are presented. We find that the number density of UV dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.2″ radius aperture depending on filter and observing epoch.

  9. Automated Camera Calibration

    NASA Technical Reports Server (NTRS)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
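The 3D-to-2D mapping a camera model encodes can be sketched with the ideal pinhole projection; real calibrations such as ACAL's also estimate lens distortion and exterior orientation, omitted here for brevity:

```python
def project(point3d, focal_length, cx, cy):
    """Ideal pinhole mapping from a 3D point (x, y, z) in the camera
    frame to 2D pixel coordinates, given a focal length in pixels and
    principal point (cx, cy). Real calibrations such as ACAL's also
    estimate lens distortion and exterior orientation, omitted here."""
    x, y, z = point3d
    return (focal_length * x / z + cx, focal_length * y / z + cy)
```

Calibration then amounts to choosing the model parameters that minimize the mismatch between projected fiducial positions and their measured 2D locations.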

  10. Imaging of radiocesium uptake dynamics in a plant body by using a newly developed high-resolution gamma camera.

    PubMed

    Kawachi, Naoki; Yin, Yong-Gen; Suzui, Nobuo; Ishii, Satomi; Yoshihara, Toshihiro; Watabe, Hiroshi; Yamamoto, Seiichi; Fujimaki, Shu

    2016-01-01

    We developed a new gamma camera specifically for plant nutritional research and successfully performed live imaging of the uptake and partitioning of (137)Cs in intact plants. The gamma camera was specially designed for high-energy gamma photons from (137)Cs (662 keV). To obtain reliable images, a pinhole collimator made of tungsten heavy alloy was used to reduce penetration and scattering of gamma photons. A single-crystal scintillator, Ce-doped Gd3Al2Ga3O12, with high sensitivity, no natural radioactivity, and no hygroscopicity was used. The array block of the scintillator was coupled to a high-quantum efficiency position sensitive photomultiplier tube to obtain accurate images. The completed gamma camera had a sensitivity of 0.83 count s(-1) MBq(-1) for (137)Cs with an energy window from 600 keV to 730 keV, and a spatial resolution of 23.5 mm. We used this gamma camera to study soybean plants that were hydroponically grown and fed with 2.0 MBq of (137)Cs for 6 days to visualize and investigate the transport dynamics in aerial plant parts. (137)Cs gradually appeared in the shoot several hours after feeding, and then accumulated preferentially and intensively in growing pods and seeds; very little accumulation was observed in mature leaves. Our results also suggested that this gamma-camera method may serve as a practical analyzing tool for breeding crops and improving cultivation techniques resulting in low accumulation of radiocesium into the consumable parts of plants. PMID:25959930

  11. Iterative color constancy with temporal filtering for an image sequence with no relative motion between the camera and the scene.

    PubMed

    Simão, Josemar; Schneebeli, Hans Jörg Andreas; Vassallo, Raquel Frizera

    2015-11-01

    Color constancy is the ability to perceive the color of a surface as invariant even under changing illumination. In outdoor applications, such as mobile robot navigation or surveillance, the lack of this ability harms the segmentation, tracking, and object recognition tasks. The main approaches for color constancy are generally targeted to static images and intend to estimate the scene illuminant color from the images. We present an iterative color constancy method with temporal filtering applied to image sequences in which reference colors are estimated from previous corrected images. Furthermore, two strategies to sample colors from the images are tested. The proposed method has been tested using image sequences with no relative movement between the scene and the camera. It also has been compared with known color constancy algorithms such as gray-world, max-RGB, and gray-edge. In most cases, the iterative color constancy method achieved better results than the other approaches. PMID:26560917
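    The paper's iterative temporal-filtering scheme is not reproduced here, but the gray-world baseline it is compared against is simple to sketch. The image data below are synthetic and the function name is ours:

```python
import numpy as np

def gray_world(img):
    """Gray-world correction: scale each channel so its mean
    matches the global mean intensity (the 'gray' assumption)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(img * gain, 0, 255)

# A scene under a reddish illuminant: red boosted, blue suppressed
rng = np.random.default_rng(1)
scene = rng.uniform(40, 200, (64, 64, 3))
tinted = scene * np.array([1.4, 1.0, 0.8])

corrected = gray_world(tinted)
chan_means = corrected.reshape(-1, 3).mean(axis=0)
print(np.round(chan_means, 1))  # channel means are equalized after correction
```

    The iterative method in the abstract differs in that reference colors come from previously corrected frames of the sequence rather than from a per-frame gray assumption.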

  12. A compact, discrete CsI(Tl) scintillator/Si photodiode gamma camera for breast cancer imaging

    SciTech Connect

    Gruber, Gregory J.

    2000-12-01

    Recent clinical evaluations of scintimammography (radionuclide breast imaging) are promising and suggest that this modality may prove a valuable complement to X-ray mammography and traditional breast cancer detection and diagnosis techniques. Scintimammography, however, typically has difficulty revealing tumors that are less than 1 cm in diameter, are located in the medial part of the breast, or are located in the axillary nodes. These shortcomings may in part be due to the use of large, conventional Anger cameras not optimized for breast imaging. In this thesis I present compact single photon camera technology designed specifically for scintimammography which strives to alleviate some of these limitations by allowing better and closer access to sites of possible breast tumors. Specific applications are outlined. The design is modular, thus a camera of the desired size and geometry can be constructed from an array (or arrays) of individual modules and a parallel hole lead collimator for directional information. Each module consists of: (1) an array of 64 discrete, optically-isolated CsI(Tl) scintillator crystals 3 × 3 × 5 mm³ in size, (2) an array of 64 low-noise Si PIN photodiodes matched 1-to-1 to the scintillator crystals, (3) an application-specific integrated circuit (ASIC) that amplifies the 64 photodiode signals and selects the signal with the largest amplitude, and (4) connectors and hardware for interfacing the module with a motherboard, thereby allowing straightforward computer control of all individual modules within a camera.

  13. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Hare, T.; Dorrer, E.; Cook, D.; Becker, K.; Thompson, K.; Redding, B.; Blue, J.; Galuszka, D.; Lee, E.M.; Gaddis, L.R.; Johnson, J.R.; Soderblom, L.A.; Ward, A.W.; Smith, P.H.; Britt, D.T.

    1999-01-01

    This paper describes our photogrammetric analysis of the Imager for Mars Pathfinder data, part of a broader program of mapping the Mars Pathfinder landing site in support of geoscience investigations. This analysis, carried out primarily with a commercial digital photogrammetric system, supported by our in-house Integrated Software for Imagers and Spectrometers (ISIS), consists of three steps: (1) geometric control: simultaneous solution for refined estimates of camera positions and pointing plus three-dimensional (3-D) coordinates of ~10³ features sitewide, based on the measured image coordinates of those features; (2) topographic modeling: identification of ~3 × 10⁵ closely spaced points in the images and calculation (based on camera parameters from step 1) of their 3-D coordinates, yielding digital terrain models (DTMs); and (3) geometric manipulation of the data: combination of the DTMs from different stereo pairs into a sitewide model, and reprojection of image data to remove parallax between the different spectral filters in the two cameras and to provide an undistorted planimetric view of the site. These processes are described in detail and example products are shown. Plans for combining the photogrammetrically derived topographic data with spectrophotometry are also described. These include photometric modeling using surface orientations from the DTM to study surface microtextures and improve the accuracy of spectral measurements, and photoclinometry to refine the DTM to single-pixel resolution where photometric properties are sufficiently uniform. Finally, the inclusion of rover images in a joint photogrammetric analysis with IMP images is described. This challenging task will provide coverage of areas hidden to the IMP, but accurate ranging of distant features can be achieved only if the lander is also visible in the rover image used. Copyright 1999 by the American Geophysical Union.

  14. Managing the Storage and Battery Resources in an Image Capture Device (Digital Camera) using Dynamic

    E-print Network

    Vahdat, Amin

    are fundamental for the mass consumer acceptance of these newer digital technologies. We show are becoming available. By contrast, 35 mm point-and-shoot cameras offer resolutions in the range of 8 consumption and dollar cost. Microdrives are expensive for mass market cameras. Floppy disks, though

  15. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  16. Automatically designing an image processing pipeline for a five-band camera prototype using the local, linear, learned (L3) method

    NASA Astrophysics Data System (ADS)

    Tian, Qiyuan; Blasinski, Henryk; Lansel, Steven; Jiang, Haomiao; Fukunishi, Munenori; Farrell, Joyce E.; Wandell, Brian A.

    2015-02-01

    The development of an image processing pipeline for each new camera design can be time-consuming. To speed camera development, we developed a method named L3 (Local, Linear, Learned) that automatically creates an image processing pipeline for any design. In this paper, we describe how we used the L3 method to design and implement an image processing pipeline for a prototype camera with five color channels. The process includes calibrating and simulating the prototype, learning local linear transforms and accelerating the pipeline using graphics processing units (GPUs).
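    As a rough, hedged sketch of the L3 idea (classify pixels by local statistics, then learn one linear transform per class), the toy below bins pixels by mean response and fits per-bin least-squares transforms. The binning rule, data, and names are invented for illustration and are not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated training data: 5-channel sensor responses -> target RGB values.
# A hidden ground-truth linear map stands in for the calibrated camera simulation.
W_true = rng.normal(size=(5, 3))
pixels = rng.uniform(0, 1, (5000, 5))           # five color channels per pixel
targets = pixels @ W_true

# "Local" classes: bin pixels by mean response level (a crude stand-in for
# classification by local statistics), then learn one linear transform
# per class by least squares.
edges = np.linspace(0, 1, 5)[1:-1]
levels = np.digitize(pixels.mean(axis=1), edges)
transforms = {}
for c in np.unique(levels):
    sel = levels == c
    transforms[c], *_ = np.linalg.lstsq(pixels[sel], targets[sel], rcond=None)

# Apply the learned pipeline to new sensor data
test = rng.uniform(0, 1, (100, 5))
cls = np.digitize(test.mean(axis=1), edges)
out = np.stack([t @ transforms[c] for t, c in zip(test, cls)])
err = np.abs(out - test @ W_true).max()
print(f"max rendering error: {err:.2e}")
```

    Because the toy ground truth is globally linear, every per-class transform recovers it exactly; the point of the real method is that the learned transforms differ across classes when the true rendering is only locally linear.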

  17. Mesospheric circulation at the cloud top level of Venus according to Venus Monitoring Camera images

    NASA Astrophysics Data System (ADS)

    Khatuntsev, Igor; Patsaeva, Marina; Ignatiev, Nikolay; Titov, Dmitri; Markiewicz, Wojciech; Turin, Alexander

    We present results of wind speed measurements at the cloud top level of Venus derived from manual cloud tracking in the UV (365 nm) and IR (965 nm) channels of the Venus Monitoring Camera Experiment (VMC) [1] on board the Venus Express mission. Cloud details have maximal contrast in the UV range. More than 90 orbits have been processed and 30000 manual vectors were obtained. The period of the observations covers more than 4 venusian years. The zonal wind speed demonstrates a local solar time dependence; possible diurnal and semidiurnal components are observed [2]. According to the averaged latitude profile of winds at the level of the upper clouds: - The zonal speed increases slightly in absolute value from 90 m/s at the equator to 105 m/s at latitude −47 degrees; - The period of zonal rotation has its maximum at the equator (5 Earth days) and its minimum (3 days) near latitude −50 degrees; poleward of the minimum, the period increases slightly toward the South pole; - The meridional speed is 0 at the equator and then increases linearly (in absolute value) up to 10 m/s at 50 degrees latitude, where "−" denotes motion from the equator to the pole; - From 50 to 80 degrees the meridional speed decreases again in absolute value toward 0. IR (965±10 nm) day-side images can also be used for wind tracking. The zonal wind speeds obtained in the low and middle latitudes are systematically lower than those derived from the UV images; the average zonal speed from IR day-side images at low and middle latitudes is about 65-70 m/s. This can be interpreted as the IR range probing deeper mesospheric layers than the UV. References: [1] Markiewicz W. J. et al. (2007) Planet. Space Sci., V. 55(12), P. 1701-1711. [2] Moissl R. et al. (2008) J. Geophys. Res., V. 113, doi:10.1029/2008JE003117.
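    The quoted equatorial figures are mutually consistent: a parcel moving at about 90 m/s circles Venus at cloud-top height in roughly 5 Earth days. A quick arithmetic check (the cloud-top altitude of ~70 km is our assumption, not stated in the abstract):

```python
import math

R_venus_km = 6052            # mean radius of Venus
cloud_top_km = 70            # approximate cloud-top altitude (assumption)
v_zonal = 90.0               # m/s, equatorial zonal speed from the abstract

circumference = 2 * math.pi * (R_venus_km + cloud_top_km) * 1e3  # meters
period_days = circumference / v_zonal / 86400
print(f"zonal rotation period at the equator: {period_days:.1f} Earth days")
```

    The result is close to 5 Earth days, matching the period the abstract reports at the equator.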

  18. Atmospheric winds on the cloud top level of Venus according to Venus Monitoring Camera images

    NASA Astrophysics Data System (ADS)

    Khatuntsev, Igor; Ignatiev, Nikolai; Patsaeva, Marina; Titov, Dmitri; Markiewicz, Wojciech

    2010-05-01

    We present results of wind speed measurements at the cloud top level of Venus derived from manual and automated cloud tracking in the UV (365 nm) and IR (965 nm) channels of the Venus Monitoring Camera Experiment (VMC) [1] on board the Venus Express mission. Cloud details have maximal contrast in the UV range. More than 80 orbits have been processed and more than 27500 manual vectors were obtained. The period of the observations covers more than 4 venusian years. The zonal wind speed demonstrates a local solar time dependence; possible diurnal and semidiurnal components are observed [2]. According to the averaged latitude profile of winds at the level of the upper clouds: - The zonal speed increases slightly in absolute value from 90 m/s at the equator to 105 m/s at latitude −47 degrees; - The period of zonal rotation has its maximum at the equator (~5 Earth days) and its minimum (~3 days) near latitude −50 degrees; poleward of the minimum, the period increases slightly toward the South pole; - The meridional speed is ~0 at the equator and then increases linearly (in absolute value) up to ~10 m/s at 50 degrees latitude, where "−" denotes motion from the equator to the pole; - From 50 to 80 degrees the meridional speed decreases again in absolute value toward 0. IR (965±10 nm) day-side images can also be used for wind tracking. The zonal wind speeds obtained in the low and middle latitudes are systematically lower than those derived from the UV images; the average zonal speed from IR day-side images at low and middle latitudes is about 65-70 m/s. This can be interpreted as the IR range probing deeper mesospheric layers than the UV. References: [1] Markiewicz W. J. et al. (2007) Planet. Space Sci., V. 55(12), P. 1701-1711. [2] Moissl R. et al. (2008) J. Geophys. Res., V. 113, doi:10.1029/2008JE003117.

  19. Optimized Imaging and Target Tracking within a Distributed Camera A. A. Morye, C. Ding, B. Song, A. Roy-Chowdhury, J. A. Farrell

    E-print Network

    Chowdhury, Amit K. Roy

    Optimized Imaging and Target Tracking within a Distributed Camera Network* A. A. Morye, C. Ding, B) is time-varying and may exceed NC . Tracking a target is defined as estimating the position of the target of the network of cameras at each sampling instant such that the tracking specification for all targets

  20. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    NASA Astrophysics Data System (ADS)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, no form of 4DMI can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. After a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement through combined use of real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  1. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D sensors and digital single-lens reflex (DSLR) cameras. A systematic data processing procedure for integrating these two kinds of devices to generate three-dimensional point clouds of indoor environments is also developed and described. In the developed system, DSLR cameras are used to bridge the Kinects and provide a more accurate ray intersection condition, taking advantage of the higher resolution and image quality of the DSLR cameras. Structure from Motion (SFM) reconstruction is used to link and merge multiple Kinect point clouds and dense point clouds (from DSLR color images) to generate initial integrated point clouds. Then, bundle adjustment is used to resolve the exterior orientation (EO) of all images. Those exterior orientations are used as the initial values to combine the point clouds at each frame into the same coordinate system using a Helmert (seven-parameter) transformation. Experimental results demonstrate that the design of the data acquisition system and the data processing procedure can generate dense and fully colored point clouds of indoor environments successfully, even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects, as well as coordinates of pre-set independent check points, against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.
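    A seven-parameter Helmert transformation is a uniform scale plus a 3D rotation and a translation. A minimal sketch of applying one to a point cloud (parameter values and names are illustrative, not the paper's):

```python
import numpy as np

def helmert(points, scale, rot_xyz, t):
    """Apply a seven-parameter (Helmert) similarity transformation:
    one scale factor, three rotation angles (rad), three translations."""
    rx, ry, rz = rot_xyz
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return scale * points @ (Rz @ Ry @ Rx).T + t

# A single frame's point cloud expressed in its own sensor frame...
rng = np.random.default_rng(3)
cloud = rng.uniform(-2, 2, (1000, 3))
# ...moved into the common coordinate system with EO-derived parameters
merged = helmert(cloud, 1.001, (0.01, -0.02, 0.5), np.array([1.0, 0.5, 0.2]))
print(merged.shape)
```

    In the paper's pipeline the seven parameters come from the bundle-adjusted exterior orientations; here they are simply fixed values. Because the rotation is orthogonal, all pairwise distances scale by exactly the scale factor.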

  2. Geologic map of the northern hemisphere of Vesta based on Dawn Framing Camera (FC) images

    NASA Astrophysics Data System (ADS)

    Ruesch, Ottaviano; Hiesinger, Harald; Blewett, David T.; Williams, David A.; Buczkowski, Debra; Scully, Jennifer; Yingst, R. Aileen; Roatsch, Thomas; Preusker, Frank; Jaumann, Ralf; Russell, Christopher T.; Raymond, Carol A.

    2014-12-01

    The Dawn Framing Camera (FC) has imaged the northern hemisphere of the Asteroid (4) Vesta at high spatial resolution and coverage. This study represents the first investigation of the overall geology of the northern hemisphere (22-90°N, quadrangles Av-1, 2, 3, 4 and 5) using these unique Dawn mission observations. We have compiled a morphologic map and performed crater size-frequency distribution (CSFD) measurements to date the geologic units. The hemisphere is characterized by a heavily cratered surface with a few highly subdued basins up to ~200 km in diameter. The most widespread unit is a plateau (cratered highland unit), similar to, although of lower elevation than, the equatorial Vestalia Terra plateau. Large-scale troughs and ridges have regionally affected the surface. Between ~180°E and ~270°E, these tectonic features are well developed and related to the south pole Veneneia impact (Saturnalia Fossae trough unit); elsewhere on the hemisphere they are rare and subdued (Saturnalia Fossae cratered unit). In these pre-Rheasilvia units we observed an unexpectedly high frequency of impact craters up to ~10 km in diameter, whose formation could in part be related to the Rheasilvia basin-forming event. The Rheasilvia impact has potentially also affected the northern hemisphere with S-N small-scale lineations, but without covering it with an ejecta blanket. Post-Rheasilvia impact craters are small (<60 km in diameter) and show a wide range of degradation states due to impact gardening and mass wasting processes. Where fresh, they display an ejecta blanket, bright rays and slope movements on walls. In places, crater rims have dark material ejecta and some crater floors are covered by ponded material interpreted as impact melt.

  3. Compact, rugged, and intuitive thermal imaging cameras for homeland security and law enforcement applications

    NASA Astrophysics Data System (ADS)

    Hanson, Charles M.

    2005-05-01

    Low cost, small size, low power uncooled thermal imaging sensors have completely changed the way the world views commercial law enforcement and military applications. Key applications include security, medical, automotive, power generation monitoring, manufacturing and process control, aerospace, defense, environmental and resource monitoring, maintenance monitoring, and night vision. Commercial applications also include law enforcement and military special operations. Each application drives a unique set of requirements built on similar fundamental infrared technologies. Recently, in the uncooled infrared camera and microbolometer detector areas, major strides have been made in the design and manufacture of personal military and law enforcement sensors. L-3 Communications Infrared Products (L-3 IP) is producing a family of new products based on the amorphous silicon microbolometer with low-cost, low-power, high-volume, wafer-level vacuum-packaged silicon focal plane array technologies. These bolometer systems contain no choppers or thermoelectric coolers, require no manual calibration, and use readily available commercial off-the-shelf components. One such successful product is the Thermal-Eye X100xp. Extensive market-needs analysis for these small handheld sensors has been validated by their quick acceptance in the law enforcement and military segments. Building on this reception, L-3 IP has developed a strategic roadmap to improve and enhance the features and functions of this product, including upgrades such as the new 30-Hz, 30-μm pitch detector. This paper describes advances in bolometric focal plane arrays, optical, and circuit card technologies while providing a glimpse into the future of micro handheld sensor growth. Technical barriers are also addressed in light of constraints, lessons learned, and boundary conditions. One conclusion is that the Thermal-Eye silicon bolometer technology simultaneously drives weight, cost, size, power, performance, producibility, and design flexibility, each individually and all together, a must for the portable commercial law enforcement and military markets.

  4. Dual charge-coupled device /CCD/, astronomical spectrometer and direct imaging camera. II - Data handling and control systems

    NASA Technical Reports Server (NTRS)

    Dewey, D.; Ricker, G. R.

    1980-01-01

    The data collection system for the MASCOT (MIT Astronomical Spectrometer/Camera for Optical Telescopes) is described. The system relies on an RCA 1802 microprocessor-based controller, which serves to collect and format data, to present data to a scan converter, and to operate a device communication bus. A NOVA minicomputer is used to record and recall frame images and to perform refined image processing. The RCA 1802 also provides instrument mode control for the MASCOT. Commands are issued using STOIC, a FORTH-like language. Sufficient flexibility has been provided so that a variety of CCDs can be accommodated.

  5. LaBr3:Ce small FOV gamma camera with excellent energy resolution for multi-isotope imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Fabbri, A.; Cinti, M. N.; Orlandi, C.; Pellegrini, R.; Scafè, R.; Artibani, M.

    2015-06-01

    The simultaneous administration of radiopharmaceuticals labeled with more than one radioisotope is becoming of increasing interest in clinical practice. Because the photon energies of the utilized radioisotopes can be very close (less than 15% difference), a gamma camera with adequate energy resolution is required. The availability of scintillation crystals with high light yield, such as lanthanum tri-bromide (LaBr3:Ce), is particularly appealing for these applications. In this work, a new small-field-of-view gamma camera prototype is presented, based on a planar LaBr3:Ce scintillation crystal with surface treatment typical of spectrometric devices, in order to enhance energy resolution performance. The crystal has a round shape and has been optically coupled with a position-sensitive photomultiplier tube with high quantum efficiency. The presented gamma camera shows outstanding energy resolution in the investigated energy range (32-662 keV). These results have been obtained through the application of a uniformity correction on the raw data, necessary due to the spread of anodic gain values of the position-sensitive phototube. In spite of position linearity degradation at the crystal edges, due to the reflective treatment of the surfaces, intrinsic spatial resolution values are satisfactory over the useful field of view. The characterization of the presented gamma camera, based on a continuous LaBr3:Ce scintillation crystal with reflective surfaces, indicates good performance in multi-isotope imaging thanks to the excellent energy resolution, also in comparison with similar detectors.

  6. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only the raw images are available. However, the equation he provides ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem, but the technique needs adjustment for use with room-temperature cameras. This article describes the adjustment made to the equation and a test of the method.
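    The dark-current adjustment can be sketched with the photon-transfer relation gain = signal mean / signal variance, where the dark mean is subtracted from the flat-field mean and the dark-frame difference variance is subtracted from the flat-frame difference variance. The simulation below is our own construction, not the article's test setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_frame(signal_e, dark_e, gain_e_per_adu, read_noise_e, shape=(256, 256)):
    """Simulate a raw sensor frame: Poisson photon + dark shot noise,
    Gaussian read noise, converted to ADU by the (unknown) gain."""
    electrons = rng.poisson(signal_e + dark_e, shape) + rng.normal(0, read_noise_e, shape)
    return electrons / gain_e_per_adu

GAIN = 2.5                      # e-/ADU, the value we try to recover
flat1 = simulate_frame(5000, 800, GAIN, 10)
flat2 = simulate_frame(5000, 800, GAIN, 10)
dark1 = simulate_frame(0,    800, GAIN, 10)
dark2 = simulate_frame(0,    800, GAIN, 10)

# Photon-transfer estimate with the dark-current adjustment: subtract the
# dark mean from the signal mean, and the dark difference-frame variance
# (dark shot + read noise) from the flat difference-frame variance.
mean_sig = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (dark1.mean() + dark2.mean())
var_sig = 0.5 * ((flat1 - flat2).var() - (dark1 - dark2).var())
gain_est = mean_sig / var_sig
print(f"estimated gain: {gain_est:.2f} e-/ADU")
```

    Without the dark subtraction, both the mean and the variance would carry the dark-current contribution and the estimate would be biased, which is the room-temperature problem the article addresses.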

  7. Large-format imaging plate and Weissenberg camera for accurate protein crystallographic data collection using synchrotron radiation.

    PubMed

    Sakabe, K; Sasaki, K; Watanabe, N; Suzuki, M; Wang, Z G; Miyahara, J; Sakabe, N

    1997-05-01

    Off-line and on-line protein data-collection systems using an imaging plate as a detector are described and their components reported. The off-line scanner IPR4080 was developed for a large-format imaging plate 'BASIII' of dimensions 400 x 400 mm and 400 x 800 mm. The characteristics of this scanner are a dynamic range of 10(5) photons pixel(-1), low background noise and high sensitivity. A means of reducing electronic noise and a method for finding the origin of the noise are discussed in detail. A dedicated screenless Weissenberg camera matching IPR4080 with synchrotron radiation was developed and installed on beamline BL6B at the Photon Factory. This camera can attach one or two sheets of 400 x 800 mm large-format imaging plate inside the film cassette by evacuation. The positional reproducibility of the imaging plate on the cassette is so good that the data can be processed by batch job. Data of 93% completeness up to 1.6 Å resolution were collected on a single axis rotation and the value of R(merge) becomes 4% from a tetragonal lysozyme crystal using a set of two imaging-plate sheets. Comparing two types of imaging plates, the signal-to-noise ratio of the ST-VIP-type imaging plate is 25% better than that of the BASIII-type imaging plate for protein data collection using 1.0 and 0.7 Å X-rays. A new on-line protein data-collection system with imaging plates is specially designed to use synchrotron radiation X-rays at maximum efficiency. PMID:16699220

  8. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades.

    PubMed

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed. PMID:25554314

  9. Aircraft engine-mounted camera system for long wavelength infrared imaging of in-service thermal barrier coated turbine blades

    NASA Astrophysics Data System (ADS)

    Markham, James; Cosgrove, Joseph; Scire, James; Haldeman, Charles; Agoos, Ian

    2014-12-01

    This paper announces the implementation of a long wavelength infrared camera to obtain high-speed thermal images of an aircraft engine's in-service thermal barrier coated turbine blades. Long wavelength thermal images were captured of first-stage blades. The achieved temporal and spatial resolutions allowed for the identification of cooling-hole locations. The software and synchronization components of the system allowed for the selection of any blade on the turbine wheel, with tuning capability to image from leading edge to trailing edge. Its first application delivered calibrated thermal images as a function of turbine rotational speed at both steady state conditions and during engine transients. In advance of presenting these data for the purpose of understanding engine operation, this paper focuses on the components of the system, verification of high-speed synchronized operation, and the integration of the system with the commercial jet engine test bed.

  10. Using a high-speed movie camera to evaluate slice dropping in clinical image interpretation with stack mode viewers.

    PubMed

    Yakami, Masahiro; Yamamoto, Akira; Yanagisawa, Morio; Sekiguchi, Hiroyuki; Kubo, Takeshi; Togashi, Kaori

    2013-06-01

    The purpose of this study is to verify objectively the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communication in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4 % of the slices, while other well-designed viewers did not, if used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and must be used with the correct settings, or the paging speed must be slower than half of the display refresh rate. PMID:23053908
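    The study's safe-paging rule reduces to a simple inequality: the paging rate must stay below half the display refresh rate. A trivial encoding of that rule (function names are ours, not from the paper):

```python
def max_safe_paging_speed(refresh_hz):
    """Slices per second that can be paged without risking dropped
    slices: half the display refresh rate, per the study's finding."""
    return refresh_hz / 2

def slices_possibly_dropped(paging_hz, refresh_hz):
    """True when the paging rate exceeds the safe ceiling."""
    return paging_hz > max_safe_paging_speed(refresh_hz)

# On a standard 60 Hz display:
print(max_safe_paging_speed(60))        # 30 slices/s is the safe ceiling
print(slices_possibly_dropped(40, 60))  # paging at 40 slices/s risks drops
```

    Note that staying under this ceiling is necessary but not sufficient: as the study shows, some viewers drop slices anyway unless they are specifically designed (and configured) to prevent it.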

  11. Autoguiding on the 20-inch Telescope The direct imaging camera on the telescope has a second, smaller, CCD that can be used to

    E-print Network

    Gustafsson, Torgny

    Autoguiding on the 20-inch Telescope The direct imaging camera on the telescope has a second, smaller, CCD that can be used to autoguide the telescope while exposing an image on the main CCD. Use the imager CCD and the guider CCD, so you can move the telescope to bring a good (i.e. as bright as possible

  12. Two detector, active digital holographic camera for 3D imaging and digital holographic interferometry

    NASA Astrophysics Data System (ADS)

    Żak, Jakub; Kujawińska, Małgorzata; Józwik, Michał

    2015-09-01

    In this paper we present the novel design and proof of concept of an active holographic camera consisting of two array detectors and a Liquid Crystal on Silicon (LCOS) Spatial Light Modulator (SLM). The device allows sequential or simultaneous capture of two Fresnel holograms of a 3D object/scene. The two-detector configuration provides an increased viewing angle, allows capturing two double-exposure holograms with different sensitivity vectors, and even facilitates capturing a synthetic-aperture hologram for static objects. The LCOS SLM, located in a reference arm, serves as an active element that enables phase shifting and proper pointing of the reference beams toward both detectors in a configuration that allows miniaturization of the camera. The laboratory model of the camera has been tested in different modes of operation, namely capture and reconstruction of a 3D scene and double-exposure holographic interferometry applied to an engineering object under load. A future extension of the camera's functionality to Fourier hologram capture is discussed.

  13. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
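
The linear sequential-loop filter scheme described above can be sketched in a few lines. Everything here (the filter and function names) is hypothetical; the point is only that chaining filters one frame at a time, in any user-defined order, keeps memory and CPU requirements low.

```python
import numpy as np

def invert(frame):
    """Example filter: invert 8-bit intensities."""
    return 255 - frame

def threshold(frame, t=128):
    """Example filter: binarize at a fixed level."""
    return np.where(frame >= t, 255, 0).astype(frame.dtype)

def run_pipeline(frame, filters):
    """Apply filters sequentially; memory use stays at one frame."""
    for f in filters:
        frame = f(frame)
    return frame

# Filters may be repeated and reordered freely, as the abstract describes.
frame = np.full((4, 4), 200, dtype=np.uint8)
out = run_pipeline(frame, [invert, threshold, invert])
```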

  14. Omnifocus video camera

    NASA Astrophysics Data System (ADS)

    Iizuka, Keigo

    2011-04-01

    The omnifocus video camera takes videos, in which objects at different distances are all in focus in a single video display. The omnifocus video camera consists of an array of color video cameras combined with a unique distance mapping camera called the Divcam. The color video cameras are all aimed at the same scene, but each is focused at a different distance. The Divcam provides real-time distance information for every pixel in the scene. A pixel selection utility uses the distance information to select individual pixels from the multiple video outputs focused at different distances, in order to generate the final single video display that is everywhere in focus. This paper presents the principle of operation, design considerations, detailed construction, and overall performance of the omnifocus video camera. The major emphasis of the paper is the proof of concept, but the prototype has been developed enough to demonstrate the superiority of this video camera over a conventional video camera. The resolution of the prototype is high, capturing even fine details such as fingerprints in the image. Just as the movie camera was a significant advance over the still camera, the omnifocus video camera represents a significant advance over all-focus cameras for still images.
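
The pixel selection step described above can be sketched as follows, assuming a stack of co-registered frames and a per-pixel distance map. The function name and the nearest-focal-plane rule are illustrative assumptions, not the author's exact implementation.

```python
import numpy as np

def omnifocus_composite(frames, focus_dists, depth_map):
    """For every pixel, pick the frame whose focus distance is closest
    to the Divcam depth estimate (a sketch of the pixel-selection idea)."""
    frames = np.asarray(frames)                    # (n, H, W) frame stack
    dists = np.asarray(focus_dists, float)         # (n,) focus distances
    depth_map = np.asarray(depth_map, float)       # (H, W) per-pixel depth
    # index of the best-focused frame for every pixel
    best = np.abs(dists[:, None, None] - depth_map[None]).argmin(axis=0)
    return np.take_along_axis(frames, best[None], axis=0)[0]

# Toy scene: two frames focused at 1 m and 3 m; left half of the scene
# sits at 1 m, right half at 3 m.
near = np.full((2, 4), 10)
far = np.full((2, 4), 90)
depth = np.array([[1, 1, 3, 3], [1, 1, 3, 3]], float)
sharp = omnifocus_composite([near, far], [1.0, 3.0], depth)
```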

  15. Single-Volume Neutron Scatter Camera for High-Efficiency Neutron Imaging and Source Characterization. Year 2 of 3 Summary

    SciTech Connect

    Brubaker, Erik

    2015-10-01

    The neutron scatter camera (NSC), an imaging spectrometer for fission energy neutrons, is an established and proven detector for nuclear security applications such as weak source detection of special nuclear material (SNM), arms control treaty verification, and emergency response. Relative to competing technologies such as coded aperture imaging, time-encoded imaging, neutron time projection chamber, and various thermal neutron imagers, the NSC provides excellent event-by-event directional information for signal/background discrimination, reasonable imaging resolution, and good energy resolution. Its primary drawback is very low detection efficiency due to the requirement for neutron elastic scatters in two detector cells. We will develop a single-volume double-scatter neutron imager, in which both neutron scatters can occur in the same large active volume. If successful, the efficiency will be dramatically increased over the current NSC cell-based geometry. If the detection efficiency approaches that of e.g. coded aperture imaging, the other inherent advantages of double-scatter imaging would make it the most attractive fast neutron detector for a wide range of security applications.

  16. Realistic simulation of camera images of local surface defects in the context of multi-sensor inspection systems

    NASA Astrophysics Data System (ADS)

    Yang, Haiyue; Haist, Tobias; Gronle, Marc; Osten, Wolfgang

    2015-05-01

    Industrial automation has developed rapidly in the past decades. Customized fabrication and short production times require flexible, high-speed inspection systems. Given these requirements, optical surface inspection systems (OSIS), which offer an efficient and inexpensive way to distinguish surface defects from non-defects, become more and more important. To achieve a high recognition rate, huge amounts of defect image data need to be stored. We introduce a virtual surface-defect rendering method to obtain a large number of defect images. In this paper, ray-tracing methods are applied to realistically simulate camera images in OSIS. We used three different bidirectional reflectance distribution function (BRDF) rendering models to describe the scattering of collimated white light from aluminum materials.

  17. A real-time ultrasonic field mapping system using a Fabry-Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high frame rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compressions as low as 10% was also demonstrated.
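
The fully sampled scrambled-Hadamard path described above can be sketched as follows. Function names are hypothetical, and the compressed-sensing reconstruction that permits ~10% sampling is omitted; this shows only the measure-then-transform principle.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def spc_measure_and_recover(x, perm):
    """Simulate scrambled-Hadamard single-pixel measurements of the
    flattened sensor field x, then recover it with the transpose."""
    n = x.size
    Hd = hadamard(n)
    patterns = Hd[:, perm]          # column-scrambled measurement patterns
    y = patterns @ x                # one detector reading per pattern
    z = Hd.T @ y / n                # Hd.T @ Hd = n * I, so this inverts Hd
    return z[perm]                  # undo the scrambling

# Toy 4-"pixel" acoustic field and an arbitrary scrambling permutation.
x = np.array([3.0, 1.0, 4.0, 1.5])
x_rec = spc_measure_and_recover(x, np.array([2, 0, 3, 1]))
```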

  18. Determining iconometric parameters of imaging devices using a wide-angle collimator. [calibration of satellite-borne television and photographic cameras

    NASA Technical Reports Server (NTRS)

    Ziman, Y. L.

    1974-01-01

    The problem of determining the iconometric parameters of the imaging device can be solved if the camera being calibrated is used to obtain the image of a group of reference points, the directions to which are known. In order to specify the imaging device coordinate system, it is sufficient in principle to obtain on the picture the images of three reference points which do not lie on a single straight line. Many more such points are required in order to determine the distortion corrections, and they must be distributed uniformly over the entire field of view of the camera being calibrated. Experimental studies were made using this technique to calibrate photographic and phototelevision systems. Evaluation of the results of these experiments permits recommending collimators for calibrating television and phototelevision imaging systems, and also short-focus small-format photographic cameras.

  19. Low-cost camera modifications and methodologies for very-high-resolution digital images

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...

  20. In-flight calibration of the Cassini imaging science sub-system cameras

    E-print Network

    Throop, Henry

    The cameras were built by the Jet Propulsion Laboratory, California Institute of Technology, for the Cassini spacecraft.

  1. Experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype for partial breast irradiation

    SciTech Connect

    Ravi, Ananth; Caldwell, Curtis B.; Pignol, Jean-Philippe

    2008-06-15

    Previously, our team used Monte Carlo simulation to demonstrate that a gamma camera could potentially be used as an online image guidance device to visualize seeds during permanent breast seed implant procedures. This could allow for intraoperative correction if seeds have been misplaced. The objective of this study is to describe an experimental evaluation of an online gamma-camera imaging of permanent seed implantation (OGIPSI) prototype. The OGIPSI device is intended to be able to detect a seed misplacement of 5 mm or more within an imaging time of 2 min or less. The device was constructed by fitting a custom built brass collimator (16 mm height, 0.65 mm hole pitch, 0.15 mm septal thickness) on a 64 pixel linear array CZT detector (eValuator-2000, eV Products, Saxonburg, PA). Two-dimensional projection images of seed distributions were acquired by the use of a digitally controlled translation stage. Spatial resolution and noise characteristics of the detector were measured. The ability and time needed for the OGIPSI device to image the seeds and to detect cold spots was tested using an anthropomorphic breast phantom. Mimicking a real treatment plan, a total of 52 ¹⁰³Pd seeds of 65.8 MBq each were placed on three different layers at appropriate depths within the phantom. The seeds were reliably detected within 30 s with a median error in localization of 1 mm. In conclusion, an OGIPSI device can potentially be used for image guidance of permanent brachytherapy applications in the breast and, possibly, other sites.

  2. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and the number of cases tends to increase as the life expectancy of the population rises. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In most cases the damage to the optic nerve is irreversible and results from increased intraocular pressure. One of the main challenges is detecting the disease early, because no symptoms are present in its initial stage; by the time it is detected, it is already advanced. Currently the optic disc is evaluated with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or red-free systems, to capture 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurement of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in both mydriatic and nonmydriatic modes.

  3. Robotic Arm Camera Image of the South Side of the Thermal and Evolved-Gas Analyzer (Door TA4

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Thermal and Evolved-Gas Analyzer (TEGA) instrument aboard NASA's Phoenix Mars Lander is shown with one set of oven doors open and dirt from a sample delivery. After the 'seventh shake' of TEGA, a portion of the dirt sample entered the oven via a screen for analysis. This image was taken by the Robotic Arm Camera on Sol 18 (June 13, 2008), or 18th Martian day of the mission.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  4. Adaptive Optics Imaging at 1-5 Microns on Large Telescopes:The COMIC Camera for ADONIS

    NASA Astrophysics Data System (ADS)

    Lacombe, F.; Marco, O.; Geoffray, H.; Beuzit, J. L.; Monin, J. L.; Gigan, P.; Talureau, B.; Feautrier, P.; Petmezakis, P.; Bonaccini, D.

    1998-09-01

    A new 1-5 μm high-resolution camera dedicated to the ESO adaptive optics system ADONIS has been developed as a collaborative project of Observatoire de Paris-Meudon and Observatoire de Grenoble, under ESO contract. Since this camera has been designed to correctly sample the diffraction, two focal plate scales are available: 36 mas pixel^-1 for the 1-2.5 μm range and 100 mas pixel^-1 for the 3-5 μm range, yielding fields of view of 4.5"x4.5" and 12.8"x12.8", respectively. Several broadband and narrowband filters are available as well as two circular variable filters, allowing low spectral resolution (R~60-120) imagery between 1.2 and 4.8 μm. This camera is equipped with a 128x128 HgCdTe/CCD array detector built by the CEA-LETI-LIR (Grenoble, France). Among its main characteristics, this detector offers a remarkably high storage capacity (more than 10^6 electrons) with a total system readout noise of ~1000 electrons rms, making it particularly well suited for long integration time imagery in the 3-5 μm range of the near-infrared domain. The measured dark current is 2000 electrons s^-1 pixel^-1 at the regular operating temperature of 77 K, allowing long exposure times at short wavelengths (λ < 3 μm), where the performance is readout-noise limited. At longer wavelengths (λ > 3 μm), the performance is background-noise limited. We have estimated the ADONIS + COMIC imaging performance using a method specially dedicated to high angular resolution cameras.

  5. Latest developments in the iLids performance standard: from multiple standard camera views to new imaging modalities

    NASA Astrophysics Data System (ADS)

    Sage, K. H.; Nilski, A. J.; Sillett, I. M.

    2009-09-01

    The Imagery Library for Intelligent Detection Systems (iLids) is the UK Government's standard for Video Based Detection Systems (VBDS). The first four iLids scenarios were released in November 2006, and annual evaluations for these four scenarios began in 2007. The Home Office Scientific Development Branch (HOSDB), in partnership with the Centre for the Protection of National Infrastructure (CPNI), has also developed a fifth iLids scenario: Multiple Camera Tracking (MCT). The fifth scenario data sets were made available in November 2008 to industry, academic and commercial research organizations. The imagery contains various staged events of people walking through the camera views. Multiple Camera Tracking Systems (MCTS) are expected to initialise on a specific target and be able to track the target over some or all of the camera views. HOSDB and CPNI are now working on a sixth iLids dataset series. These datasets will cover several technology areas: • Thermal imaging systems • Systems that rely on active IR illumination. The aim is to develop libraries that promote the development of systems that are able to demonstrate effective performance in the key application area of people and vehicular detection at a distance. This paper will: • Describe the evaluation process, infrastructure and tools that HOSDB will use to evaluate MCT systems. Building on the success of our previous automated tools for evaluation, HOSDB has developed the MCT evaluation tool CLAYMORE. CLAYMORE is a tool for the real-time evaluation of MCT systems. • Provide an overview of the new sixth scenario aims and objectives, library specifications and timescales for release.

  6. An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution (SwissRanger)

    NASA Astrophysics Data System (ADS)

    Oggier, Thierry; Lehmann, Michael; Kaufmann, Rolf; Schweizer, Matthias; Richter, Michael; Metzler, Peter; Lang, Graham; Lustenberger, Felix; Blanc, Nicolas

    2004-02-01

    A new miniaturized camera system that is capable of 3-dimensional imaging in real-time is presented. The compact imaging device is able to entirely capture its environment in all three spatial dimensions. It reliably and simultaneously delivers intensity data as well as range information on the objects and persons in the scene. The depth measurement is based on the time-of-flight (TOF) principle. A custom solid-state image sensor allows the parallel measurement of the phase, offset and amplitude of a radio frequency (RF) modulated light field that is emitted by the system and reflected back by the camera surroundings without requiring any mechanical scanning parts. In this paper, the theoretical background of the implemented TOF principle is presented, together with the technological requirements and detailed practical implementation issues of such a distance measuring system. Furthermore, the schematic overview of the complete 3D-camera system is provided. The experimental test results are presented and discussed. The present camera system can achieve sub-centimeter depth resolution for a wide range of operating conditions. A miniaturized version of such a 3D-solid-state camera, the SwissRanger 2, is presented as an example, illustrating the possibility of manufacturing compact, robust and cost effective ranging camera products for 3D imaging in real-time.
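
The parallel measurement of phase, offset and amplitude mentioned above is commonly done with four samples of the modulated signal taken a quarter-period apart (the standard "four-bucket" demodulation). The sketch below illustrates that general principle, not the SwissRanger's specific implementation; the function name is hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_demodulate(a0, a1, a2, a3, f_mod):
    """Recover phase, amplitude and offset from four samples of an
    RF-modulated light field, then convert phase to distance.

    Sampling convention assumed: a_i = offset + amp * cos(phase + i*pi/2).
    """
    phase = math.atan2(a3 - a1, a0 - a2)             # radians
    amplitude = math.hypot(a3 - a1, a0 - a2) / 2.0
    offset = (a0 + a1 + a2 + a3) / 4.0
    # Round-trip path folds the modulation wavelength in half.
    distance = C * phase / (4.0 * math.pi * f_mod)
    return phase, amplitude, offset, distance
```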

  7. Spartan Infrared Camera, a High-Resolution Imager for the SOAR Telescope: Design, Tests, and On-Telescope Performance

    NASA Astrophysics Data System (ADS)

    Loh, Edwin D.; Biel, Jason D.; Davis, Michael W.; Laporte, Renéé; Loh, Owen Y.; Verhanovitz, Nathan J.

    2012-04-01

    The Spartan Infrared Camera provides tip-tilt corrected imaging for the SOAR Telescope in the 900-2500 nm spectral range with four 2048 × 2048 HAWAII-2 detectors. The camera has two plate scales: high-resolution (40 mas pixel-1) for future diffraction-limited sampling in the and bands and wide-field (66 mas pixel-1) to cover a 5 × 5 field, over which tip-tilt correction is substantial. The design is described in detail. Except for CaF field-flattening lenses, the optics are aluminum mirrors to thermally match the aluminum cryogenic-optical box in which the optics mount. The design minimizes the tilt of the optics as the instrument rotates on the Nasmyth port of the telescope. Two components of the gravitational torque on an optic are eliminated by symmetry, and the third component is minimized by balancing the optic. The optics (including the off-axis aspherical mirrors) were aligned with precise metrology. For the detector assembly, Henein pivots are used to provide frictionless, thermally compliant, lubricant-free, and thermally conducting rotation of the detectors. The heat load is 14 W for an ambient temperature of 10°C. Cooling down takes 40 hr. An activated-charcoal getter controls permeation through the large Viton O-ring for at least nine months. We present maps of the image distortion, which amounts to tens of pixels at the greatest. The wavelength of the narrowband filters shifts with position in the sky. The measured Strehl ratio of the camera itself is 0.8-0.84 at 1650 nm. The width of the best -band image was 260 mas in unexceptional seeing, measured after tuning the telescope and before moving the telescope. Since images are normally taken after pointing the telescope to a different field, this supports the idea that the image quality could be improved by better control of the focus and the shape of the primary mirror. The instrument has proved capable of producing images that can be stitched together to measure faint, extended features, and of producing photometry that agrees internally to better than 0.01 mag and is well calibrated to 2MASS stars in the range 12 < K < 16.

  8. In vivo imaging of scattering and absorption properties of exposed brain using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2014-03-01

    We investigate a method to estimate spectral images of the reduced scattering coefficients and the absorption coefficients of in vivo exposed brain tissue in the visible to near-infrared range (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of the brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, while the scattering amplitude a and the scattering power b in the expression μs' = aλ^(-b) are estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters and, finally, the spectral images of the absorption and reduced scattering coefficients are estimated. The estimated images of absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of reduced scattering coefficients showed a broad scattering spectrum, exhibiting larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. In vivo experiments with the exposed brain of rats during cortical spreading depression (CSD) confirmed the ability of the method to evaluate both hemodynamics and changes in tissue morphology due to electrical depolarization.
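
For illustration, the scattering power law μs' = aλ^(-b) can be fitted to a measured spectrum by simple log-log linear regression. Note that the paper estimates a and b via Monte Carlo based multiple regression, so the direct fit below is only a hypothetical stand-in.

```python
import numpy as np

def fit_scattering_power_law(wavelengths_nm, mus_prime):
    """Fit mus' = a * lambda**(-b): taking logs gives
    log(mus') = log(a) - b*log(lambda), a straight line."""
    loglam = np.log(np.asarray(wavelengths_nm, float))
    logmus = np.log(np.asarray(mus_prime, float))
    slope, intercept = np.polyfit(loglam, logmus, 1)
    return np.exp(intercept), -slope   # a, b

# Synthetic spectrum over the paper's 500-760 nm range with a=2e5, b=1.3.
lam = np.array([500.0, 600.0, 700.0, 760.0])
mus = 2.0e5 * lam ** -1.3
a, b = fit_scattering_power_law(lam, mus)
```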

  9. Sub-wavelength resolution of MMW imaging systems using extremely inexpensive scanning Glow Discharge Detector (GDD) double row camera

    NASA Astrophysics Data System (ADS)

    Kopeika, N. S.; Abramovich, A.; Levanon, A.; Akram, A.; Rozban, D.; Yitzhaky, Y.; Yadid-Pecht, O.; Belenky, A.

    2012-06-01

    The properties of terahertz (THz) radiation are well known: it penetrates most non-conducting media well, there are no known biological hazards, and atmospheric attenuation and scattering are lower than for visible and IR radiation. Thus THz imaging is very attractive for homeland security, biological, space, and industrial applications. On the other hand, the resolution of MMW images is lower compared to IR and visible imagery because of the longer wavelength. Furthermore, diffraction effects are more noticeable in THz and MMW imaging systems. The MMW images are therefore blurred and unclear, making it difficult to see details and small objects. In recent experimental work with an 8x8 Glow Discharge Detector (GDD) Focal Plane Array (FPA), we were able to improve the resolution of MMW images by using oversampling methods with basic DSP algorithms. In this work a super-resolution method with basic DSP algorithms will be demonstrated using the 2x18 double-row camera. MMW images with sub-wavelength resolution will be shown using these methods, and small details and small objects will be observed.

  10. Design and development of a position-sensitive γ-camera for SPECT imaging based on PCI electronics

    NASA Astrophysics Data System (ADS)

    Spanoudaki, V.; Giokaris, N. D.; Karabarbounis, A.; Loudos, G. K.; Maintas, D.; Papanicolas, C. N.; Paschalis, P.; Stiliaris, E.

    2004-07-01

    A position-sensitive γ-camera is currently being designed at IASA. This camera will be used experimentally (in development mode) in order to obtain integrated knowledge of its function and perhaps to improve its performance, in parallel with an existing one that has shown very good performance in phantom and small-animal SPECT studies and is currently being tested for clinical applications. The new system combines a PSPMT (Hamamatsu, R2486-05) and a PMT for simultaneous or independent acquisition of energy and position information, respectively. The readout of the PSPMT's anode signals is performed with the resistive-chain technique, which yields two signals for each (X, Y) direction; the system is based on PCI electronics. The status of the system's development and the ongoing progress are presented.
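
The resistive-chain readout named above reduces the anode signals to two per axis; the interaction position then follows from a charge-division centroid. A minimal sketch, with a hypothetical function name and an assumed normalization to [-1, 1]:

```python
def resistive_chain_position(x_a, x_b, y_a, y_b):
    """Charge-division centroid from the two signals per axis of a
    resistive-chain readout (Anger-style normalization)."""
    x = (x_b - x_a) / (x_b + x_a)   # normalized X position in [-1, 1]
    y = (y_b - y_a) / (y_b + y_a)   # normalized Y position in [-1, 1]
    return x, y
```

An event splitting its charge equally on one axis lands at that axis's center; charge collected entirely at one end maps to the corresponding edge.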

  11. A posteriori correction of camera characteristics from large image data sets

    PubMed Central

    Afanasyev, Pavel; Ravelli, Raimond B. G.; Matadeen, Rishi; De Carlo, Sacha; van Duinen, Gijs; Alewijnse, Bart; Peters, Peter J.; Abrahams, Jan-Pieter; Portugal, Rodrigo V.; Schatz, Michael; van Heel, Marin

    2015-01-01

    Large datasets are emerging in many fields of image processing including: electron microscopy, light microscopy, medical X-ray imaging, astronomy, etc. Novel computer-controlled instrumentation facilitates the collection of very large datasets containing thousands of individual digital images. In single-particle cryogenic electron microscopy (“cryo-EM”), for example, large datasets are required for achieving quasi-atomic resolution structures of biological complexes. Based on the collected data alone, large datasets allow us to precisely determine the statistical properties of the imaging sensor on a pixel-by-pixel basis, independent of any “a priori” normalization routinely applied to the raw image data during collection (“flat field correction”). Our straightforward “a posteriori” correction yields clean linear images as can be verified by Fourier Ring Correlation (FRC), illustrating the statistical independence of the corrected images over all spatial frequencies. The image sensor characteristics can also be measured continuously and used for correcting upcoming images. PMID:26068909
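
The pixel-by-pixel statistical characterization described above can be illustrated with a minimal sketch: estimate each pixel's offset and gain from the image stack itself and normalize. This is a simplification for illustration, not the paper's exact algorithm, and the function name is hypothetical.

```python
import numpy as np

def a_posteriori_correct(stack):
    """Per-pixel correction estimated from the dataset alone: subtract
    each pixel's mean over all images and divide by its standard
    deviation, independent of any a priori flat-field correction."""
    stack = np.asarray(stack, float)          # (n_images, H, W)
    mean = stack.mean(axis=0)                 # pixel-wise offset map
    std = stack.std(axis=0)                   # pixel-wise gain proxy
    std[std == 0] = 1.0                       # guard dead pixels
    return (stack - mean) / std
```

Statistical independence of the corrected images could then be checked over spatial frequencies, in the spirit of the Fourier Ring Correlation test the abstract mentions.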

  13. Antenna-coupled microbolometer based uncooled 2D array and camera for 2D real-time terahertz imaging

    NASA Astrophysics Data System (ADS)

    Simoens, F.; Meilhan, J.; Gidon, S.; Lasfargues, G.; Lalanne Dera, J.; Ouvrier-Buffet, J. L.; Pocas, S.; Rabaud, W.; Guellec, F.; Dupont, B.; Martin, S.; Simon, A. C.

    2013-09-01

    CEA-Leti has developed a monolithic large focal-plane-array bolometric technology optimized for 2D real-time imaging in the terahertz range. Each pixel consists of a silicon microbolometer coupled to specific antennas and a resonant quarter-wavelength cavity. First prototypes of imaging arrays have been designed and manufactured for optimized sensing in the 1-3.5 THz range, where THz quantum cascade lasers deliver high optical power. An NEP on the order of 1 pW/sqrt(Hz) has been assessed at 2.5 THz. This paper reports the steps of this development, starting at the pixel level, moving to an array monolithically integrated with its CMOS ROIC, and finally to a stand-alone camera. For each step, modeling, technological prototyping and experimental characterizations are presented.

  14. Characterization of the luminance and shape of ash particles at Sakurajima volcano, Japan, using CCD camera images

    NASA Astrophysics Data System (ADS)

    Miwa, Takahiro; Shimano, Taketo; Nishimura, Takeshi

    2015-01-01

    We develop a new method for characterizing the properties of volcanic ash at the Sakurajima volcano, Japan, based on automatic processing of CCD camera images. Volcanic ash is studied in terms of both luminance and particle shape. A monochromatic CCD camera coupled with a stereomicroscope is used to acquire digital images through three filters that pass red, green, or blue light. On single ash particles, we measure the apparent luminance, corresponding to 256 tones for each color (red, green, and blue) for each pixel occupied by ash particles in the image, and the average and standard deviation of the luminance. The outline of each ash particle is captured from a digital image taken under transmitted light through a polarizing plate. Also, we define a new quasi-fractal dimension (Dqf) to quantify the complexity of the ash particle outlines. We examine two ash samples, each including about 1000 particles, which were erupted from the Showa crater of the Sakurajima volcano, Japan, on February 09, 2009 and January 13, 2010. The apparent luminance of each ash particle shows a lognormal distribution. The average luminance of the ash particles erupted in 2009 is higher than that of those erupted in 2010, which is in good agreement with the results obtained from component analysis under a binocular microscope (i.e., the number fraction of dark juvenile particles is lower for the 2009 sample). The standard deviations of apparent luminance have two peaks in the histogram, and the quasi-fractal dimensions show different frequency distributions between the two samples. These features are not recognized in the results of conventional qualitative classification criteria or the sphericity of the particle outlines. Our method can characterize and distinguish ash samples, even for ash particles that have gradual property changes, and is complementary to component analysis.
This method also enables the relatively fast and systematic analysis of ash samples that is required for petrologic monitoring of ongoing activity, such as at the Sakurajima volcano.
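
The paper defines its own quasi-fractal dimension for outlines; as a rough illustration of this family of outline-complexity measures, a conventional box-counting estimate can be sketched as below. This is a generic stand-in, not the authors' definition.

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Box-counting estimate of an outline's complexity: count occupied
    boxes N(s) at several box sizes s, then fit the slope of
    log N(s) versus log(1/s)."""
    points = np.asarray(points, float)        # (n, 2) outline coordinates
    counts = []
    for s in scales:
        boxes = np.unique(np.floor(points / s), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# A straight outline should give a dimension close to 1.
pts = np.column_stack([np.linspace(0.0, 1.0, 1000), np.zeros(1000)])
d = box_counting_dimension(pts, [0.1, 0.05, 0.025, 0.0125])
```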

  15. Assessment of the metrological performance of an in situ storage image sensor ultra-high speed camera for full-field deformation measurements

    NASA Astrophysics Data System (ADS)

    Rossi, Marco; Pierron, Fabrice; Forquin, Pascal

    2014-02-01

    Ultra-high speed (UHS) cameras allow us to acquire images typically at up to about 1 million frames s^-1 at a full spatial resolution on the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and contains no movable devices such as those found in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the best operating conditions, which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.

  16. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially-available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery.
Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.

  17. A Novel 24 Ghz One-Shot Rapid and Portable Microwave Imaging System (Camera)

    NASA Technical Reports Server (NTRS)

    Ghasr, M.T.; Abou-Khousa, M.A.; Kharkovsky, S.; Zoughi, R.; Pommerenke, D.

    2008-01-01

A novel 2D microwave imaging system operating at 24 GHz and based on MST techniques is presented. Sensitivity and SNR are enhanced by utilizing PIN diode-loaded resonant slots, with the slot and array design chosen to increase transmission and reduce cross-coupling. The system performs real-time imaging at a rate in excess of 30 images per second, and offers reflection as well as transmission mode capabilities. It has utility and application in electric field distribution mapping related to nondestructive testing (NDT), imaging applications (SAR, holography), and antenna pattern measurements.

  18. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of lighting. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control the cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.
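The look-up-table control described above can be sketched as a threshold table keyed by a measured light level. The breakpoints and settings below are hypothetical, chosen only to illustrate the mechanism (note every shutter entry respects the "faster than 1/2000 sec" constraint):

```python
import bisect

# Hypothetical LUT keyed by measured light level (lux); breakpoints and
# settings are illustrative, not values from the deployed system.
LEVELS = [50, 500, 5000]  # night / twilight / overcast / sunny breakpoints
SETTINGS = [
    {"shutter": "1/2000",  "gain_db": 18, "pedestal": 40},
    {"shutter": "1/4000",  "gain_db": 9,  "pedestal": 25},
    {"shutter": "1/8000",  "gain_db": 3,  "pedestal": 15},
    {"shutter": "1/10000", "gain_db": 0,  "pedestal": 10},
]

def camera_settings(lux):
    """Select shutter/gain/pedestal for the measured light level."""
    return SETTINGS[bisect.bisect_right(LEVELS, lux)]
```

The roadside computer would re-evaluate `camera_settings` as the light sensor reading changes, so gain rises at night while the shutter stays fast enough to freeze license plates.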

  19. Multi-Focus Raw Bayer Pattern Image Fusion for Single-Chip Camera

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Chen, Jibin

    2015-12-01

In this paper, an efficient patch-based image fusion approach for raw images from single-chip imaging devices incorporating the Bayer CFA pattern is presented. The multi-source raw Bayer pattern images are first partitioned into half-overlapping patches. Then, the patches with the maximum clarity measurement, defined for raw Bayer pattern images, are selected as the fused patches. Next, all the fused local patches are merged with a weighted-average method in order to reduce the blockiness of the fused raw Bayer pattern image. Finally, the real-color fused image is obtained by gradient-based demosaicing. The multi-source raw Bayer pattern data are fused before demosaicing, so the multi-sensor system is more efficient and the artifacts introduced in demosaicing do not accumulate in the fusion processing. For comparison, the raw images are also interpolated first, and then various image fusion methods are used to obtain the fused color images. Experimental results show that the proposed algorithm is valid and very effective.
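A minimal sketch of patch-wise select-by-clarity fusion, using local variance as an assumed clarity proxy and non-overlapping patches for brevity (the paper uses half-overlapping patches, a clarity measure defined directly on the Bayer mosaic, and weighted merging):

```python
def clarity(patch):
    """Assumed clarity proxy: local variance (the paper defines its own
    measure directly on the raw Bayer pattern)."""
    m = sum(patch) / len(patch)
    return sum((v - m) ** 2 for v in patch) / len(patch)

def fuse_patches(img_a, img_b, width, patch=4):
    """Per patch, keep the source image with the higher clarity.
    Images are flat row-major lists; non-overlapping patches for brevity."""
    height = len(img_a) // width
    out = list(img_a)
    for py in range(0, height, patch):
        for px in range(0, width, patch):
            idx = [(py + dy) * width + (px + dx)
                   for dy in range(patch) for dx in range(patch)]
            pb = [img_b[i] for i in idx]
            if clarity(pb) > clarity([img_a[i] for i in idx]):
                for i, v in zip(idx, pb):
                    out[i] = v
    return out
```

Because the selection runs on the raw mosaic, demosaicing is applied once, to the fused result, which is exactly the ordering advantage the abstract claims.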

  20. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.
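Cone counting in such images is commonly approached as local-maximum detection; the simplified sketch below (an assumed stand-in, not the authors' semiautomated algorithm) finds isolated intensity peaks, which works without spatial regularity in cone packing:

```python
def find_cones(img, width, height, radius=1, thresh=0):
    """Simplified cone detector: a pixel brighter than all neighbours within
    `radius` (and above `thresh`) is counted as a cone centre.
    `img` is a flat row-major list of intensities."""
    peaks = []
    for y in range(height):
        for x in range(width):
            v = img[y * width + x]
            if v <= thresh:
                continue
            is_peak = True
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nx, ny = x + dx, y + dy
                    if (dx or dy) and 0 <= nx < width and 0 <= ny < height \
                            and img[ny * width + nx] >= v:
                        is_peak = False
            if is_peak:
                peaks.append((x, y))
    return peaks
```

Cone density then follows as `len(peaks)` divided by the retinal area imaged, and the same counting can be run over a montaged image rather than single frames.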

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  2. Temporal variations in the cloud cover of Venus as detected from Venus Monitoring Camera Images on Venus Express Orbiter

    NASA Astrophysics Data System (ADS)

    Limaye, S. S.; Markiewicz, W. J.; Krauss, R. J.

    2014-12-01

The Venus Monitoring Camera (VMC) on Venus Express [1] has been collecting images of the planet since orbit insertion in April 2006 through four narrow band-pass filters (50 nm half-width) with center wavelengths of 365, 550, 950 and 1050 nm [2]. With varying range to the planet during the spacecraft's elliptical, near-polar orbit, VMC obtains views of the day-side southern hemisphere (~72,500 km) and the limb when it is furthest away from the planet, and can see a fraction of the planet's sun-lit limb at northern latitudes when the spacecraft is closer to the planet (>~25,000 km). We use these images to look at the temporal behavior of the normalized intensity and unit slant optical depth (location of the bright limb) at four wavelengths during April 2006 - March 2014. We detect correlated changes in the normalized brightness and the altitude of the unit optical depth over this period. Images were normalized using the Minnaert function to account for the varying scattering geometry in order to detect changes in the reflectivity of the cloud cover at selected locations in local solar time. The unit optical depth was determined from the location of the planet's bright limb, taken to be where the brightness gradient is maximum along the bright-limb azimuth. The changes observed appear to be quasi-periodic. References [1] H. Svedhem, D.V. Titov, F.W. Taylor, O. Witasse, The Venus Express mission, Nature 450, 629-632, 2007. [2] Markiewicz, W. J. et al. Venus monitoring camera for Venus Express. Planet. Space Sci. 55, 1701-1711, 2007.
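The Minnaert normalization mentioned above divides out the scattering-geometry dependence of the standard Minnaert law, I = I0 * mu0^k * mu^(k-1). A small sketch, with k = 0.9 as an assumed limb-darkening exponent rather than a VMC-derived value:

```python
import math

def minnaert_normalize(intensity, incidence_deg, emission_deg, k=0.9):
    """Recover the geometry-free reflectance I0 from an observed intensity,
    assuming the Minnaert law I = I0 * mu0**k * mu**(k - 1), with
    mu0 = cos(incidence) and mu = cos(emission).
    k = 0.9 is an assumed exponent, not the value fitted to VMC data."""
    mu0 = math.cos(math.radians(incidence_deg))
    mu = math.cos(math.radians(emission_deg))
    return intensity / (mu0 ** k * mu ** (k - 1.0))
```

After this correction, pixels observed under different incidence/emission geometries become comparable, so residual variations trace real changes in cloud reflectivity rather than viewing geometry.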

  3. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

In recent years, Unmanned Aerial Systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a higher-mobility and lower-risk platform than human operation, but low payload and short operation time reduce image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is equipped on a large-payload UAS, designed to collect large ground coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, which is thus suitable for collecting disaster images in mountainous areas. The five synchronously acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and bundle adjustment to estimate transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated and used to derive the coefficients of a modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs owing to the different environmental conditions as well as the vibration of the UAS, which causes misregistration in the initial MPT results. Remaining residuals are analysed through tie-point matching in the overlapping area of the initial MPT results, in which displacement and scale difference are introduced and corrected to modify the ROPs and IOPs for finer registration. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. A comparison between separate cameras and mosaic images through rigorous aerial triangulation is conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases.
It proves that the designed imaging system and the proposed scheme have the potential to create large-scale topographic maps.
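The mosaicking step can be illustrated with a generic 3x3 projective transform plus the displacement/scale refinement the abstract describes. This is only a structural sketch: in the paper, the MPT coefficients come from the calibrated IOPs/ROPs, and the residual terms from tie-point matching in the overlap.

```python
def project(H, x, y):
    """Apply a 3x3 projective transform (homography) to an image point."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def refine(H, dx, dy, s):
    """Fold a residual scale difference s and displacement (dx, dy),
    as estimated from tie points in the overlap area, into the transform."""
    S = [[s, 0.0, dx], [0.0, s, dy], [0.0, 0.0, 1.0]]
    return [[sum(S[i][k] * H[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

Each oblique image is resampled through its own (refined) transform into the nadir frame, producing the single larger-format virtual image used for triangulation.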

  4. Electro-optical testing of fully depleted CCD image sensors for the Large Synoptic Survey Telescope camera

    NASA Astrophysics Data System (ADS)

    Doherty, Peter E.; Antilogus, Pierre; Astier, Pierre; Chiang, James; Gilmore, D. Kirk; Guyonnet, Augustin; Huang, Dajun; Kelly, Heather; Kotov, Ivan; Kubanek, Petr; Nomerotski, Andrei; O'Connor, Paul; Rasmussen, Andrew; Riot, Vincent J.; Stubbs, Christopher W.; Takacs, Peter; Tyson, J. Anthony; Vetter, Kurt

    2014-07-01

The LSST Camera science sensor array will incorporate 189 large-format charge-coupled device (CCD) image sensors. Each CCD will include over 16 million pixels, divided into 16 equally sized segments, with each segment read through a separate output amplifier. The science goals of the project require CCD sensors with state-of-the-art performance in many respects. The broad survey wavelength coverage requires fully depleted, 100-micrometer-thick, high-resistivity bulk silicon as the imager substrate. Image quality requirements place strict limits on the image degradation that may be caused by sensor effects: optical, electronic, and mechanical. In this paper we discuss the design of the prototype sensors, the hardware and software that have been used to perform electro-optic testing of the sensors, and a selection of the results of the testing to date. The architectural features that lead to internal electrostatic fields, the various effects on charge collection and transport that are caused by them, including charge diffusion and redistribution, effects on the delivered PSF, and potential impacts on delivered science data quality are addressed.

  5. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C. (Albuquerque, NM)

    1998-01-01

A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  6. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, Alicia (Inventor); Gursel, Yekta (Inventor)

    2011-01-01

    An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.

  7. HUBBLE SPACE TELESCOPE ADVANCED CAMERA FOR SURVEYS CORONAGRAPHIC IMAGING OF THE AU MICROSCOPII DEBRIS DISK

    E-print Network

Krist, John E.; Ardila, D. R.; Golimowski, D. A.; Clampin, M.; Ford, H. C.; Illingworth, G. D.

…Advanced Camera for Surveys multicolor coronagraphic images of the recently discovered edge-on debris disk around the nearby star AU Microscopii… grains compared with other imaged debris disks that have more neutral or red colors. This may be due…

  8. Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, legal representative, Alicia (Inventor); Gursel, Yekta (Inventor)

    2012-01-01

    An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.

  9. Eruptive patterns and structure of Isla Fernandina, Galapagos Islands, from SPOT-1 HRV and large format camera images

    NASA Technical Reports Server (NTRS)

    Munro, Duncan C.; Mouginis-Mark, Peter J.

    1990-01-01

SPOT-1 HRV and Large Format Camera images were used to investigate the distribution and structure of erupted materials on Isla Fernandina, Galapagos Islands. Maps of lava flows, fissures, cones and topography derived from these data allow the first study of the entire subaerial segment of this geographically remote and ecologically sensitive volcano. No significant departure from a uniform distribution of erupted lava with azimuth can be detected. Short (less than 4 km) lava flows commonly have their source in the summit region, and longer (greater than 8 km) lava flows originate from vents at lower elevations. Catastrophic landslides are proposed as a possible explanation for the asymmetry of the coastline with respect to the caldera.

  10. Liquid lens enabling real-time focus and tilt compensation for optical image stabilization in camera modules

    NASA Astrophysics Data System (ADS)

    Simon, Eric; Craen, Pierre; Gaton, Hilario; Jacques-Sermet, Olivier; Laune, Frédéric; Legrand, Julien; Maillard, Mathieu; Tallaron, Nicolas; Verplanck, Nicolas; Berge, Bruno

    2010-05-01

A new generation of liquid lenses based on electrowetting has been developed, using a multi-electrode design that enables optical tilt and focus corrections to be induced in the same component. The basic principle is to rely on a conical shape for supporting the liquid interface, the cone providing a restoring force that returns the liquid-liquid interface to the center position. The multi-electrode design makes it possible to induce an average tilt of the liquid-liquid interface when a bias voltage is applied across the different electrodes. This tilt is reversible, vanishing when the voltage bias is cancelled. A possible application of this new lens component is the realization of miniature cameras featuring auto-focus and optical image stabilization (OIS) without any moving mechanical part. Experimental measurements of the actual performance of the liquid lens component are presented: focus and tilt amplitude, residual optical wavefront error and response time.

  11. A dual charge-coupled device /CCD/, astronomical spectrometer and direct imaging camera. I - Optical and detector systems

    NASA Technical Reports Server (NTRS)

    Meyer, S. S.; Ricker, G. R.

    1980-01-01

    The MASCOT (MIT Astronomical Spectrometer/Camera for Optical Telescopes), an instrument capable of simultaneously performing both direct imaging and spectrometry of faint objects, is examined. An optical layout is given of the instrument which uses two CCD's mounted on the same temperature regulated detector block. Two sources of noise on the signal are discussed: (1) the CCD readout noise, which results in a constant uncertainty in the number of electrons collected from each pixel; and (2) the photon counting noise. The sensitivity of the device is limited by the sky brightness, the overall quantum efficiency, the resolution, and the readout noise of the CCD. Therefore, total system efficiency is calculated at about 15%.
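The sensitivity argument above combines photon (shot) noise from source and sky with CCD readout noise in quadrature. A one-function sketch of that noise budget (an illustrative textbook form, not the MASCOT-specific calculation):

```python
import math

def snr(source_e, read_noise_e, sky_e=0.0):
    """Signal-to-noise for a CCD measurement: shot noise on source + sky
    electrons combined in quadrature with readout noise (all in electrons).
    Dark current is omitted for brevity."""
    return source_e / math.sqrt(source_e + sky_e + read_noise_e ** 2)
```

For bright sources the shot-noise term dominates and SNR grows as the square root of the signal; for faint sources the sky and readout terms dominate, which is why the abstract lists sky brightness and CCD readout noise among the sensitivity limits.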

  12. First demonstration of imaging cosmic muons in a two-phase Liquid Argon TPC using an EMCCD camera and a THGEM

    NASA Astrophysics Data System (ADS)

    Mavrokoridis, K.; Carroll, J.; McCormick, K. J.; Paudyal, P.; Roberts, A.; Smith, N. A.; Touramanis, C.

    2015-10-01

    Colossal two-phase Liquid Argon Time Projection Chambers (LAr TPCs) are a proposed option for future long-baseline neutrino experiments. This study illustrates the feasibility of using an EMCCD camera to capture light induced by single cosmic events in a two-phase LAr TPC employing a THGEM. An Andor iXon Ultra 897 EMCCD camera was externally mounted via a borosilicate glass viewport on the Liverpool two-phase LAr TPC. The camera successfully captured the secondary scintillation light produced at the THGEM holes that had been induced by cosmic events. The light collection capability of the camera for various EMCCD gains was assessed. For a THGEM gain of 64 and an EMCCD gain of 1000, clear images were captured with an average signal-to-noise ratio of 6. Preliminary 3D reconstruction of straight cosmic muon tracks has been performed by combining the camera images, PMT signals and THGEM charge data. Reconstructed cosmic muon tracks were used to determine THGEM gain and to calibrate the intensity levels of the EMCCD image.

  13. Evaluation of Sedimentary and Terrain Parameters from MARDI Camera Imaging and Stereogrammetry at Gale Crater

    NASA Astrophysics Data System (ADS)

    Minitti, M. E.; Garvin, J. B.; Yingst, R. A.; Maki, J.

    2014-12-01

The Mars Descent Imager (MARDI) provides mm-resolution imaging of an ~0.92 m by 0.64 m patch of the Martian surface beneath the rover. Since Sol 310, MARDI has acquired images under twilight illumination to optimize image quality for quantitative sedimentology analysis. This imaging, carried out after each rover drive, permits systematic measurement of clast size distribution, inter-clast spacing distribution, and surface areal coverage. Sedimentological parameters derived from these observations include clast sorting (mean/variance of size) and other characteristics related to transport mechanisms or in-place weathering. Measured surface parameters (e.g., clast sorting) can be correlated with surface units qualitatively defined (e.g., smooth, rocky, outcrop) in images obtained by HiRISE, MSL Mastcam and MSL Navcam. In addition, a preliminary longitudinal analysis has been conducted to examine possible source regions for the bulk of the gravels by correlating sedimentary parameters with distance from either Peace Vallis or Mt. Sharp, with evidence of trends associated with inter-clast spacing. Sets of stereo MARDI images that allow the creation of high vertical precision (±2 mm) digital elevation models (DEMs) have also been acquired via two techniques. First, MARDI images taken at each step of comprehensive wheel imaging activities yield 5 MARDI images with ~50% overlap. Second, two MARDI sidewalk video imaging mode (SVIM) experiments yielded a continuous set of images during a rover drive with 75-82% overlap. DEMs created via either technique can be used to quantify terrain properties and measure clast cross-sectional shapes. The microtopographic properties of the Sol 651 surface (e.g., mean/sigma of local elevation and slopes) and its sub-cm texture as measured in the SVIM-derived DEM are consistent with the characterization of the terrain as smooth with few rocks >20 cm.
Characterization of the Sol 691 surface as rockier than the Sol 651 surface is supported by a DEM with larger local slopes at cm scales (mean ~40°), slope variances over 20° and cm-scale height variance in excess of 2 cm. Ultimately, the analysis of clast sizes, spatial patterns, areal cover, and 3D shapes is expected to result in the identification of sub-populations, potentially tied to depositional events in the region and their sources.

  14. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

Embedded systems have been applied to many fields, including households and industrial sites, and user-interface technology with simple on-screen displays has been implemented more and more. User demands are increasing and such systems have ever more fields of application owing to the high penetration rate of the Internet; therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. This system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory at the Linux web server, and each frame from the web camera is compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system is a ported embedded Linux kernel with a mounted root file system. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program developed with the TCP/IP protocol.
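The displacement-vector measurement can be sketched as a sum-of-absolute-differences (SAD) block search between consecutive frames; this is a generic minimal version (flat grayscale lists, exhaustive search), not the paper's optimized implementation:

```python
def block_match(prev, curr, width, bx, by, size=4, search=3):
    """Displacement (dx, dy) of the block at (bx, by) between two flat
    row-major grayscale frames, found by minimising the sum of
    absolute differences over a +/- `search` pixel window."""
    height = len(prev) // width

    def block(img, x, y):
        return [img[(y + r) * width + (x + c)]
                for r in range(size) for c in range(size)]

    ref = block(prev, bx, by)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + size > width or y + size > height:
                continue
            cost = sum(abs(a - b) for a, b in zip(block(curr, x, y), ref))
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

The resulting (dx, dy) vector is what the tracking loop converts into pan/tilt commands sent over the serial link.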

  15. High-Resolution Images with Minimum Energy Dissipation and Maximum Field-of-View in Camera-Based Wireless Multimedia Sensor Networks

    PubMed Central

    Aghdasi, Hadi S.; Bisadi, Pouya; Moghaddam, Mohsen Ebrahimi; Abbaspour, Maghsoud

    2009-01-01

High-resolution images with a wide field of view are important in realizing many applications of wireless multimedia sensor networks. Previous works, which generally use a multi-tier topology and provide such images by increasing the capabilities of camera sensor nodes, lead to an increase in network cost. Moreover, the resulting energy consumption is a considerable issue that has not been seriously addressed in previous works. In this paper, high-resolution images with a wide field of view are generated without increasing the total cost of the network and with minimum energy dissipation. This is achieved by using image stitching in WMSNs, designing a two-tier network topology with a new structure, and proposing a camera selection algorithm. In the proposed two-tier structure, low-cost camera sensor nodes are used only in the lower tier and sensor nodes without cameras are used in the upper tier, which decreases total network cost as much as possible. Also, since a simplified image stitching method is implemented and a new algorithm for selecting active nodes is utilized, energy dissipation in the network is decreased by applying the proposed methods. The results of simulations support these statements. PMID:22454591

  16. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging.

    PubMed

    Ellefsen, Kyle L; Settle, Brett; Parker, Ian; Smith, Ian F

    2014-09-01

Local Ca(2+) transients such as puffs and sparks form the building blocks of cellular Ca(2+) signaling in numerous cell types. They have traditionally been studied by linescan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s(-1)) imaging of subcellular Ca(2+) signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 GB min(-1)) than linescan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate identification and analysis of local Ca(2+) events. This features an intuitive graphical user interface and runs under Matlab and the open-source Python software. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca(2+) release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing amplitudes and kinetics of events. PMID:25047761
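The detection step, thresholding a noise-filtered trace against baseline fluctuations, can be sketched in one dimension as follows. This is a deliberate simplification with a boxcar smoother and a global median baseline; the published algorithm operates on full image sequences with spatial as well as temporal filtering:

```python
def detect_events(trace, k=3.0, win=3):
    """Flag frames where a boxcar-smoothed fluorescence trace exceeds
    baseline + k*sigma. A global median baseline is an assumed
    simplification of the published per-pixel baseline tracking."""
    n = len(trace)
    smooth = [sum(trace[max(0, i - win):i + win + 1])
              / len(trace[max(0, i - win):i + win + 1]) for i in range(n)]
    base = sorted(smooth)[n // 2]  # median as baseline estimate
    sigma = (sum((s - base) ** 2 for s in smooth) / n) ** 0.5
    return [i for i, s in enumerate(smooth) if s > base + k * sigma]
```

In the 2D case the same idea runs per pixel, and detected frames are grouped into events whose centroids give the sub-pixel release-site locations.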

  17. An algorithm for automated detection, localization and measurement of local calcium signals from camera-based imaging

    PubMed Central

    Ellefsen, Kyle; Settle, Brett; Parker, Ian; Smith, Ian

    2014-01-01

Summary Local Ca2+ transients such as puffs and sparks form the building blocks of cellular Ca2+ signaling in numerous cell types. They have traditionally been studied by line scan confocal microscopy, but advances in TIRF microscopy together with improved electron-multiplied CCD (EMCCD) cameras now enable rapid (>500 frames s-1) imaging of subcellular Ca2+ signals with high spatial resolution in two dimensions. This approach yields vastly more information (ca. 1 GB per minute) than line scan imaging, rendering visual identification and analysis of the imaged local events both laborious and subject to user bias. Here we describe a routine to rapidly automate identification and analysis of local Ca2+ events. This features an intuitive graphical user interface and runs under Matlab and the open-source Python software. The underlying algorithm features spatial and temporal noise filtering to reliably detect even small events in the presence of noisy and fluctuating baselines; localizes sites of Ca2+ release with sub-pixel resolution; facilitates user review and editing of data; and outputs time-sequences of fluorescence ratio signals for identified event sites along with Excel-compatible tables listing amplitudes and kinetics of events. PMID:25047761

  18. Using a depth-sensing infrared camera system to access and manipulate medical imaging from within the sterile operating field

    PubMed Central

    Strickland, Matt; Tremaine, Jamie; Brigley, Greg; Law, Calvin

    2013-01-01

    Background As surgical procedures become increasingly dependent on equipment and imaging, the need for sterile members of the surgical team to have unimpeded access to the nonsterile technology in their operating room (OR) is of growing importance. To our knowledge, our team is the first to use an inexpensive infrared depth-sensing camera (a component of the Microsoft Kinect) and software developed in-house to give surgeons a touchless, gestural interface with which to navigate their picture archiving and communication systems intraoperatively. Methods The system was designed and developed with feedback from surgeons and OR personnel and with consideration of the principles of aseptic technique and gestural controls in mind. Simulation was used for basic validation before trialing in a pilot series of 6 hepatobiliary-pancreatic surgeries. Results The interface was used extensively in 2 laparoscopic and 4 open procedures. Surgeons primarily used the system for anatomic correlation, real-time comparison of intraoperative ultrasound with preoperative computed tomography and magnetic resonance imaging scans and for teaching residents and fellows. Conclusion The system worked well in a wide range of lighting conditions and procedures. It led to a perceived increase in the use of intraoperative image consultation. Further research should be focused on investigating the usefulness of touchless gestural interfaces in different types of surgical procedures and its effects on operative time. PMID:23706851

  19. A study of the massive star-forming region M8 using images from the Spitzer Infrared Array Camera

    NASA Astrophysics Data System (ADS)

    Kumar, Dewangan Lokesh; Anandarao, B. G.

    2010-09-01

We present photometry and images (3.6, 4.5, 5.8 and 8.0 μm) from the Spitzer Infrared Array Camera (IRAC) of the star-forming region Messier 8 (M8). The IRAC photometry reveals ongoing star formation in the M8 complex, with 64 class 0/I and 168 class II sources identified in several locations in the vicinity of submm gas cores/clumps. Nearly 60 per cent of these young stellar objects (YSOs) occur in about seven small clusters. The spatial surface density of the clustered YSOs is determined to be about 10-20 YSOs pc-2. Fresh star formation by the process of `collect and collapse' might have been triggered by the expanding HII regions and winds from massive stars. IRAC ratio images are generated and studied in order to identify possible diagnostic emission regions in M8. The image of 4.5/8.0 μm reveals a Brα counterpart of the optical Hourglass HII region, while the ratio 8.0/4.5 μm indicates PAH emission in a cavity-like structure to the east of the Hourglass. The ratio maps of 3.6/4.5, 5.8/4.5 and 8.0/4.5 μm seem to identify PAH emission regions in the sharp ridges and filamentary structures seen east to west and north-east to south-west in the M8 complex.

  20. A three-camera imaging microscope for high-speed single-molecule tracking and super-resolution imaging in living cells

    NASA Astrophysics Data System (ADS)

    English, Brian P.; Singer, Robert H.

    2015-08-01

    Our aim is to develop quantitative single-molecule assays to study when and where molecules are interacting inside living cells and where enzymes are active. To this end we present a three-camera imaging microscope for fast tracking of multiple interacting molecules simultaneously, with high spatiotemporal resolution. The system was designed around an ASI RAMM frame using three separate tube lenses and custom multi-band dichroics to allow for enhanced detection efficiency. The frame times of the three Andor iXon Ultra EMCCD cameras are hardware synchronized to the laser excitation pulses of the three excitation lasers, such that the fluorophores are effectively immobilized during frame acquisitions and do not yield detections that are motion-blurred. Stroboscopic illumination allows robust detection from even rapidly moving molecules while minimizing bleaching, and since snapshots can be spaced out with varying time intervals, stroboscopic illumination enables a direct comparison to be made between fast and slow molecules under identical light dosage. We have developed algorithms that accurately track and co-localize multiple interacting biomolecules. The three-color microscope combined with our co-movement algorithms have made it possible for instance to simultaneously image and track how the chromosome environment affects diffusion kinetics or determine how mRNAs diffuse during translation. Such multiplexed single-molecule measurements at a high spatiotemporal resolution inside living cells will provide a major tool for testing models relating molecular architecture and biological dynamics.

  1. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  2. Process control of laser conduction welding by thermal imaging measurement with a color camera.

    PubMed

    Bardin, Fabrice; Morgan, Stephen; Williams, Stewart; McBride, Roy; Moore, Andrew J; Jones, Julian D C; Hand, Duncan P

    2005-11-10

    Conduction welding offers an alternative to keyhole welding. Compared with keyhole welding, it is an intrinsically stable process because vaporization phenomena are minimal. However, as with keyhole welding, an on-line process-monitoring system is advantageous for quality assurance to maintain the required penetration depth, which in conduction welding is more sensitive to changes in heat sinking. The maximum penetration is obtained when the surface temperature is just below the boiling point, and so we normally wish to maintain the temperature at this level. We describe a two-color optical system that we have developed for real-time temperature profile measurement of the conduction weld pool. The key feature of the system is the use of a complementary metal-oxide semiconductor standard color camera leading to a simplified low-cost optical setup. We present and discuss the real-time temperature measurement and control performance of the system when a defocused beam from a high power Nd:YAG laser is used on 5 mm thick stainless steel workpieces. PMID:16294956
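The temperature recovery behind such a two-color system can be sketched with ratio pyrometry under the Wien approximation: for a gray body, the emissivity cancels in the ratio of two spectral intensities, leaving temperature as the only unknown. A minimal sketch, assuming the Wien regime and a gray body; the function names and wavelengths are illustrative, not the paper's calibration:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp, emissivity=1.0):
    """Spectral intensity under the Wien approximation (arbitrary overall scale)."""
    return emissivity * lam**-5 * math.exp(-C2 / (lam * temp))

def two_color_temperature(i1, i2, lam1, lam2):
    """Recover temperature from the intensity ratio at two wavelengths;
    the (assumed equal) emissivities cancel in the ratio i1/i2."""
    r = i1 / i2
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (math.log(r) - 5.0 * math.log(lam2 / lam1))
```

Inverting the ratio this way is what makes a standard color camera usable as a low-cost thermal imager: each pair of color channels provides one such ratio per pixel.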

  3. Turbulence studies by fast camera imaging experiments in the TJII stellarator

    NASA Astrophysics Data System (ADS)

    Carralero, D.; de la Cal, E.; de Pablos, J. L.; de Coninck, A.; Alonso, J. A.; Hidalgo, C.; van Milligen, B. Ph.; Pedrosa, M. A.

    2009-06-01

    Experimental studies of turbulence and plasma-wall interaction by fast visible imaging in the TJII stellarator are presented. Visualization of low-intensity phenomena was made possible by the installation of image intensifiers, allowing the direct observation of turbulent transport events in the scrape-off layer (SOL). Turbulent structures propagating poloidally at the plasma edge show enough periodicity to allow correlation between them, with repetition rates around 10 kHz, as in a quasicoherent mode. The analysis of plasma-wall interaction by filtered imaging has shown a change in the 'preferred' plasma-wall interaction area, going from a hard-core, toroidally limited plasma to a poloidally, locally limited one in the transition from ECRH- to NBI-heated plasmas.

  4. Gamma camera radionuclide images: improved contrast with energy-weighted acquisition

    SciTech Connect

    Halama, J.R.; Henkin, R.E.; Friend, L.E.

    1988-11-01

    An energy-weighted acquisition (EWA) technique has been developed that utilizes all scintillation events, weighting their contributions depending on their energy, to formulate a radionuclide image. Photopeak events from primary radiation contribute positively; scatter events contribute negatively, providing for scatter subtraction and improved image contrast. EWA is employed with an on-line weighted-acquisition module (WAM) as the data are acquired, rather than as a postprocessing technique. EWA was compared with normal window imaging in patients and in phantoms. For gallium-67 and thallium-201, contrast improved by as much as 40%. A much smaller improvement in contrast was observed with technetium-99m due to its ideal monoenergetic emissions. Single photon emission computed tomographic studies also showed improved contrast and were without artifact. EWA has great promise, and with further development quantitative scatter correction may be possible.
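The event-by-event weighting idea behind EWA can be illustrated with a toy scheme: events near the photopeak add to the image, while lower-energy (scatter-dominated) events subtract, performing scatter correction during acquisition. The window and weight values below are assumptions for illustration, not the actual WAM calibration:

```python
import numpy as np

def energy_weight(e_kev, photopeak=140.0, window=10.0):
    """Illustrative weighting: +1 near the photopeak, a negative weight for
    the scatter band below it (values are assumed, not the published ones)."""
    if abs(e_kev - photopeak) <= window:
        return 1.0
    if e_kev < photopeak - window:
        return -0.3  # assumed scatter penalty
    return 0.0

def weighted_image(events, shape):
    """Accumulate weighted events into an image.
    events: iterable of (row, col, energy_keV)."""
    img = np.zeros(shape)
    for r, c, e in events:
        img[r, c] += energy_weight(e)
    return img
```

Because every scintillation event contributes, no counts are discarded as in conventional window imaging; scatter events instead actively cancel the haze they would otherwise add.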

  5. Observation of Geocorona using Lyman Alpha Imaging CAmera (LAICA) onboard the very small deep space explorer PROCYON

    NASA Astrophysics Data System (ADS)

    Kameda, Shingo; Yoshikawa, Ichiro; Taguchi, Makoto; Sato, Masaki; Kuwabara, Masaki

    Exospheric hydrogen atoms resonantly scatter solar ultraviolet radiation at a wavelength of 121.567 nm, causing an ultraviolet glow known as the "geocorona". Past observational results suggest that the geocorona extends to an altitude of about 20 R_E, where the intensity of geocoronal emission is comparable with that of interplanetary hydrogen emission. Recently, Bailey and Gruntman (2013) reported abrupt temporary increases (from 6% to 17%) in the total number of hydrogen atoms in the spherical shell from a geocentric distance of 3 to 8 R_E during geomagnetic storms. However, the relation between the hydrogen exosphere at high altitude and geomagnetic activity is still unclear. Past observation of the geocorona has mainly been performed from Earth orbiters, whose altitudes (e.g., 8 R_E) are not high enough for observing the geocorona at high altitude. Observation of the geocorona from deep space has been conducted in the Mariner 5, Apollo 16, and Nozomi missions. Among them, only Apollo 16 carried a 2-D imager, whose FOV of about 10 R_E was not wide enough for imaging the whole geocorona extending to 20 R_E. In June 2013, we proposed the LAICA (Lyman Alpha Imaging CAmera) instrument onboard the very small deep space explorer PROCYON, which is planned to be launched in Dec 2014. Its FOV (~25 R_E) is wide enough for imaging the whole geocoronal distribution. We are planning to observe the geocorona for more than one week with a temporal resolution of 2 h. LAICA was approved in Oct 2013 and its development is now ongoing. In this presentation, we will introduce the scientific objectives of LAICA and report test results for the flight model.

  6. Performance of the low light level CCD camera for speckle imaging

    E-print Network

    Saha, S K

    2002-01-01

    A new generation CCD detector called low light level CCD (L3CCD) that performs like an intensified CCD without incorporating a micro channel plate (MCP) for light amplification was procured and tested. A series of short exposure images with millisecond integration time has been obtained. The L3CCD is cooled to about $-80^\\circ$ C by Peltier cooling.

  7. A simulation tool for evaluating digital camera image quality Joyce E. Farrella

    E-print Network

    Wandell, Brian A.

    The simulation tool provides color tools and metrics based on international standards (chromaticity coordinates, CIELAB and others) that assist the engineer in evaluating the color accuracy and quality of the rendered image. Keywords: Digital

  8. Proposal and verification of two methods for evaluation of the human iris video-camera images

    NASA Astrophysics Data System (ADS)

    Machala, L.; Pospisil, J.

    This article proposes two new methods for the statistical and computational evaluation of the iris structure of a human eye for personal identification, based partly on correlation analysis and partly on the direct comparison of commensurable regions of digitized iris images. The results of measurements made with both methods are presented, compared and evaluated.

  9. Performance of the low light level CCD camera for speckle imaging

    E-print Network

    S. K. Saha; V. Chinnappan

    2002-09-20

    A new generation CCD detector called low light level CCD (L3CCD) that performs like an intensified CCD without incorporating a micro channel plate (MCP) for light amplification was procured and tested. A series of short exposure images with millisecond integration time has been obtained. The L3CCD is cooled to about $-80^\\circ$ C by Peltier cooling.

  10. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated due to improper preservation or overuse. Furthermore, manually transcribing the data into a computer is laborious and error-prone. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for intersection point location is devised, guided by the grid distribution information, so that point-by-point processing is avoided. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.
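The gradient-and-color fusion step can be sketched as a per-pixel score that combines similarity to the curve's ink color with local gradient strength. The weights, threshold, and function name below are illustrative placeholders, not the values or the exact adaptive scheme used in the paper:

```python
import numpy as np

def fuse_binarize(rgb, curve_rgb, w_color=0.6, w_grad=0.4, thresh=0.5):
    """Toy fusion of color and gradient evidence for curve binarization.
    rgb: HxWx3 float array in [0, 1]; curve_rgb: reference ink color."""
    # Color evidence: similarity to the recorded curve's ink color.
    dist = np.linalg.norm(rgb - np.asarray(curve_rgb), axis=2)
    color_score = 1.0 - dist / np.sqrt(3.0)
    # Gradient evidence from the luminance channel (finite differences).
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    grad_score = grad / (grad.max() + 1e-12)
    # Weighted fusion, then a fixed threshold (adaptive in the real system).
    score = w_color * color_score + w_grad * grad_score
    return score > thresh
```

Fusing the two cues is what lets faint or stained curve segments survive binarization: a pixel weak in one channel of evidence can still be kept if the other is strong.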

  11. Adaptive and Hybrid Genetic Approaches for Estimating the Camera Motion from Image Point Correspondences

    E-print Network

    Barreto, Joao

    Correspondences Francisco Vasconcelos DEEC/ISR University of Coimbra Coimbra, Portugal fpv@isr.uc.pt Carlos Henggeler Antunes DEEC/INESC University of Coimbra Coimbra, Portugal ch@deec.uc.pt João P. Barreto DEEC/ISR University of Coimbra Coimbra, Portugal jpbar@deec.uc.pt ABSTRACT Rigid motion estimation from image point

  12. First Demonstration of Imaging Cosmic Muons in a Two-Phase Liquid Argon TPC using an EMCCD Camera and a THGEM

    E-print Network

    Mavrokoridis, K; McCormick, K J; Paudyal, P; Roberts, A; Smith, N A; Touramanis, C

    2015-01-01

    Colossal two-phase Liquid Argon Time Projection Chambers (LAr TPCs) are a proposed option for future long-baseline neutrino experiments. This study illustrates the feasibility of using an EMCCD camera to capture light induced by single cosmic events in a two-phase LAr TPC employing a THGEM. An Andor iXon Ultra 897 EMCCD camera was externally mounted via a borosilicate glass viewport on the Liverpool two-phase LAr TPC. The camera successfully captured the secondary scintillation light produced at the THGEM holes that had been induced by cosmic events. The light collection capability of the camera for various EMCCD gains was assessed. For a THGEM gain of 64 and an EMCCD gain of 1000, clear images were captured with an average signal-to-noise ratio of 6. Preliminary 3D reconstruction of straight cosmic muon tracks has been performed by combining the camera images, PMT signals and THGEM charge data. Reconstructed cosmic muon tracks were used to determine THGEM gain and to calibrate the intensity levels of the EMC...

  13. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  14. Assessment of a Monte-Carlo simulation of SPECT recordings from a new-generation heart-centric semiconductor camera: from point sources to human images

    NASA Astrophysics Data System (ADS)

    Imbert, Laetitia; Galbrun, Ernest; Odille, Freddy; Poussier, Sylvain; Noel, Alain; Wolf, Didier; Karcher, Gilles; Marie, Pierre-Yves

    2015-02-01

    Geant4 application for tomographic emission (GATE), a Monte-Carlo simulation platform, has previously been used for optimizing tomoscintigraphic images recorded with scintillation Anger cameras but not with the new-generation heart-centric cadmium-zinc-telluride (CZT) cameras. Using the GATE platform, this study aimed to simulate SPECT recordings from one of these new CZT cameras and to assess this simulation by direct comparison between simulated and actual recorded data, ranging from point sources to human images. Geometry and movement of detectors, as well as their respective energy responses, were modeled for the CZT ‘D.SPECT’ camera in the GATE platform. Both simulated and actual recorded data were obtained from: (1) point and linear sources of 99mTc for compared assessments of detection sensitivity and spatial resolution, (2) a cardiac insert filled with a 99mTc solution for compared assessments of contrast-to-noise ratio and sharpness of myocardial borders and (3) a patient with myocardial infarction, using segmented cardiac magnetic resonance images. Most of the data from the simulated images exhibited high concordance with the results of actual images, with relative differences of only: (1) 0.5% for detection sensitivity, (2) 6.7% for spatial resolution, (3) 2.6% for contrast-to-noise ratio and 5.0% for sharpness index on the cardiac insert placed in a diffusing environment. There was also good concordance between actual and simulated gated-SPECT patient images for the delineation of the myocardial infarction area, although the quality of the simulated images was clearly superior, with increases of around 50% for both contrast-to-noise ratio and sharpness index. SPECT recordings from a new heart-centric CZT camera can be simulated with the GATE software with high concordance relative to the actual physical properties of this camera. These simulations may be conducted up to the stage of human SPECT images, even if further refinement is needed in this setting.

  15. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.

    2003-01-01

    We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ±0.22 pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ≈3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.
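The quoted vertical precision follows from standard stereo error propagation: a sub-pixel matching error maps into a height error through the ground sample distance and the stereo pair's parallax-to-height (base-to-height) ratio. A minimal sketch of the relation; the numeric values in the usage are hypothetical, not a specific MOC stereopair:

```python
def stereo_vertical_precision(gsd_m, match_err_px, parallax_height_ratio):
    """Expected vertical precision (m) of a stereo DEM: the image-matching
    error (in pixels) scaled to ground units by the GSD and divided by the
    stereo geometry's parallax/height ratio."""
    return gsd_m * match_err_px / parallax_height_ratio
```

For example, a 3 m GSD, 0.22-pixel matching, and an assumed parallax/height ratio of 0.2 give a vertical precision of about 3.3 m, consistent with the "few meters" quoted above.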

  16. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  17. Test measurements with a new neutron imaging alignment camera at ISIS

    NASA Astrophysics Data System (ADS)

    Bartoli, L.; Aliotta, F.; Grazzi, F.; Salvato, G.; Vasi, C. S.; Zoppi, M.

    2008-10-01

    A low-cost neutron imaging device has been tested on the Italian Neutron Experimental Station (INES) beamline at ISIS, the pulsed neutron source of Rutherford Appleton Laboratory (UK). This was originally planned for alignment purposes but it turned out that its use could be extended to neutron radiography. The preliminary results are promising and the present prototype is proposed as a basis for developing a useful and quick device for beam monitoring and sample positioning on ISIS instruments.

  18. Wide FastCam: a wide field imaging camera for the TCS

    NASA Astrophysics Data System (ADS)

    Murga, Gaizka; Oscoz, Alejandro; López, Roberto; Campo, Ramón; Etxegarai, Urtats; Pallé, Enric

    2014-08-01

    The FastCam instrument, jointly developed by the IAC and the UPCT, allows real-time acquisition, selection and storage of images with a resolution that reaches the diffraction limit of medium-sized telescopes. FastCam incorporates a specially designed software package to analyze series of tens of thousands of images in parallel with the data acquisition at the telescope. This instrument, well tested and extensively used, has led to another instrument with slightly different characteristics: Wide FastCam. Although it uses the same data-acquisition software, the objective this time is not lucky imaging but fast observations (several frames per second) over a much larger field of view. Wide FastCam consists of a 1k x 1k EMCCD detector and different optics offering a ~8 arcmin FOV. IDOM collaborated with the IAC in the design of a high-stability optical bench for the implementation of FastCam at the Telescopio Carlos Sánchez (TCS) and is currently collaborating in the implementation of Wide FastCam at the same telescope.

  19. Noninvasive imaging of human skin hemodynamics using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Tanaka, Noriyuki; Kawase, Tatsuya; Maeda, Takaaki; Yuasa, Tomonori; Aizu, Yoshihisa; Yuasa, Tetsuya; Niizeki, Kyuichi

    2011-08-01

    In order to visualize human skin hemodynamics, we investigated a method specifically developed to visualize the concentrations of oxygenated blood, deoxygenated blood, and melanin in skin tissue from digital RGB color images. Images of total blood concentration and oxygen saturation can also be reconstructed from the oxygenated and deoxygenated blood results. Experiments using tissue-like agar gel phantoms demonstrated the ability of the method to quantitatively visualize the transition from oxygenated to deoxygenated blood in the dermis. In vivo imaging of the chromophore concentrations and tissue oxygen saturation in the skin of the human hand was performed for 14 subjects during upper limb occlusion at 50 and 250 mm Hg. The response of the total blood concentration in the skin acquired by this method and the forearm volume changes obtained from a conventional strain-gauge plethysmograph were comparable during upper arm occlusion at pressures of both 50 and 250 mm Hg. These results indicate the possibility of visualizing the hemodynamics of subsurface skin tissue.
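Chromophore maps of this kind are commonly obtained by inverting a modified Beer-Lambert model per pixel: channel absorbances are modeled as a linear mix of chromophore concentrations, which a least-squares solve recovers. The 3×3 extinction matrix below is made up for the sketch (real methods use measured spectra and Monte-Carlo-derived regressions), and the function name is ours:

```python
import numpy as np

# Illustrative extinction matrix (rows: R, G, B channels; columns: oxy-Hb,
# deoxy-Hb, melanin). Values are assumptions, not real absorption spectra.
E = np.array([[0.2, 0.8, 1.0],
              [1.0, 1.2, 1.4],
              [0.6, 0.5, 1.8]])

def unmix_chromophores(rgb_reflectance):
    """Estimate per-pixel chromophore concentrations via a modified
    Beer-Lambert model: absorbance = E @ concentrations (least squares)."""
    absorbance = -np.log(np.clip(rgb_reflectance, 1e-6, 1.0))
    conc, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return conc
```

Total blood concentration and oxygen saturation then follow directly from the oxy- and deoxy-hemoglobin components (sum and ratio, respectively).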

  20. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuations of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ≈1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≲1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies.
Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these source-subtracted CIB fluctuations, and to isolate the individual fluxes of these populations.

  1. Measurement of marine picoplankton cell size by using a cooled, charge-coupled device camera with image-analyzed fluorescence microscopy

    SciTech Connect

    Viles, C.L.; Sieracki, M.E.

    1992-02-01

    Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 μm) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 μm) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured.
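The preference for edge-strength operators over global thresholds can be illustrated on a 1-D intensity profile across a cell: locating the two gradient-magnitude peaks sizes the cell independently of any global brightness cutoff, so dim cells are not missed or shrunk. A simplified sketch, not the paper's adaptive edge-strength operator:

```python
import numpy as np

def profile_diameter(profile, pixel_um):
    """Size a cell from a 1-D intensity profile by locating the two
    gradient-magnitude peaks (edges) on either side of the profile center,
    rather than applying a global intensity threshold."""
    grad = np.abs(np.gradient(np.asarray(profile, dtype=float)))
    mid = len(grad) // 2
    left = int(np.argmax(grad[:mid]))          # strongest rising edge
    right = mid + int(np.argmax(grad[mid:]))   # strongest falling edge
    return (right - left) * pixel_um
```

Because the edge positions depend only on where the intensity changes fastest, the same code sizes a bright microsphere and a dim stained bacterium consistently.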

  2. Automated image acquisition and processing using a new generation of 4K x 4K CCD cameras for cryo electron microscopic studies of macromolecular assemblies.

    PubMed

    Zhang, Peijun; Borgnia, Mario J; Mooney, Paul; Shi, Dan; Pan, Ming; O'Herron, Philip; Mao, Albert; Brogan, David; Milne, Jacqueline L S; Subramaniam, Sriram

    2003-08-01

    We have previously reported the development of AutoEM, a software package for semi-automated acquisition of data from a transmission electron microscope. In continuing efforts to improve the speed of structure determination of macromolecular assemblies by electron microscopy, we report here on the performance of a new generation of 4K CCD cameras for use in cryo electron microscopic applications. We demonstrate that at 120 kV, and at a nominal magnification of 67000×, power spectra and signal-to-noise ratios for the new 4K CCD camera are comparable to values obtained for film images scanned using a Zeiss scanner to resolutions as high as approximately 1/6.5 Å⁻¹. The specimen area imaged for each exposure on the 4K CCD is about one-third of the area that can be recorded with a similar exposure on film. The CCD camera also serves the purpose of recording images at low magnification from the center of the hole to measure the thickness of vitrified ice in the hole. The performance of the camera is satisfactory under the low-dose conditions used in cryo electron microscopy, as demonstrated here by the determination of a three-dimensional map at 15 Å for the catalytic core of the 1.8 MDa Bacillus stearothermophilus icosahedral pyruvate dehydrogenase complex, and its comparison with the previously reported atomic model for this complex obtained by X-ray crystallography. PMID:12972350

  3. Imaging with organic indicators and high-speed charge-coupled device cameras in neurons: some applications where these classic techniques have advantages.

    PubMed

    Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan

    2015-04-01

    Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events. PMID:26157996

  4. Imaging performance comparison between a LaBr₃:Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera

    SciTech Connect

    Russo, P.; Mettivier, G.; Pani, R.; Pellegrini, R.; Cinti, M. N.; Bennati, P.

    2009-04-15

    The authors report on the performance of two small-field-of-view, compact gamma cameras working in single photon counting in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr₃:Ce scintillator continuous crystal (49 × 49 × 5 mm³) assembled with a flat panel multianode photomultiplier tube with parallel readout. The second one belongs to the class of semiconductor hybrid pixel detectors, specifically, a CdTe pixel detector (14 × 14 × 1 mm³) with 256 × 256 square pixels and a pitch of 55 μm, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with a selectable energy window while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 μm, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported.
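For photon-counting images like these, region SNR and contrast recovery reduce to simple Poisson statistics. A minimal sketch of the two figures of merit; the counts in the usage example are hypothetical, not the paper's measurements:

```python
import math

def planar_image_snr(signal_counts, background_counts):
    """Poisson-limited SNR of a region in a photon-counting planar image:
    net signal over the shot noise of the total counts in the region."""
    return signal_counts / math.sqrt(signal_counts + background_counts)

def contrast_recovery(measured_contrast, true_contrast):
    """Fraction of the true object contrast recovered in the image."""
    return measured_contrast / true_contrast
```

For instance, 900 signal counts over 100 background counts give an SNR of about 28.5; and a measured contrast of 0.4 against a true contrast of 0.5 is 80% contrast recovery.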

  5. Mechanical design of the IRIS2 infrared imaging camera and spectrograph

    NASA Astrophysics Data System (ADS)

    Smith, Greg; Churilov, Vladimir

    2003-03-01

    We describe the mechanical, optomechanical and thermal design and development of the IRIS2, an infrared imager and spectrograph for operation at the Cassegrain focus of the Anglo Australian Telescope. IRIS2 is reconfigured by four encoded worm driven wheels which carry slits and slit masks, filters, cold stops, and grisms, and a pupil imager. A detector translator provides fine focus. The instrument is housed in a split, or dual, vacuum vessel. Helium cryo-coolers provide operational cooling, but to reduce turn around time during commissioning and maintenance a liquid nitrogen pre-cooling system has been implemented in the main vessel. The slit wheel is housed in a separate, smaller vessel, which may be thermally cycled when new slit masks are installed, while the rest of the instrument remains at operational temperature. The common plate between the vessels serves as the structural base on which the instrument is assembled. Matched trusses on opposite sides of the plate minimize the relative deflection between the slit wheel assembly and the spectrograph optics.

  6. Molecular Shocks Associated with Massive Young Stars: CO Line Images with a New Far-Infrared Spectroscopic Camera on the Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Watson, Dan M.

    1997-01-01

    Under the terms of our contract with NASA Ames Research Center, the University of Rochester (UR) offers the following final technical report on grant NAG 2-958, "Molecular shocks associated with massive young stars: CO line images with a new far-infrared spectroscopic camera," awarded for implementation of the UR Far-Infrared Spectroscopic Camera (FISC) on the Kuiper Airborne Observatory (KAO) and use of this camera for observations of star-formation regions. Two KAO flights in FY 1995, the final year of KAO operations, were awarded to this program, conditional upon a technical readiness confirmation, which was given in January 1995. The funding period covered in this report is 1 October 1994 - 30 September 1996. The project was supported with $30,000, and no funds remained at the conclusion of the project.

  7. The Multi-Temporal Database of High Resolution Stereo Camera (HRSC) and Planetary Images of Mars (MUTED): A Tool to Support the Identification of Surface Changes

    NASA Astrophysics Data System (ADS)

    Erkeling, G.; Luesebrink, D.; Hiesinger, H.; Reiss, D.; Jaumann, R.

    2015-10-01

    Image data transmitted to Earth by Martian spacecraft since the 1970s, for example by Mariner and Viking, Mars Global Surveyor (MGS), Mars Express (MEx) and the Mars Reconnaissance Orbiter (MRO), have shown that the surface of Mars has changed dramatically and is continually changing [e.g., 1-8]. The changes are attributed to a large variety of atmospheric, geological and morphological processes, including eolian processes [9,10], mass wasting [11], changes of the polar caps [12] and impact cratering [13]. The detection of surface changes in planetary image data is closely related to the spatial and temporal availability of images in a specific region. While previews of the images are available at ESA's Planetary Science Archive (PSA), through the NASA Planetary Data System (PDS) and via other less frequently used databases, there is no way to quickly and conveniently assess the spatial and temporal availability of HRSC images and other planetary image data in a specific region, which is essential for detecting the surface changes that occurred between two or more images. In addition, it is complicated to get an overview of the image quality and label information for images covering the same area. However, the investigation of surface changes represents a key element of martian research and has implications for the geologic, morphologic and climatic evolution of Mars. In order to address these issues, we developed the "Multi-Temporal Database of High Resolution Stereo Camera (HRSC) Images" (MUTED), a tool for identifying the spatial and multi-temporal coverage of planetary image data from Mars. Scientists will be able to identify the location, number, and time range of acquisition of overlapping HRSC images.
MUTED also includes other planetary image datasets such as those of the Context Camera (CTX), the Mars Orbiter Camera (MOC), the Thermal Emission Imaging System (THEMIS), and the High Resolution Imaging Science Experiment (HiRISE). The database supports the identification and analysis of surface changes and short-lived surface processes on Mars through fast, automatic database queries. From the database and the multi-temporal observations it indexes, we will better understand the interactions between the surface of Mars and external forces, including the atmosphere. MUTED will be made available to the scientific community via the Institut für Planetologie (IfP), Muenster.
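The core query such a multi-temporal database supports, finding images whose footprints intersect and whose acquisition times differ, can be sketched as follows. The record fields and function names are hypothetical simplifications assuming rectangular lon/lat footprints, not MUTED's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ImageRecord:
    """Hypothetical, simplified footprint record for one orbital image."""
    image_id: str
    lon_min: float
    lon_max: float
    lat_min: float
    lat_max: float
    acquired: datetime

def overlapping_pairs(records, min_dt_days=0):
    """Return ID pairs of images whose rectangular footprints intersect
    and whose acquisition times differ by at least min_dt_days."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            spatial = (a.lon_min < b.lon_max and b.lon_min < a.lon_max and
                       a.lat_min < b.lat_max and b.lat_min < a.lat_max)
            temporal = abs((a.acquired - b.acquired).days) >= min_dt_days
            if spatial and temporal:
                pairs.append((a.image_id, b.image_id))
    return pairs
```

A production database would replace the quadratic scan with a spatial index (e.g., an R-tree) and handle footprints that cross the 0°/360° longitude boundary.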

  8. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large-area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  9. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double-star measurements, and the next-generation speckle camera are discussed. Photographs of double-star speckle patterns with separations from 1.4 to 4.7 arcsec are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and the isoplanatic patch of the atmosphere.

  10. Probabilistic models and numerical calculation of system matrix and sensitivity in list-mode MLEM 3D reconstruction of Compton camera images.

    PubMed

    Maxim, Voichita; Lojacono, Xavier; Hilaire, Estelle; Krimmer, Jochen; Testa, Etienne; Dauvergne, Denis; Magnin, Isabelle; Prost, Rémy

    2016-01-01

    This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. Proposed models and numerical calculation strategies are compared through the influence they have on the three-dimensional reconstructed images. The study attempts to address four questions. First, it proposes an analytic model for the system matrix. Second, it suggests a method for its numerical validation with Monte Carlo simulated data. Third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values. Finally, it shows how the system matrix and the sensitivity calculation strategies influence the quality of the reconstructed images. PMID:26639159
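    The list-mode MLEM iteration this class of reconstruction builds on multiplies the current image estimate by a sensitivity-normalized backprojection of per-event ratios. A minimal dense-matrix sketch, assuming a precomputed system matrix `T` and sensitivity vector (variable names are illustrative, not from the paper):

    ```python
    import numpy as np

    def listmode_mlem(T, sensitivity, n_iters=20):
        """List-mode MLEM reconstruction.

        T : (n_events, n_voxels) system matrix; T[i, j] is the probability
            that an emission from voxel j produced recorded event i.
        sensitivity : (n_voxels,) probability that an emission from each
            voxel is detected at all.
        Returns the estimated emission intensity per voxel.
        """
        n_events, n_voxels = T.shape
        lam = np.ones(n_voxels)                # flat initial estimate
        for _ in range(n_iters):
            proj = T @ lam                     # expected value for each event
            proj = np.maximum(proj, 1e-12)     # guard against division by zero
            back = T.T @ (1.0 / proj)          # backproject the event ratios
            lam *= back / np.maximum(sensitivity, 1e-12)
        return lam
    ```

    In a real Compton-camera reconstruction the rows of `T` are computed on the fly from each event's cone geometry rather than stored densely; the update rule itself is unchanged.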

  11. Imaging, Mapping and Monitoring Environmental Radionuclide Transport Using Compton-Geometry Gamma Camera

    NASA Astrophysics Data System (ADS)

    Bridge, J. W.; Dormand, J.; Cooper, J.; Judson, D.; Boston, A. J.; Bankhead, M.; Onda, Y.

    2014-12-01

    The legacy to date of the nuclear disaster at Fukushima Dai-ichi, Japan, has emphasised the fundamental importance of high-quality radiation measurements in soils and plant systems. Current-generation radiometers based on coded-aperture collimation are limited in their ability to locate sources of radiation in three dimensions and require relatively long measurement times due to the poor efficiency of the collimation system. The quality of data they can provide to support biogeochemical process models in such systems is therefore often compromised. In this work we report proof-of-concept experiments demonstrating the potential of an alternative approach to the measurement of environmentally important radionuclides (in particular 137Cs) in quartz sand and soils from the Fukushima exclusion zone. Compton-geometry imaging radiometers harness the scattering of incident radiation between two detectors to yield significant improvements in detection efficiency, energy resolution and spatial location of radioactive sources over a 180° field of view. To our knowledge this is the first application of the technique to environmentally relevant systems with low-activity, dispersed sources, significant background radiation and, crucially, movement over time. We use a simple laboratory column setup to conduct one-dimensional transport experiments for 139Ce and 137Cs in quartz sand and in homogenized, repacked Fukushima soils. Polypropylene columns (15 cm long, 1.6 cm internal diameter) were filled with sand or soil and saturated slowly with tracer-free aqueous solutions. Radionuclides were introduced as 2 mL pulses (step-up, step-down) at the column inlet. Data were collected continuously throughout the transport experiment and then binned into sequential time intervals to resolve the total activity in the column and its progressive movement through the sand/soil.
The objective of this proof-of-concept work is to establish detection limits, optimise image reconstruction algorithms, and develop a novel approach to time-lapse quantification of radionuclide dynamics in the soil-plant system. The aim is to underpin the development of a new generation of Compton radiometers equipped to provide high resolution, dynamic measurements of radionuclides in terrestrial biogeochemical environments.
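The time-binning step described above, collapsing continuously recorded events into sequential intervals and tracking the pulse position down the column, can be sketched as follows (array names and bin choices are illustrative, not the authors' processing pipeline):

```python
import numpy as np

def bin_activity(times, depths, t_edges, z_edges):
    """Histogram list-mode events into (time bin, depth bin) counts."""
    counts, _, _ = np.histogram2d(times, depths, bins=(t_edges, z_edges))
    return counts

def pulse_centroid(counts, z_centers):
    """Activity-weighted mean depth in each time bin (NaN where empty)."""
    totals = counts.sum(axis=1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return (counts @ z_centers) / totals
```

Plotting the centroid against time-bin midpoints gives a direct, time-lapse estimate of the tracer front's velocity through the column.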

  12. The nuclear regions of NGC 3311 and NGC 7768 imaged with the Hubble Space Telescope Planetary Camera

    NASA Technical Reports Server (NTRS)

    Grillmair, Carl J.; Faber, S.M.; Lauer, Tod R.; Baum, William A.; Lynds, Roger C.; O'Neil, Earl J., Jr.; Shaya, Edward J.

    1994-01-01

    We present high-resolution, V-band images of the central regions of the brightest cluster ellipticals NGC 3311 and NGC 7768 taken with the Planetary Camera of the Hubble Space Telescope. The nuclei of both galaxies are found to be obscured by dust, though the morphology of the dust is quite different in the two cases. The dust cloud which obscures the central 3 arcsec of NGC 3311 is complex and irregular, while the central region of NGC 7768 contains a disk of material similar in appearance and scale to that recently observed in HST images of NGC 4261. The bright, relatively blue source detected in ground-based studies of NGC 3311 is marginally resolved and is likely to be a site of ongoing star formation. We examine the distribution of globular clusters in the central regions of NGC 3311. The gradient in the surface density profile of the cluster system is significantly shallower than that found by previous investigators at larger radii. We find a core radius for the cluster distribution of 12 ± 3 kpc, which is even larger than the core radius of the globular cluster system surrounding M87. It is also an order of magnitude larger than the upper limit on the core radius of NGC 3311's stellar light and suggests that the central field-star population and the globular cluster system are dynamically distinct. We briefly discuss possible sources for the cold/warm interstellar material in early-type galaxies. While the issue has not been resolved, models which involve galactic wind failure appear to be most naturally consistent with the observations.

  13. Colloidal quantum dot Vis-SWIR imaging: demonstration of a focal plane array and camera prototype (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Klem, Ethan J. D.; Gregory, Christopher W.; Temple, Dorota S.; Lewis, Jay S.

    2015-08-01

    RTI has developed a photodiode technology based on solution-processed PbS colloidal quantum dots (CQD). These devices are capable of providing low-cost, high performance detection across the Vis-SWIR spectral range. At the core of this technology is a heterojunction diode structure fabricated using techniques well suited to wafer-scale fabrication, such as spin coating and thermal evaporation. This enables RTI's CQD diodes to be processed at room temperature directly on top of read-out integrated circuits (ROIC), without the need for the hybridization step required by traditional SWIR detectors. Additionally, the CQD diodes can be fabricated on ROICs designed for other detector material systems, effectively allowing rapid prototype demonstrations of CQD focal plane arrays at low cost and on a wide range of pixel pitches and array sizes. We will show the results of fabricating CQD arrays directly on top of commercially available ROICs. Specifically, the ROICs are a 640 x 512 pixel format with 15 µm pitch, originally developed for InGaAs detectors. We will show that minor modifications to the surface of these ROICs make them suitable for use with our CQD detectors. Once completed, these FPAs are then assembled into a demonstration camera and their imaging performance is evaluated. In addition, we will discuss recent advances in device architecture and processing resulting in devices with room temperature dark currents of 2-5 nA/cm² and sensitivity from 350 nm to 1.7 µm. This combination of high performance, dramatic cost reduction, and multi-band sensitivity is ideally suited to expand the use of SWIR imaging in current applications, as well as to address applications which require a multispectral sensitivity not met by existing technologies.

  14. In vivo multispectral imaging of the absorption and scattering properties of exposed brain using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Yoshida, Keiichiro; Ishizuka, Tomohiro; Mizushima, Chiharu; Nishidate, Izumi; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2015-04-01

    To evaluate multispectral images of the absorption and scattering properties of the cerebral cortex of rat brain, we investigated spectral reflectance images estimated by the Wiener estimation method using a digital red-green-blue camera. A Monte Carlo simulation-based multiple regression analysis of the corresponding spectral absorbance images at nine wavelengths (500, 520, 540, 560, 570, 580, 600, 730, and 760 nm) was then used to estimate the absorption and scattering parameters. The spectral images of absorption and reduced scattering coefficients were reconstructed from these parameters. We performed in vivo experiments on exposed rat brain to confirm the feasibility of this method. The estimated images of the absorption coefficients were dominated by hemoglobin spectra. The estimated images of the reduced scattering coefficients had a broad scattering spectrum, exhibiting a larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature.
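    Wiener estimation recovers a spectrum from three camera values through a linear estimator built from the autocorrelation of training reflectances and the camera's spectral sensitivities. A minimal sketch under assumed inputs (the matrices here are placeholders, not the authors' calibration data):

    ```python
    import numpy as np

    def wiener_matrix(H, R_rr, noise_var=0.0):
        """Wiener estimator W such that r_hat = W @ v recovers a spectrum
        from a camera response v = H @ r.

        H    : (3, n_wavelengths) camera spectral sensitivities
        R_rr : (n_wavelengths, n_wavelengths) autocorrelation of training spectra
        """
        A = H @ R_rr @ H.T + noise_var * np.eye(H.shape[0])
        return R_rr @ H.T @ np.linalg.inv(A)
    ```

    With `noise_var = 0` the estimator is consistent with the camera model: re-imaging the recovered spectrum reproduces the original RGB response, i.e., `H @ (W @ v) == v` whenever `H @ R_rr @ H.T` is invertible.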

  15. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang

    1994-01-01

    This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.

  16. The 8.3 a