Sample records for interferometric imaging camera

  1. Simultaneous off-axis multiplexed holography and regular fluorescence microscopy of biological cells.

    PubMed

    Nygate, Yoav N; Singh, Gyanendra; Barnea, Itay; Shaked, Natan T

    2018-06-01

    We present a new technique for obtaining simultaneous multimodal quantitative phase and fluorescence microscopy of biological cells, providing both quantitative phase imaging and molecular specificity using a single camera. Our system is based on an interferometric multiplexing module, externally positioned at the exit of an optical microscope. In contrast to previous approaches, the presented technique allows conventional fluorescence imaging, rather than interferometric off-axis fluorescence imaging. We demonstrate the presented technique for imaging fluorescent beads and live biological cells.

  2. Maturing CCD Photon-Counting Technology for Space Flight

    NASA Technical Reports Server (NTRS)

    Mallik, Udayan; Lyon, Richard; Petrone, Peter; McElwain, Michael; Benford, Dominic; Clampin, Mark; Hicks, Brian

    2015-01-01

    This paper discusses charge blooming and starlight saturation, two potential technical problems that arise when using an Electron Multiplying Charge Coupled Device (EMCCD) detector in a high-contrast instrument for imaging exoplanets. These problems especially affect interferometric coronagraphs, i.e., coronagraphs that do not use a mask to physically block starlight in the science channel of the instrument. The problems are illustrated with images taken with a commercial Princeton Instruments EMCCD camera in the Goddard Space Flight Center (GSFC) Interferometric Coronagraph facility. In addition, this paper discusses techniques to overcome such problems, and describes the development and architecture of a Field Programmable Gate Array and Digital-to-Analog Converter based shaped-clock controller for a photon-counting EMCCD camera. The discussion contained here will inform high-contrast imaging groups in their work with EMCCD detectors.
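
    The record describes hardware, but the photon-counting operation it supports is simple to sketch. Below is a minimal, illustrative Python sketch of the standard threshold-based photon-counting scheme used with EMCCDs; the gain, read-noise, and flux values are assumptions for the toy example, not figures from the paper.

    ```python
    import numpy as np

    def photon_count(frames, read_noise_e=30.0, k=5.0):
        """Threshold-based photon counting on EMCCD output frames (electrons).

        Pixels above k * read_noise are declared single-photon events; summing
        many sparse binary frames avoids the EM-register excess-noise factor.
        """
        events = frames > k * read_noise_e     # binary photon / no-photon decision
        return events.sum(axis=0)              # accumulated photon-count image

    # Toy usage: 1000 sparse frames, ~0.05 photons/pixel/frame, EM gain 1000
    rng = np.random.default_rng(0)
    photons = rng.poisson(0.05, size=(1000, 32, 32))
    em_out = rng.gamma(np.maximum(photons, 1e-12), 1000.0) * (photons > 0)
    frames = em_out + rng.normal(0.0, 30.0, size=em_out.shape)
    print(photon_count(frames).mean())         # ~43: 50 events minus ~14% threshold loss
    ```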

  3. Broadband quantitative phase microscopy with extended field of view using off-axis interferometric multiplexing.

    PubMed

    Girshovitz, Pinhas; Frenklach, Irena; Shaked, Natan T

    2015-11-01

    We propose a new portable imaging configuration that can double the field of view (FOV) of existing off-axis interferometric imaging setups, including broadband off-axis interferometers. This configuration is attached at the output port of the off-axis interferometer and optically creates a multiplexed interferogram on the digital camera, which is composed of two off-axis interferograms with straight fringes at orthogonal directions. Each of these interferograms contains a different FOV of the imaged sample. Due to the separation of these two FOVs in the spatial-frequency domain, they can be fully reconstructed separately, while obtaining two complex wavefronts from the sample at once. Since the optically multiplexed off-axis interferogram is recorded by the camera in a single exposure, fast dynamics can be recorded with a doubled imaging area. We used this technique for quantitative phase microscopy of biological samples with extended FOV. We demonstrate attaching the proposed module to a diffractive phase microscopy interferometer, illuminated by a broadband light source. The biological samples used for the experimental demonstrations include microscopic diatom shells, cancer cells, and flowing blood cells.
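
    The spatial-frequency separation the abstract relies on can be sketched compactly: each off-axis interferogram places its cross-correlation lobe at a distinct carrier frequency, so each FOV is recovered by cropping its lobe and inverse-transforming. A minimal Python sketch follows; the carrier positions and window size are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    def demultiplex(interferogram, carrier, halfwidth):
        """Recover one complex wavefront from a multiplexed off-axis interferogram.

        carrier: (row, col) offset of the desired cross-correlation lobe from the
        center of the shifted 2D spectrum, in FFT pixels. Orthogonal fringe
        directions place the two lobes on orthogonal axes, so each can be
        cropped without overlapping the other.
        """
        F = np.fft.fftshift(np.fft.fft2(interferogram))
        cy, cx = F.shape[0] // 2, F.shape[1] // 2
        ry, rx = cy + carrier[0], cx + carrier[1]
        lobe = F[ry - halfwidth: ry + halfwidth, rx - halfwidth: rx + halfwidth]
        return np.fft.ifft2(np.fft.ifftshift(lobe))  # abs -> amplitude, angle -> phase

    # Two FOVs multiplexed with orthogonal fringes (carrier values are assumed):
    # fov1 = demultiplex(img, carrier=(0, 160), halfwidth=70)
    # fov2 = demultiplex(img, carrier=(160, 0), halfwidth=70)
    ```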

  4. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed-sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compressions as low as 10% was also demonstrated.
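
    A minimal sketch of the fast-Hadamard-transform reconstruction path (the compressed-sensing path would replace the final inversion with a sparse solver); the pattern ordering and sizes are illustrative, and the column permutation is one common form of "scrambling":

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    N = 32 * 32                              # 32x32 sensor field, flattened
    H = hadamard(N)                          # +/-1 patterns, H @ H.T = N * I
    rng = np.random.default_rng(1)
    perm = rng.permutation(N)                # scrambled (permuted) pixel order

    scene = rng.random(N)                    # unknown field at the FP sensor
    y = H[:, perm] @ scene                   # one photodiode value per pattern

    res = (H.T @ y) / N                      # inversion via orthogonality (done
    x = res[perm]                            # with a fast transform in practice)
    print(np.allclose(x, scene))             # True
    ```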

  5. International Congress on High Speed Photography and Photonics, 17th, Pretoria, Republic of South Africa, Sept. 1-5, 1986, Proceedings. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    McDowell, M. W.; Hollingworth, D.

    1986-01-01

    The present conference discusses topics in mining applications of high speed photography, ballistic, shock wave and detonation studies employing high speed photography, laser and X-ray diagnostics, biomechanical photography, millisec-microsec-nanosec-picosec-femtosec photographic methods, holographic, schlieren, and interferometric techniques, and videography. Attention is given to such issues as the pulse-shaping of ultrashort optical pulses, the performance of soft X-ray streak cameras, multiple-frame image tube operation, moire-enlargement motion-raster photography, two-dimensional imaging with tomographic techniques, photochron TV streak cameras, and streak techniques in detonics.

  6. Interferometric phase-contrast X-ray CT imaging of VX2 rabbit cancer at 35keV X-ray energy

    NASA Astrophysics Data System (ADS)

    Takeda, Tohoru; Wu, Jin; Tsuchiya, Yoshinori; Yoneyama, Akio; Lwin, Thet-Thet; Hyodo, Kazuyuki; Itai, Yuji

    2004-05-01

    Imaging of large objects at a low x-ray energy of 17.7 keV imposes a huge x-ray exposure on the objects, even when using interferometric phase-contrast x-ray CT (PCCT). Thus, we tried to obtain PCCT images at a high x-ray energy of 35 keV and examined the image quality using a formalin-fixed VX2 rabbit cancer specimen 15 mm in diameter. The PCCT system consisted of an asymmetrically cut silicon (220) crystal, a monolithic x-ray interferometer, a phase shifter, an object cell, and an x-ray CCD camera. The PCCT at 35 keV clearly visualized various inner structures of the VX2 rabbit cancer, such as necrosis, cancer, the surrounding tumor vessels, and normal liver tissue, and image contrast was not degraded significantly. These results suggest that PCCT at 35 keV is sufficient to clearly depict the histopathological morphology of a VX2 rabbit cancer specimen.

  7. Surface imaging microscope

    NASA Astrophysics Data System (ADS)

    Rogala, Eric W.; Bankman, Isaac N.

    2008-04-01

    The three-dimensional shapes of microscopic objects are becoming increasingly important for battlespace CBRNE sensing. Potential applications of microscopic 3D shape observations include characterization of biological weapon particles and manufacturing of micromechanical components. Aerosol signatures of stand-off lidar systems, using elastic backscatter or polarization, are dictated by the aerosol particle shapes and sizes, which must be well characterized in the lab. A low-cost, fast instrument for 3D surface shape microscopy would be a valuable point sensor for biological particle sensing applications. Both the cost and imaging durations of traditional techniques such as confocal microscopes, atomic force microscopes, and scanning electron microscopes are too high. We investigated the feasibility of a low-cost, fast interferometric technique for imaging the 3D surface shape of microscopic objects at frame rates limited only by the camera in the system. The system operates at two laser wavelengths producing two fringe images collected simultaneously by a digital camera, and a specialized algorithm we developed reconstructs the surface map of the microscopic object. The current implementation, assembled to test the concept and develop the new 3D reconstruction algorithm, has 0.25 micron resolution in the x and y directions, and about 0.1 micron accuracy in the z direction, as tested on a microscopic glass test object manufactured with etching techniques. We describe the interferometric instrument, present the reconstruction algorithm, and discuss further development.
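
    The abstract does not spell out the two-wavelength reconstruction algorithm, but the generic synthetic-wavelength principle behind such systems is easy to sketch; the wavelengths below are illustrative assumptions, not the instrument's values.

    ```python
    import numpy as np

    lam1, lam2 = 594.1e-9, 632.8e-9               # illustrative HeNe lines, lam1 < lam2 (m)
    Lambda = lam1 * lam2 / (lam2 - lam1)          # synthetic wavelength, ~9.7 um

    def height_two_wavelength(phi1, phi2):
        """Height from two wrapped reflection phases phi_i = 4*pi*h / lam_i.

        The wrapped difference phase varies on the synthetic wavelength Lambda,
        extending the unambiguous height range from ~lam/2 to ~Lambda/2."""
        dphi = np.angle(np.exp(1j * (phi1 - phi2)))
        return dphi * Lambda / (4 * np.pi)

    h = 2.0e-6                                    # a 2-um step: ambiguous at lam, fine at Lambda
    print(height_two_wavelength(4 * np.pi * h / lam1, 4 * np.pi * h / lam2) * 1e6)  # ~2.0
    ```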

  8. High-repetition-rate interferometric Rayleigh scattering for flow-velocity measurements

    NASA Astrophysics Data System (ADS)

    Estevadeordal, Jordi; Jiang, Naibo; Cutler, Andrew D.; Felver, Josef J.; Slipchenko, Mikhail N.; Danehy, Paul M.; Gord, James R.; Roy, Sukesh

    2018-03-01

    High-repetition-rate interferometric Rayleigh scattering (IRS) velocimetry is demonstrated for non-intrusive, high-speed flow-velocity measurements. High temporal resolution is obtained with a quasi-continuous burst-mode laser that is capable of operating at 10-100 kHz, providing 10-ms bursts with pulse widths of 5-1000 ns and pulse energies > 100 mJ at 532 nm. Coupled with a high-speed camera system, the IRS method is based on imaging the flow field through an etalon with 8-GHz free spectral range and capturing the Doppler shift of the Rayleigh-scattered light from the flow at multiple points having constructive interference. The seed laser permits a laser linewidth of < 150 MHz at 532 nm. The technique is demonstrated in a high-speed jet, and high-repetition-rate image sequences are shown.
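
    As a worked example of the measurement principle (the geometry and flow speed below are assumptions, not the paper's values), the Doppler shift of the scattered light is Δf = (k_s − k_i)·v / 2π, which for a 532 nm beam and a several-hundred-m/s flow lands in the GHz range, resolvable within the 8-GHz etalon free spectral range:

    ```python
    import numpy as np

    lam = 532e-9                        # laser wavelength (m)
    v = np.array([800.0, 0.0, 0.0])     # assumed flow velocity (m/s)

    k_i = (2 * np.pi / lam) * np.array([1.0, 0.0, 0.0])   # incident along +x
    k_s = (2 * np.pi / lam) * np.array([0.0, 1.0, 0.0])   # collected at 90 degrees

    df = (k_s - k_i) @ v / (2 * np.pi)  # Doppler shift of the scattered light
    print(df / 1e9)                     # ~ -1.5 GHz: fits inside an 8-GHz FSR
    ```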

  9. Lunar UV-visible-IR mapping interferometric spectrometer

    NASA Technical Reports Server (NTRS)

    Smith, W. Hayden; Haskin, L.; Korotev, R.; Arvidson, R.; Mckinnon, W.; Hapke, B.; Larson, S.; Lucey, P.

    1992-01-01

    An ultraviolet-visible-infrared mapping digital array scanned interferometer for lunar compositional surveys was developed. The research has defined a no-moving-parts, low-weight and low-power, high-throughput, and electronically adaptable digital array scanned interferometer that achieves measurement objectives encompassing and improving upon all the requirements defined by the LEXSWIG for lunar mineralogical investigation. In addition, LUMIS provides new ultraviolet spectral mapping, high-spatial-resolution line-scan camera, and multispectral camera capabilities. An instrument configuration optimized for spectral mapping and imaging of the lunar surface is described, and spectral results in support of the instrument design are presented.

  10. Observing the Sun with micro-interferometric devices: a didactic experiment

    NASA Astrophysics Data System (ADS)

    Defrère, D.; Absil, O.; Hanot, C.; Riaud, P.; Magette, A.; Marion, L.; Wertz, O.; Finet, F.; Steenackers, M.; Habraken, S.; Surdej, A.; Surdej, J.

    2014-04-01

    Measuring the angular diameter of celestial bodies has long been the main purpose of stellar interferometry and was its historical motivation. Nowadays, stellar interferometry is widely used for various other scientific purposes that require very high angular resolution measurements. In terms of angular spatial scales probed, observing distant stars located 10 to 100 pc away with a large hectometric interferometer is equivalent to observing our Sun with a micrometric baseline. Based on this idea, we have manufactured a set of micro-interferometric devices and tested them on the sky. The micro-interferometers consist of a chrome layer deposited on a glass plate that has been drilled by laser lithography to produce micron-sized holes with configurations corresponding to proposed interferometer projects such as CARLINA, ELSA, KEOPS, and OVLA. In this paper, we describe these interferometric devices and present interferometric observations of the Sun made in the framework of astrophysics lectures taught at the University of Liège. By means of a simple photographic camera placed behind a micro-interferometric device, we observed the Sun and derived its angular size. This experiment provides a very didactic way to easily obtain fringe patterns similar to those that will be obtained with future large imaging arrays. A program written in C also allows one to reproduce the various point spread functions and fringe patterns observed with the micro-interferometric devices for different types of sources, including the Sun.
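
    The angular-size derivation follows the uniform-disk visibility curve; a short sketch (wavelength and solar diameter are the usual textbook values) confirms the abstract's hectometric-to-micrometric scaling, with the first visibility null for the Sun falling at a baseline of tens of microns:

    ```python
    import numpy as np
    from scipy.special import j1

    def disk_visibility(B, theta, lam):
        """Fringe visibility of a uniform disk (angular diameter theta, rad)
        seen with baseline B (m) at wavelength lam (m)."""
        x = np.pi * B * theta / lam
        return np.abs(2 * j1(x) / x)

    theta_sun = np.deg2rad(0.53)            # apparent solar diameter
    lam = 550e-9                            # visible light

    B_null = 1.22 * lam / theta_sun         # baseline of the first visibility null
    print(B_null * 1e6)                     # ~72 microns: a micrometric baseline
    print(disk_visibility(0.9 * B_null, theta_sun, lam))  # ~0.09, fringes still visible
    ```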

  11. High resolution imaging at Palomar

    NASA Technical Reports Server (NTRS)

    Kulkarni, Shrinivas R.

    1992-01-01

    For the last two years we have been engaged in a program to understand the ultimate limits of ground-based optical imaging. We have designed and fabricated a camera specifically for high resolution imaging. This camera has now been pressed into service at the prime focus of the Hale 5 m telescope. We have concentrated on two techniques: Non-Redundant Masking (NRM) and Weigelt's Fully Filled Aperture (FFA) method. The former is the optical analog of radio interferometry and the latter is a higher-order extension of the Labeyrie autocorrelation method. As in radio Very Long Baseline Interferometry (VLBI), both these techniques essentially measure the closure phase and, hence, true image reconstruction is possible. We have successfully imaged binary stars and asteroids with angular resolution approaching the diffraction limit of the telescope and image quality approaching that of a typical radio VLBI map. In addition, we have carried out analytical and simulation studies to determine the ultimate limits of ground-based optical imaging and the limits of space-based interferometric imaging, and have investigated the tradeoffs of beam combination in optical interferometers.
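
    Why closure phase enables true image reconstruction can be shown in a few lines: per-aperture (atmospheric) phase errors cancel identically around any closed triangle of baselines. A minimal numerical check:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # True visibility phases on the baselines 1-2, 2-3, 3-1 of a closing triangle
    phi_true = rng.uniform(-np.pi, np.pi, 3)

    # Per-aperture piston errors e_i corrupt each baseline as phi_ij + e_i - e_j
    e = rng.normal(0.0, 2.0, 3)
    phi_obs = phi_true + np.array([e[0] - e[1], e[1] - e[2], e[2] - e[0]])

    closure = lambda p: np.angle(np.exp(1j * p.sum()))      # wrap the summed phase
    print(np.isclose(closure(phi_obs), closure(phi_true)))  # True: pistons cancel
    ```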

  12. Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.

    2014-03-01

    A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.

  13. Interferometric phase measurement techniques for coherent beam combining

    NASA Astrophysics Data System (ADS)

    Antier, Marie; Bourderionnet, Jérôme; Larat, Christian; Lallier, Eric; Primot, Jérôme; Brignon, Arnaud

    2015-03-01

    Coherent beam combining of fiber amplifiers provides an attractive means of reaching high laser power. In an interferometric phase measurement, the beams issuing from each combined fiber are imaged onto a sensor and interfere with a reference plane wave. Registering the interference patterns on a camera allows the measurement of the exact phase error of each fiber beam in a single shot. Therefore, this method is a promising candidate for combining very large numbers of fibers. Based on this technique, several architectures can be proposed to coherently combine a high number of fibers. The first, based on digital holography, transfers the camera image directly to a spatial light modulator (SLM). The generated hologram is used to compensate the phase errors induced by the amplifiers. This architecture therefore has a collective phase measurement and correction. Unlike previous digital holography techniques, the probe beams measuring the phase errors between the fibers are co-propagating with the phase-locked signal beams. This architecture is compatible with the use of multi-stage isolated amplifying fibers. In that case, only 20 pixels per fiber on the SLM are needed to obtain a residual phase-shift error below λ/10 rms. The second proposed architecture calculates the correction applied to each fiber channel by tracking the relative position of the interference fringes. In this case, a phase modulator is placed on each channel. In that configuration, only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept.

  14. A laboratory verification sensor

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H.

    1988-01-01

    The use of a variant of the Hartmann test to sense the coalignment of the 36 primary mirror segments of the Keck 10-meter Telescope is described. The Shack-Hartmann alignment camera is a surface-tilt-error-sensing device, operable with high sensitivity over a wide range of tilt errors. An interferometer, on the other hand, is a surface-height-error-sensing device. In general, if the surface height error exceeds a few wavelengths of the incident illumination, an interferogram is difficult to interpret and loses utility. The Shack-Hartmann alignment camera is, therefore, likely to be attractive as a development tool for segmented mirror telescopes, particularly at early stages of development in which the surface quality of developmental segments may be too poor to justify interferometric testing. The constraints that would define the first-order properties of a Shack-Hartmann alignment camera are examined, and the precision and range of measurement one could expect to achieve with it are investigated. Fundamental constraints arise from consideration of geometrical imaging, diffraction, and the density of sampling of images at the detector array. Geometrical imaging determines the linear size of the image, and depends on the primary mirror diameter and the f-number of a lenslet. Diffraction is another constraint; it depends on the lenslet aperture. Finally, the sampling density at the detector array is important since the number of pixels in the image determines how accurately the centroid of the image can be measured. When these factors are considered under realistic assumptions, it is apparent that the first-order design of a Shack-Hartmann alignment camera is completely determined by the constraints considered, and that in the case of a 20-meter telescope with seeing-limited imaging, such a camera, used with a suitable detector array, will achieve useful precision.

  15. Quantitative phase measurement for wafer-level optics

    NASA Astrophysics Data System (ADS)

    Qu, Weijuan; Wen, Yongfu; Wang, Zhaomin; Yang, Fang; Huang, Lei; Zuo, Chao

    2015-07-01

    Wafer-level optics is now widely used in smartphone cameras, mobile video conferencing, and medical equipment that requires tiny cameras. Extracting quantitative phase information has received increased interest as a way to quantify the quality of manufactured wafer-level optics, detect defective devices before packaging, and provide feedback for manufacturing process control, all at the wafer level for high-throughput microfabrication. We demonstrate two phase imaging methods, digital holographic microscopy (DHM) and the Transport-of-Intensity Equation (TIE), for measuring the phase of wafer-level lenses. DHM is a laser-based interferometric method based on the interference of two wavefronts, and can perform a phase measurement in a single shot. TIE, by contrast, requires a minimum of two measurements of the spatial intensity of the optical wave in closely spaced planes perpendicular to the direction of propagation; the phase is then retrieved directly, with a non-iterative deterministic algorithm, by solving a second-order differential equation. Because TIE is a non-interferometric method, it can be applied to partially coherent light. We demonstrate the capabilities and limitations of the two phase measurement methods for wafer-level optics inspection.
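
    A minimal sketch of the TIE retrieval step in its simplest, uniform-intensity form, solving ∇²φ = −(k/I₀) ∂I/∂z with an FFT-based Poisson solver (boundary handling and the general non-uniform-intensity form are omitted):

    ```python
    import numpy as np

    def tie_phase(I_minus, I_plus, dz, lam, I0):
        """Uniform-intensity TIE phase retrieval via an FFT Poisson solver.

        Solves laplacian(phi) = -(k / I0) * dI/dz from two intensity images
        recorded in planes separated by 2*dz around the plane of interest.
        """
        k = 2 * np.pi / lam
        dIdz = (I_plus - I_minus) / (2 * dz)                  # central difference
        ny, nx = dIdz.shape
        fy = np.fft.fftfreq(ny).reshape(-1, 1)                # cycles / pixel
        fx = np.fft.fftfreq(nx).reshape(1, -1)
        lap = -(2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)         # Fourier symbol of laplacian
        lap[0, 0] = 1.0                                       # dodge divide-by-zero at DC
        phi_hat = np.fft.fft2(-(k / I0) * dIdz) / lap
        phi_hat[0, 0] = 0.0                                   # DC phase is arbitrary
        return np.fft.ifft2(phi_hat).real
    ```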

  16. Interactive Fringe Analysis System: Applications To Moire Contourogram And Interferogram

    NASA Astrophysics Data System (ADS)

    Yatagai, T.; Idesawa, M.; Yamaashi, Y.; Suzuki, M.

    1982-10-01

    A general purpose fringe pattern processing facility was developed in order to analyze moire photographs used for scoliosis diagnoses and interferometric patterns in optical shops. A TV camera reads a fringe profile to be analyzed, and peaks of the fringe are detected by a microcomputer. Fringe peak correction and fringe order determination are performed with the man-machine interactive software developed. A light pen facility and an image digitizer are employed for interaction. In the case of two-dimensional fringe analysis, we analyze independently analysis lines parallel to each other and a reference line perpendicular to the parallel analysis lines. Fringe orders of parallel analysis lines are uniquely determined by using the fringe order of the reference line. Some results of analysis of moire contourograms, interferometric testing of silicon wafers, and holographic measurement of thermal deformation are presented.

  17. The LINC-NIRVANA fringe and flexure tracker: Linux real-time solutions

    NASA Astrophysics Data System (ADS)

    Wang, Yeping; Bertram, Thomas; Straubmeier, Christian; Rost, Steffen; Eckart, Andreas

    2006-06-01

    The correction of atmospheric differential piston and instrumental flexure effects is mandatory for optimum interferometric performance of the LBT NIR interferometric imaging camera LINC-NIRVANA. The task of the Fringe and Flexure Tracking System (FFTS) is to detect and correct these effects in a real-time closed loop. On a timescale of milliseconds, image data of the order of 4K bytes has to be retrieved from the FFTS detector, analyzed, and the results have to be sent to the control system. The need for a reliable communication between several processes within a confined period of time calls for solutions with good real-time performance. We investigated two soft real-time options for the Linux platform. The design we present takes advantage of several features that follow the POSIX standard with improved real-time performance, which were implemented in the new Linux kernel (2.6.12). Several concepts, such as synchronization, shared memory, and preemptive scheduling are considered and the performance of the most time-critical parts of the FFTS software is tested.

  18. Interferometric detection of nanoparticles

    NASA Astrophysics Data System (ADS)

    Hayrapetyan, Karen

    Interferometric surfaces enhance light scattering from nanoparticles through constructive interference of partial scattered waves. By placing the nanoparticles on interferometric surfaces tuned to a special surface phase interferometric condition, the particles are detectable in the dilute limit through interferometric image contrast in a heterodyne light scattering configuration, or through diffraction in a homodyne scattering configuration. The interferometric enhancement has applications for imaging and diffractive biosensors. We present a modified model based on Double Interaction (DI) to explore bead-based detection mechanisms using imaging, scanning and diffraction. The application goal of this work is to explore the trade-offs between the sensitivity and throughput among various detection methods. Experimentally we use thermal oxide on silicon to establish and control surface interferometric conditions. Surface-captured gold beads are detected using Molecular Interferometric Imaging (MI2) and Spinning-Disc Interferometry (SDI). Double-resonant enhancement of light scattering leads to high-contrast detection of 100 nm radius gold nanoparticles on an interferometric surface. The double-resonance condition is achieved when resonance (or anti-resonance) from an asymmetric Fabry-Perot substrate coincides with the Mie resonance of the gold nanoparticle. The double-resonance condition is observed experimentally using molecular interferometric imaging (MI2). An invisibility condition is identified for which the gold nanoparticles are optically cloaked by the interferometric surface.

  19. Reconstructed images of 4 Vesta.

    NASA Astrophysics Data System (ADS)

    Drummond, J.; Eckart, A.; Hege, E. K.

    The first glimpses of an asteroid's surface have been obtained from images of 4 Vesta reconstructed from speckle interferometric observations made with Harvard's PAPA camera coupled to Steward Observatory's 2.3 m telescope. Vesta is found to have a "normal" triaxial ellipsoid shape of 566(±15)×532(±15)×466(±15) km. Its rotational pole lies within 4° of ecliptic longitude 327°, latitude +55°. Reconstructed images obtained with the power spectra and Knox-Thompson cross-spectra reveal dark and bright patterns, reminiscent of the Moon. Three bright and three dark areas are visible, and, when combined with an inferred seventh bright region not visible during the rotational phases covered during the authors' run, lead to lightcurves that match Vesta's lightcurve history.

  20. Interferometric inversion for passive imaging and navigation

    DTIC Science & Technology

    2017-05-01

    Final report for AFOSR grant FA9550-15-1-0078, "Interferometric inversion for passive imaging and navigation," Laurent Demanet, Massachusetts Institute of Technology, covering February 2015 - January 2017 (report no. AFRL-AFOSR-VA-TR-2017-0096).

  1. Flipping interferometry and its application for quantitative phase microscopy in a micro-channel.

    PubMed

    Roitshtain, Darina; Turko, Nir A; Javidi, Bahram; Shaked, Natan T

    2016-05-15

    We present a portable, off-axis interferometric module for quantitative phase microscopy of live cells, positioned at the exit port of a coherently illuminated inverted microscope. The module creates on the digital camera an interference pattern between the image of the sample and its flipped version. The proposed simplified module is based on a retro-reflector modification in an external Michelson interferometer. The module does not contain any lenses, pinholes, or gratings and its alignment is straightforward. Still, it allows full control of the off-axis angle and does not suffer from ghost images. As experimentally demonstrated, the module is useful for quantitative phase microscopy of live cells rapidly flowing in a micro-channel.

  2. Molecular interferometric imaging study of molecular interactions

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Wang, Xuefeng; Nolte, David

    2008-02-01

    Molecular Interferometric Imaging (MI2) is a sensitive detection platform for direct optical detection of immobilized biomolecules. It is based on inline common-path interferometry combined with far-field optical imaging. The substrate is a simple thermal oxide on a silicon surface with a thickness at or near the quadrature condition that produces a π/2 phase shift between the normal-incidence wave reflected from the top oxide surface and that reflected from the bottom silicon surface. The presence of immobilized or bound biomolecules on the surface produces a relative phase shift that is converted to a far-field intensity shift and is imaged by a reflective microscope onto a CCD camera. Shearing interferometry is used to remove the spatial 1/f noise from the illumination to achieve shot-noise-limited detection of surface dipole density profiles. The lateral resolution of this technique is diffraction limited at 0.4 micron, and the best longitudinal resolution is 10 picometers. The minimum detectable mass at the metrology limit is 2 attograms, which is 8 antibody molecules of size 150 kDa. The corresponding scaling mass sensitivity is 5 fg/mm² compared with 1 pg/mm² for typical SPR sensitivity. We have applied MI2 to immunoassay applications, and real-time binding kinetics have been measured for antibody-antigen reactions. The simplicity of the substrate and optical read-out makes MI2 a promising analytical assay tool for high-throughput screening and diagnostics.

  3. Modified interferometric imaging condition for reverse-time migration

    NASA Astrophysics Data System (ADS)

    Guo, Xue-Bao; Liu, Hong; Shi, Ying

    2018-01-01

    For reverse-time migration, high-resolution imaging mainly depends on the accuracy of the velocity model and on the imaging condition. In practice, however, the small-scale components of the velocity model cannot be estimated by tomographic methods; therefore, the wavefields are not accurately reconstructed from the background velocity, and the imaging process generates artefacts. Some of the noise is due to cross-correlation of unrelated seismic events. The interferometric imaging condition suppresses imaging noise very effectively, especially the unknown random disturbance of the small-scale part. The conventional interferometric imaging condition is extended in this study to obtain a new imaging condition based on the pseudo-Wigner distribution function (WDF). Numerical examples show that the modified interferometric imaging condition improves imaging precision.
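
    A loose, illustrative sketch of the idea (not the authors' pseudo-WDF formulation): compared with the conventional zero-lag cross-correlation imaging condition, the interferometric condition smooths the source-receiver cross-correlation over a local window before stacking, which damps the rapidly oscillating cross-talk from unrelated events:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def conventional_ic(src, rec):
        """Zero-lag cross-correlation imaging condition.
        src, rec: source / receiver wavefields of shape (nt, nz, nx)."""
        return (src * rec).sum(axis=0)

    def interferometric_ic(src, rec, win=(5, 5)):
        """Interferometric-style imaging condition (illustrative): average the
        per-time cross-correlation over a small spatial window before stacking,
        suppressing oscillatory cross-talk while keeping coherent reflectivity."""
        cc = uniform_filter(src * rec, size=(1,) + win)
        return cc.sum(axis=0)
    ```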

  4. The Fringe-Imaging Skin Friction Technique PC Application User's Manual

    NASA Technical Reports Server (NTRS)

    Zilliac, Gregory G.

    1999-01-01

    A personal computer application (CXWIN4G) has been written which greatly simplifies the task of extracting skin friction measurements from interferograms of oil flows on the surface of wind tunnel models. Images are first calibrated, using a novel approach to one-camera photogrammetry, to obtain accurate spatial information on surfaces with curvature. As part of the image calibration process, an auxiliary file containing the wind tunnel model geometry is used in conjunction with a two-dimensional direct linear transformation to relate the image plane to the physical (model) coordinates. The application then applies a nonlinear regression model to accurately determine the fringe spacing from interferometric intensity records as required by the Fringe Imaging Skin Friction (FISF) technique. The skin friction is found through application of a simple expression that makes use of lubrication theory to relate fringe spacing to skin friction.
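
    The lubrication-theory relation the FISF technique rests on can be written in one line for normal incidence: the oil thins as h(x, t) = μx/(τt), and one fringe spans a thickness change of λ/(2 n_oil), giving τ = 2 n_oil μ Δx / (λ t). A worked example with assumed, illustrative oil properties (oblique-view factors omitted):

    ```python
    # Lubrication-theory FISF relation, normal-incidence sketch.
    n_oil = 1.4          # oil refractive index (assumed)
    mu = 0.019           # oil dynamic viscosity, Pa*s (illustrative ~20 cSt oil)
    lam = 589e-9         # illumination wavelength, m (assumed)
    dx = 0.5e-3          # measured fringe spacing, m (example value)
    t = 60.0             # wind-on run time over which the oil thinned, s

    tau = 2 * n_oil * mu * dx / (lam * t)
    print(round(tau, 2), "Pa")   # ~0.75 Pa skin friction for these numbers
    ```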

  5. Full-field OCT: applications in ophthalmology

    NASA Astrophysics Data System (ADS)

    Grieve, Kate; Dubois, Arnaud; Paques, Michel; Le Gargasson, Jean-Francois; Boccara, Albert C.

    2005-04-01

    We present images of ocular tissues obtained using ultrahigh resolution full-field OCT. The experimental setup is based on the Linnik interferometer, illuminated by a tungsten halogen lamp. En face tomographic images are obtained in real-time without scanning by computing the difference of two phase-opposed interferometric images recorded by a high-resolution CCD camera. A spatial resolution of 0.7 μm × 0.9 μm (axial × transverse) is achieved thanks to the short source coherence length and the use of high numerical aperture microscope objectives. A detection sensitivity of 90 dB is obtained by means of image averaging and pixel binning. Whole unfixed eyes and unstained tissue samples (cornea, lens, retina, choroid and sclera) of ex vivo rat, mouse, rabbit and porcine ocular tissues were examined. The unprecedented resolution of our instrument allows cellular-level resolution in the cornea and retina, and visualization of individual fibers in the lens. Transcorneal lens imaging was possible in all animals, and in albino animals, transscleral retinal imaging was achieved. We also introduce our rapid acquisition full-field optical coherence tomography system designed to accommodate in vivo ophthalmologic imaging. The variations on the original system technology include the introduction of a xenon arc lamp as source, and rapid image acquisition performed by a high-speed CMOS camera, reducing acquisition time to 5 ms per frame.

  6. FOCAL PLANE WAVEFRONT SENSING USING RESIDUAL ADAPTIVE OPTICS SPECKLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codona, Johanan L.; Kenworthy, Matthew, E-mail: jlcodona@gmail.com

    2013-04-20

    Optical imperfections, misalignments, aberrations, and even dust can significantly limit sensitivity in high-contrast imaging systems such as coronagraphs. An upstream deformable mirror (DM) in the pupil can be used to correct or compensate for these flaws, either to enhance the Strehl ratio or to suppress the residual coronagraphic halo. Measurement of the phase and amplitude of the starlight halo at the science camera is essential for determining the DM shape that compensates for any non-common-path (NCP) wavefront errors. Using DM displacement ripples to create a series of probe and anti-halo speckles in the focal plane has been proposed for space-based coronagraphs and successfully demonstrated in the lab. We present the theory and first on-sky demonstration of a technique to measure the complex halo using the rapidly changing residual atmospheric speckles at the 6.5 m MMT telescope using the Clio mid-IR camera. The AO system's wavefront sensor measurements are used to estimate the residual wavefront, allowing us to approximately compute the rapidly evolving phase and amplitude of the speckle halo. When combined with relatively short, synchronized science camera images, the complex speckle estimates can be used to interferometrically analyze the images, leading to an estimate of the static diffraction halo with NCP effects included. In an operational system, this information could be collected continuously and used to iteratively correct quasi-static NCP errors or suppress imperfect coronagraphic halos.

  7. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposure times (8-66 milliseconds) similar to lucky imaging captures 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed using the interferometric algorithms in REDUC using all frames. This results in a series of correlograms from which theta and rho can be measured. Results using a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this form of visualizing and analyzing visual double stars is a viable alternative to lucky imaging that can be employed with telescopes too small in aperture to capture a sufficient number of speckles for true speckle interferometry.
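
    A minimal sketch of the speckle/pixel-correlation reduction the abstract describes: average the power spectra of the frames and inverse-transform to obtain the correlogram, whose secondary peaks give rho and theta (the array handling is illustrative):

    ```python
    import numpy as np

    def correlogram(frames):
        """Mean-power-spectrum autocorrelation of short-exposure frames.

        The average power spectrum preserves double-star information through
        seeing (Labeyrie); its inverse FFT is the correlogram, whose two
        secondary peaks give the companion's rho and theta (with a
        180-degree ambiguity)."""
        power = np.zeros(frames.shape[1:])
        for f in frames:
            power += np.abs(np.fft.fft2(f - f.mean())) ** 2
        acf = np.fft.fftshift(np.fft.ifft2(power / len(frames))).real
        return acf / acf.max()
    ```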

  8. Multi-Component, Multi-Point Interferometric Rayleigh/Mie Doppler Velocimeter

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Lee, Joseph W.; Bivolaru, Daniel

    2012-01-01

    An interferometric Rayleigh scattering system was developed to enable the measurement of multiple, orthogonal velocity components at several points within very-high-speed or high-temperature flows. The velocity of a gaseous flow can be optically measured by sending laser light into the gas flow, and then measuring the scattered light signal that is returned from matter within the flow. Scattering can arise from either gas molecules within the flow itself, known as Rayleigh scattering, or from particles within the flow, known as Mie scattering. Measuring Mie scattering is the basis of all commercial laser Doppler and particle imaging velocimetry systems, but particle seeding is problematic when measuring high-speed and high-temperature flows. The velocimeter is designed to measure the Doppler shift from only Rayleigh scattering, and does not require, but can also measure, particles within the flow. The system combines a direct-view, large-optic interferometric setup that calculates the Doppler shift from fringe patterns collected with a digital camera, and a subsystem to capture and re-circulate scattered light to maximize signal density. By measuring two orthogonal components of the velocity at multiple positions in the flow volume, the accuracy and usefulness of the flow measurement increase significantly over single or nonorthogonal component approaches.

  9. Interferometric Dynamic Measurement: Techniques Based on High-Speed Imaging or a Single Photodetector

    PubMed Central

    Fu, Yu; Pedrini, Giancarlo

    2014-01-01

    In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques were developed for such measurements: one is based on high-speed cameras and the other uses a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only measure a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort made by researchers to improve the measurement capabilities of interferometry-based techniques to cover the requirements of industrial applications. PMID:24963503

  10. Interferometric imaging of acoustical phenomena using high-speed polarization camera and 4-step parallel phase-shifting technique

    NASA Astrophysics Data System (ADS)

    Ishikawa, K.; Yatabe, K.; Ikeda, Y.; Oikawa, Y.; Onuma, T.; Niwa, H.; Yoshii, M.

    2017-02-01

    Imaging of sound aids the understanding of acoustical phenomena such as propagation, reflection, and diffraction, which is strongly required for various acoustical applications. The imaging of sound is commonly done using a microphone array, whereas optical methods have recently attracted interest due to their contactless nature. The optical measurement of sound utilizes the phase modulation of light caused by sound. Since light propagating through a sound field changes its phase in proportion to the sound pressure, optical phase measurement techniques can be used for sound measurement. Several methods, including laser Doppler vibrometry and the Schlieren method, have been proposed for this purpose. However, the sensitivities of these methods decrease as the frequency of the sound decreases. In contrast, since the sensitivity of the phase-shifting technique does not depend on the sound frequency, it is suitable for imaging sounds in the low-frequency range. The principle of imaging of sound using parallel phase-shifting interferometry was reported by the authors (K. Ishikawa et al., Optics Express, 2016). The measurement system consists of a high-speed polarization camera made by Photron Ltd. and a polarization interferometer. This paper reviews the principle briefly and demonstrates the high-speed imaging of acoustical phenomena. The results suggest that the proposed system can be applied to various industrial problems in acoustical engineering.
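
    The 4-step phase-shifting recovery itself is a two-line computation; with the four π/2-shifted images a polarization camera delivers in a single frame (2x2 super-pixels behind 0/45/90/135-degree analyzers), the phase, proportional to the sound pressure along the light path, is:

    ```python
    import numpy as np

    def four_step_phase(I0, I1, I2, I3):
        """Phase from four pi/2-shifted interferograms.

        With I_k = A + B*cos(phi + k*pi/2):
          I0 - I2 = 2B*cos(phi),  I3 - I1 = 2B*sin(phi),
        so phi = arctan2(I3 - I1, I0 - I2), independent of A and B.
        """
        return np.arctan2(I3 - I1, I0 - I2)
    ```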

  11. Fizeau interferometric cophasing of segmented mirrors: experimental validation.

    PubMed

    Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter

    2014-06-02

    We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9% that served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing of an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its science imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would cophase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.

  12. Anti-Stokes effect CCD camera and SLD based optical coherence tomography for full-field imaging in the 1550nm region

    NASA Astrophysics Data System (ADS)

    Kredzinski, Lukasz; Connelly, Michael J.

    2012-06-01

    Full-field optical coherence tomography (OCT) is an en-face interferometric imaging technology capable of carrying out high-resolution cross-sectional imaging of the internal microstructure of an examined specimen in a non-invasive manner. The presented system is based on competitively priced optical components available for the main optical communications band located in the 1550 nm region. It consists of a superluminescent diode (SLD) and an anti-Stokes imaging device. The single-mode-fibre-coupled SLD was connected to a multi-mode fibre inserted into a mode scrambler to obtain spatially incoherent illumination, suitable for the wide-field OCT modality in terms of crosstalk suppression and image enhancement. This relatively inexpensive system, with a moderate resolution of approximately 24 μm × 12 μm (axial × lateral), was constructed to perform 3D cross-sectional imaging of a human tooth. To our knowledge this is the first 1550 nm full-field OCT system reported.

  13. Innovative Camera and Image Processing System to Characterize Cryospheric Changes

    NASA Astrophysics Data System (ADS)

    Schenk, A.; Csatho, B. M.; Nagarajan, S.

    2010-12-01

    The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features, such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, GPS and an inertial navigation system, as well as an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame rate of about one frame per second. While digital images and videos have been used for quite some time for visual inspection, precise 3D measurements with low-cost, commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step is concerned with extracting and matching interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate but not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.

  14. Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.

    2016-05-01

    Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. Particularly in digital hologram recording, while the sensor progressively captures each row of the hologram, the interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified with holograms of static microscopic biological objects. The results encourage adopting CMOS sensors over CCDs in Digital Holographic Microscopy due to their better resolution and lower cost.

  15. Digital micromirror device as programmable rough particle in interferometric particle imaging.

    PubMed

    Fromager, M; Aït Ameur, K; Brunel, M

    2017-04-20

    The 2D autocorrelation of the projection of an irregular rough particle can be estimated through analysis of its interferometric out-of-focus image. We report the development of an experimental setup that creates speckle-like patterns generated by "programmable" rough particles of desired shape. It should become an important tool for the development of new setups, configurations, and algorithms in interferometric particle imaging.

  16. Nonlinear interferometric vibrational imaging of biological tissue

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi; Marks, Daniel L.; Geddes, Joseph B., III; Boppart, Stephen A.

    2008-02-01

    We demonstrate imaging with the technique of nonlinear interferometric vibrational imaging (NIVI). Experimental images using this instrumentation and method have been acquired from both phantoms and biological tissues. In our system, coherent anti-Stokes Raman scattering (CARS) signals are detected by spectral interferometry, which is able to fully restore a high-resolution Raman spectrum, covering multiple Raman bands, at each focal spot of a sample using broadband pump and Stokes laser beams. Spectral-domain detection has been demonstrated and allows for a significant increase in image acquisition speed, in signal-to-noise ratio, and in interferometric signal stability.

  17. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with accuracy better than 2.43 m in the flat area and 6.97 m in the mountainous area, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  18. Spectral domain optical coherence tomography with dual-balanced detection

    NASA Astrophysics Data System (ADS)

    Bo, En; Liu, Xinyu; Chen, Si; Luo, Yuemei; Wang, Nanshuo; Wang, Xianghong; Liu, Linbo

    2016-03-01

    We developed a spectral domain optical coherence tomography (SD-OCT) system employing dual-balanced detection (DBD) for direct-current (DC) term suppression and SNR enhancement, and especially for autocorrelation artifact reduction. The DBD was achieved by using a beam splitter to build a free-space Michelson interferometer, which generated two interferometric spectra with a phase difference of π. These two phase-opposed spectra were guided to the spectrometer through two single-mode fibers of an 8-fiber v-groove array and acquired using the upper two lines of a three-line CCD camera. We rotated this fiber v-groove array by 1.35 degrees to focus the two spectra onto the first and second lines of the CCD camera. The two spectra were aligned by an optimum spectrum-matching algorithm. By subtracting one spectrum from the other, this dual-balanced detection system experimentally achieved DC term suppression of ~30 dB, SNR enhancement of ~3 dB, and autocorrelation artifact reduction of ~10 dB. Finally, we validated the feasibility and performance of dual-balanced detection by imaging a glass plate and swine corneal tissue ex vivo, respectively. The quality of images obtained using dual-balanced detection was significantly improved with respect to conventional single-detection (SD) images.
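
    The balancing step is a one-line subtraction; a toy sketch (synthetic spectra, illustrative numbers) shows how the π phase difference between the two interferometer outputs cancels the common-mode DC and autocorrelation terms while preserving the interference fringes:

    ```python
    import numpy as np

    k = np.linspace(0.0, 1.0, 2048)                    # detector pixel axis (a.u.)
    dc = 1.0 + 0.2 * np.exp(-((k - 0.5) / 0.2) ** 2)   # source envelope + autocorrelation terms
    ac = 0.05 * np.cos(2 * np.pi * 300 * k)            # interference fringes (sample arm)

    s_plus, s_minus = dc + ac, dc - ac                 # the two phase-opposed outputs
    balanced = (s_plus - s_minus) / 2                  # common-mode terms cancel
    print(np.allclose(balanced, ac))                   # True
    ```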

  19. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.

  20. Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications

    NASA Astrophysics Data System (ADS)

    Gombkötő, Balázs; Kornis, János

    2010-04-01

    Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature for obtaining the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object, in optical setups similar to digital holographic interferometry but omitting the reference wave, displacement, deformation, or shape measurement is theoretically possible. For this to work, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources of the object surface, but about their relative phase as well. Not only do experiments require strict mechanical precision to record useful data; even in simulations, several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequences, speckle field characteristics, and sampling. Experiments were done to demonstrate this principle with a deformable object as large as 5×5 cm. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.
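
    A minimal sketch of the multi-plane iterative phase retrieval loop the abstract refers to, cycling through the recorded planes with angular-spectrum propagation and enforcing each measured modulus (a Gerchberg-Saxton-style scheme; the propagation details are generic, not the authors' exact algorithm):

    ```python
    import numpy as np

    def angular_spectrum(field, dz, lam, dx):
        """Propagate a square complex field by dz (angular-spectrum method)."""
        n = field.shape[0]
        f = np.fft.fftfreq(n, d=dx)
        fx, fy = np.meshgrid(f, f)
        kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / lam**2 - fx**2 - fy**2))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    def retrieve_phase(intensities, z, lam, dx, iters=50):
        """Cycle through the measured planes, keeping the propagated phase and
        enforcing each measured modulus (multi-plane Gerchberg-Saxton)."""
        field = np.sqrt(intensities[0]).astype(complex)    # start with flat phase
        for _ in range(iters):
            for i in list(range(1, len(z))) + [0]:         # ... plane N, back to plane 0
                field = angular_spectrum(field, z[i] - z[i - 1], lam, dx)
                field = np.sqrt(intensities[i]) * np.exp(1j * np.angle(field))
        return field                                       # complex amplitude in plane z[0]
    ```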

  1. Subaperture correlation based digital adaptive optics for full field optical coherence tomography.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2013-05-06

    This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full-field imaging systems, provided the complex object field information can be extracted. This method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs), or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full-field swept-source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate the proof of principle.
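
    An illustrative sketch of the sub-aperture correlation idea (sizes and the slope-to-phase fitting step are simplified assumptions): partition the Fourier-plane data of the complex image into sub-apertures, form a low-resolution sub-image from each, and read the local wavefront slope from each sub-image's shift against a reference, analogous to a digital Shack-Hartmann sensor:

    ```python
    import numpy as np

    def subaperture_slopes(field, n_sub=4):
        """Local wavefront slopes from sub-aperture correlation (illustrative).

        Each n x n block of the pupil (Fourier) plane yields a low-resolution
        sub-image; its shift relative to the central block's sub-image is
        proportional to the local wavefront tilt over that sub-aperture."""
        F = np.fft.fftshift(np.fft.fft2(field))
        n = F.shape[0] // n_sub
        c = n_sub // 2
        ref = np.abs(np.fft.ifft2(F[c*n:(c+1)*n, c*n:(c+1)*n])) ** 2
        shifts = np.zeros((n_sub, n_sub, 2))
        for i in range(n_sub):
            for j in range(n_sub):
                sub = np.abs(np.fft.ifft2(F[i*n:(i+1)*n, j*n:(j+1)*n])) ** 2
                xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(sub)))
                peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
                shifts[i, j] = peak                # shift in pixels (wrapped)
        return shifts   # fit to Zernike slopes, then subtract the phase in the pupil
    ```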

  2. Two-dimensional Imaging Velocity Interferometry: Technique and Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erskine, D J; Smith, R F; Bolme, C

    2011-03-23

    We describe the data analysis procedures for an emerging interferometric technique for measuring motion across a two-dimensional image at a moment in time, i.e., a snapshot 2d-VISAR. Velocity interferometers (VISARs) measuring target motion to high precision have been an important diagnostic in shockwave physics for many years. Until recently, this diagnostic had been limited to measuring motion at points or lines across a target. If a sufficiently fast movie camera technology existed, it could be placed behind a traditional VISAR optical system to record a 2d image versus time. Since that technology is not yet available, we use a CCD detector to record a single 2d image, with the pulsed nature of the illumination providing the time resolution. Because we use pulsed illumination with a coherence length shorter than the VISAR interferometer delay (~0.1 ns), we must use the white-light velocimetry configuration to produce fringes with significant visibility. In this scheme, two interferometers (illuminating and detecting) having nearly identical delays are used in series, one before the target and one after. This produces fringes with at most 50% visibility, but otherwise the same fringe shift per unit target motion as a traditional VISAR. The 2d-VISAR observes a new world of information about shock behavior not readily accessible by traditional point or 1d-VISARs, simultaneously providing both a velocity map and an 'ordinary' snapshot photograph of the target. The 2d-VISAR has been used to observe nonuniformities in NIF-related targets (polycrystalline diamond, Be), and in Si and Al.
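
    For orientation, the standard VISAR fringe-to-velocity relation (which the white-light configuration preserves, at reduced visibility) is

        v(t) = \frac{\lambda}{2\,\tau\,(1+\delta)}\, F(t)

    where F(t) is the observed fringe shift in fringes, λ the illumination wavelength, τ the interferometer delay, and δ the usual etalon dispersion correction.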

  3. Methods for multiple-telescope beam imaging and guiding in the near-infrared

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.

    2018-05-01

    Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.

  4. Ambient and Cryogenic Alignment Verification and Performance of the Infrared Multi-Object Spectrometer

    NASA Technical Reports Server (NTRS)

    Connelly, Joseph A.; Ohl, Raymond G.; Mink, Ronald G.; Mentzell, J. Eric; Saha, Timo T.; Tveekrem, June L.; Hylan, Jason E.; Sparr, Leroy M.; Chambers, V. John; Hagopian, John G.

    2003-01-01

    The Infrared Multi-Object Spectrometer (IRMOS) is a facility instrument for the Kitt Peak National Observatory 4 and 2.1 meter telescopes. IRMOS is a near-IR (0.8 - 2.5 micron) spectrometer with low- to mid-resolving power (R = 300 - 3000). IRMOS produces simultaneous spectra of approximately 100 objects in its 2.8 x 2.0 arc-min field of view using a commercial Micro Electro-Mechanical Systems (MEMS) Digital Micro-mirror Device (DMD) from Texas Instruments. The IRMOS optical design consists of two imaging subsystems. The focal reducer images the focal plane of the telescope onto the DMD field stop, and the spectrograph images the DMD onto the detector. We describe ambient breadboard subsystem alignment and imaging performance of each stage independently, and the ambient and cryogenic imaging performance of the fully assembled instrument. Interferometric measurements of subsystem wavefront error serve to verify alignment, and are accomplished using a commercial, modified Twyman-Green laser unequal path interferometer. Image testing provides further verification of the optomechanical alignment method and a measurement of near-angle scattered light due to mirror small-scale surface error. Image testing is performed at multiple field points. A mercury-argon pencil lamp provides spectral lines at 546.1 nm and 1550 nm, and a CCD camera and IR camera are used as detectors. We use commercial optical modeling software to predict the point-spread function and its effect on instrument slit transmission and resolution. Our breadboard test results validate this prediction. We conclude with an instrument performance prediction for first light.

  5. Interferometric Imaging Directly with Closure Phases and Closure Amplitudes

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Johnson, Michael D.; Bouman, Katherine L.; Blackburn, Lindy L.; Akiyama, Kazunori; Narayan, Ramesh

    2018-04-01

    Interferometric imaging now achieves angular resolutions as fine as ∼10 μas, probing scales that are inaccessible to single telescopes. Traditional synthesis imaging methods require calibrated visibilities; however, interferometric calibration is challenging, especially at high frequencies. Nevertheless, most studies present only a single image of their data after a process of “self-calibration,” an iterative procedure where the initial image and calibration assumptions can significantly influence the final image. We present a method for efficient interferometric imaging directly using only closure amplitudes and closure phases, which are immune to station-based calibration errors. Closure-only imaging provides results that are as noncommittal as possible and allows for reconstructing an image independently from separate amplitude and phase self-calibration. While closure-only imaging eliminates some image information (e.g., the total image flux density and the image centroid), this information can be recovered through a small number of additional constraints. We demonstrate that closure-only imaging can produce high-fidelity results, even for sparse arrays such as the Event Horizon Telescope, and that the resulting images are independent of the level of systematic amplitude error. We apply closure imaging to VLBA and ALMA data and show that it is capable of matching or exceeding the performance of traditional self-calibration and CLEAN for these data sets.
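
    The station-error immunity comes from how the closure quantities are formed: from complex visibilities V_ij, they are simple products and ratios (Python/NumPy sketch; array and baseline bookkeeping omitted):

        import numpy as np

        def closure_phase(v12, v23, v31):
            """Argument of the bispectrum around a triangle of stations;
            station-based phase errors cancel in the triple product."""
            return np.angle(v12 * v23 * v31)

        def closure_amplitude(v12, v34, v13, v24):
            """Closure amplitude over a quadrangle of stations; station-based
            gain amplitudes cancel in the ratio."""
            return np.abs(v12) * np.abs(v34) / (np.abs(v13) * np.abs(v24))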

  6. Nanoparticle light scattering on interferometric surfaces

    NASA Astrophysics Data System (ADS)

    Hayrapetyan, K.; Arif, K. M.; Savran, C. A.; Nolte, D. D.

    2011-03-01

    We present a model based on Mie Surface Double Interaction (MSDI) to explore bead-based detection mechanisms using imaging and scanning. The application goal of this work is to explore the trade-offs between the sensitivity and throughput among various detection methods. Experimentally we use thermal oxide on silicon to establish and control surface interferometric conditions. Surface-captured gold beads are detected using Molecular Interferometric Imaging (MI2) and Spinning-Disc Interferometry (SDI).

  7. A novel design measuring method based on linearly polarized laser interference

    NASA Astrophysics Data System (ADS)

    Cao, Yanbo; Ai, Hua; Zhao, Nan

    2013-09-01

    The interferometric method is widely used in precision measurement, including measurement of the surface quality of large-aperture mirrors, and laser interference technology has developed rapidly as laser sources have become more mature and reliable. We adopted a laser diode as the source because its short coherence length suits a system whose optical path difference is only several wavelengths, and because its power is sufficient for measurement while remaining safe to the human eye. A 673 nm linearly polarized laser was selected, and we constructed a novel form of interferometric system, which we call a 'Closed Loop', comprising polarizing optical components such as a polarizing prism and quartz wave plates. Light from the source is split into a measuring beam and a reference beam, both of which are reflected by the mirror under test. After the two beams are transformed into circular polarizations spinning in opposite directions, we apply polarized-light synchronous phase-shift interference to obtain the detection fringes, transferring the phase shifting from the time domain to space; no precisely controlled shift of the optical path difference is then needed, avoiding the disturbances of air currents and vibration that such a shift would introduce. We obtain the interference fringes from four well-aligned CCD cameras, with the fringes shifted to four different phases of 0, π/2, π, and 3π/2. After obtaining the images from the CCD cameras, we align the interference fringes pixel to pixel across the cameras, synthesize the rough morphology, and, after removing systematic error, calculate the surface accuracy of the mirror under test. This detection method can be applied to measuring optical system aberrations, and could be developed into a portable structural interferometer for use in a variety of measurement circumstances.
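
    With the four cameras registered pixel to pixel, the wrapped phase follows from the standard four-step phase-shifting formula (Python/NumPy sketch):

        import numpy as np

        def four_step_phase(i0, i1, i2, i3):
            """Wrapped phase from four frames with shifts 0, pi/2, pi, 3*pi/2,
            one frame per registered CCD camera."""
            return np.arctan2(i3 - i1, i0 - i2)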

  8. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2004-09-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') UB/VRI optimized mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench beam combiner with visible and near-infrared imagers utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC/NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  9. Alignment of a multilayer-coated imaging system using extreme ultraviolet Foucault and Ronchi interferometric testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray-Chaudhuri, A.K.; Ng, W.; Cerrina, F.

    1995-11-01

    Multilayer-coated imaging systems for extreme ultraviolet (EUV) lithography at 13 nm represent a significant challenge for alignment and characterization. The standard practice of utilizing visible light interferometry fundamentally provides an incomplete picture since this technique fails to account for phase effects induced by the multilayer coating. Thus the development of optical techniques at the functional EUV wavelength is required. We present the development of two EUV optical tests based on Foucault and Ronchi techniques. These relatively simple techniques are extremely sensitive due to the factor of 50 reduction in wavelength. Both techniques were utilized to align a Mo-Si multilayer-coated Schwarzschild camera. By varying the illumination wavelength, phase shift effects due to the interplay of multilayer coating and incident angle were uniquely detected.

  10. Dynamic measurements of flowing cells labeled by gold nanoparticles using full-field photothermal interferometric imaging

    NASA Astrophysics Data System (ADS)

    Turko, Nir A.; Roitshtain, Darina; Blum, Omry; Kemper, Björn; Shaked, Natan T.

    2017-06-01

    We present highly dynamic photothermal interferometric phase microscopy for quantitative, selective contrast imaging of live cells during flow. Gold nanoparticles can be biofunctionalized to bind to specific cells and stimulated to produce a local temperature increase via plasmon resonance, causing a rapid change of the optical phase. These phase changes can be recorded by interferometric phase microscopy and analyzed to form an image of the binding sites of the nanoparticles in the cells, gaining molecular specificity. Because the nanoparticle excitation frequency might overlap with the frequencies of the sample dynamics, photothermal phase imaging has previously been performed only on stationary or slowly varying samples. Furthermore, the computational analysis of the photothermal signals is time consuming. This makes photothermal imaging unsuitable for applications requiring dynamic imaging or real-time analysis, such as analyzing and sorting cells during fast flow. To overcome these drawbacks, we utilized an external interferometric module and developed new algorithms, based on discrete Fourier transform variants, enabling fast analysis of photothermal signals in highly dynamic live cells. Due to the self-interference module, the cells are imaged with and without excitation at video rate, effectively increasing the signal-to-noise ratio. Our approach holds potential for photothermal cell imaging and depletion in flow cytometry.
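
    A minimal sketch of the frequency-selective extraction involved (Python/NumPy; a single-bin DFT at the excitation frequency, standing in for the authors' full DFT-variant algorithms):

        import numpy as np

        def photothermal_amplitude(stack, f_exc, fps):
            """Per-pixel amplitude of phase modulation at the excitation
            frequency, from a (T, H, W) stack of quantitative phase frames."""
            t = np.arange(stack.shape[0]) / fps
            probe = np.exp(-2j * np.pi * f_exc * t)  # lock-in style reference
            return np.abs(np.tensordot(probe, stack, axes=(0, 0))) * 2 / stack.shape[0]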

  11. Development and Application of a Low Frequency Near-Field Interferometric-TOA 3D Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    Lyu, F.; Cummer, S. A.; Weinert, J. L.; McTague, L. E.; Solanki, R.; Barrett, J.

    2014-12-01

    Lightning processes radiate electromagnetic signals over an extremely wide bandwidth, offering the possibility of multispectral lightning radio imaging. Lightning images mapped by VHF interferometry and VHF time-of-arrival lightning mapping arrays let us follow the in-cloud development of a flash, which cameras cannot always capture because of cloud obscuration. Low frequency (LF) signals are often used for lightning detection, but usually only for ground point location or thunderstorm tracking. Some recent results have demonstrated LF 3D mapping of discrete lightning pulses, but imaging of continuous LF emissions has not been shown. In this work, we report a GPS-synchronized LF near-field interferometric-TOA 3D lightning mapping array applied to image the development of lightning flashes on a seconds time scale. Cross-correlation, as used in broadband interferometry, is applied in our system to find windowed arrival-time differences with sub-microsecond time resolution. However, because the sources are in the near field of the array, time-of-arrival processing is used to find the source locations with a typical precision of 100 meters. We show that this system images the complete lightning flash structure with thousands of LF sources for extensive flashes. Importantly, this system is able to map both continuous emissions, like dart leaders, and bursty or discrete emissions. Lightning stepped-leader and dart-leader propagation speeds are estimated at 0.56-2.5×10^5 m/s and 0.8-2.0×10^6 m/s, respectively, consistent with previous reports. In many respects our LF images are remarkably similar to VHF lightning mapping array images, despite the factor-of-1000 difference in frequency, which may suggest links between the LF and VHF emission during lightning processes.
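
    The arrival-time-difference core of the processing reduces to a windowed cross-correlation (Python/NumPy sketch; GPS alignment, windowing, and sub-sample peak interpolation are omitted):

        import numpy as np

        def tdoa(a, b, fs):
            """Lag (in seconds) of waveform a relative to waveform b, taken
            as the peak of their cross-correlation."""
            c = np.correlate(a - a.mean(), b - b.mean(), mode='full')
            return (np.argmax(c) - (len(b) - 1)) / fs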

  12. Using dynamic interferometric synthetic aperture radar (InSAR) to image fast-moving surface waves

    DOEpatents

    Vincent, Paul

    2005-06-28

    A new differential technique and system for imaging dynamic (fast moving) surface waves using Dynamic Interferometric Synthetic Aperture Radar (InSAR) is introduced. This differential technique and system can sample the fast-moving surface displacement waves from a plurality of moving platform positions in either a repeat-pass single-antenna or a single-pass mode having a single-antenna dual-phase receiver or having dual physically separate antennas, and reconstruct a plurality of phase differentials from a plurality of platform positions to produce a series of desired interferometric images of the fast moving waves.

  13. Digital cartography of Io

    NASA Technical Reports Server (NTRS)

    Mcewen, Alfred S.; Duck, B.; Edwards, Kathleen

    1991-01-01

    A high resolution controlled mosaic of the hemisphere of Io centered on longitude 310 degrees is produced. Digital cartographic techniques were employed. Approximately 80 Voyager 1 clear and blue filter frames were utilized. This mosaic was merged with low-resolution color images. This dataset is compared to the geologic map of this region. Passage of the Voyager spacecraft through the Io plasma torus during acquisition of the highest resolution images exposed the vidicon detectors to ionizing radiation, resulting in dark-current buildup on the vidicon. Because the vidicon is scanned from top to bottom, more charge accumulated toward the bottom of the frames, and the additive error increases from top to bottom as a ramp function. This ramp function was removed by using a model. Photometric normalizations were applied using the Minnaert function. An attempt to use Hapke's photometric function revealed that this function does not adequately describe Io's limb darkening at emission angles greater than 80 degrees. In contrast, the Minnaert function accurately describes the limb darkening up to emission angles of about 89 degrees. The improved set of discrete camera angles derived from this effort will be used in conjunction with the space telemetry pointing history file (the IPPS file), corrected at 4- or 12-second intervals, to derive a revised time history for the pointing of the Infrared Interferometric Spectrometer (IRIS). For IRIS observations acquired between camera shutterings, the IPPS file can be corrected by linear interpolation, provided that the spacecraft motions were continuous. Image areas corresponding to the fields of view of IRIS spectra acquired between camera shutterings will be extracted from the mosaic to place the IRIS observations and hotspot models into geologic context.
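
    The Minnaert function used for the photometric normalization has the standard form

        I(\mu_0, \mu) = I_N \,\mu_0^{\,k}\,\mu^{\,k-1}

    where μ0 and μ are the cosines of the incidence and emission angles, I_N is the brightness at normal viewing and illumination, and k is the limb-darkening exponent (k = 1 reduces to a Lambert surface).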

  14. Alignment and Performance of the Infrared Multi-Object Spectrometer

    NASA Technical Reports Server (NTRS)

    Connelly, Joseph A.; Ohl, Raymond G.; Mentzell, J. Eric; Madison, Timothy J.; Hylan, Jason E.; Mink, Ronald G.; Saha, Timo T.; Tveekrem, June L.; Sparr, Leroy M.; Chambers, V. John

    2004-01-01

    The Infrared Multi-Object Spectrometer (IRMOS) is a principal investigator-class instrument for the Kitt Peak National Observatory 4 and 2.1 meter telescopes. IRMOS is a near-IR (0.8 - 2.5 micron) spectrometer with low- to mid-resolving power (R = 300 - 3000). IRMOS produces simultaneous spectra of approximately 100 objects in its 2.8 x 2.0 arc-min field of view (4 m telescope) using a commercial Micro Electro-Mechanical Systems (MEMS) micro-mirror array (MMA) from Texas Instruments. The IRMOS optical design consists of two imaging subsystems. The focal reducer images the focal plane of the telescope onto the MMA field stop, and the spectrograph images the MMA onto the detector. We describe ambient breadboard subsystem alignment and imaging performance of each stage independently, and ambient imaging performance of the fully assembled instrument. Interferometric measurements of subsystem wavefront error serve as a qualitative alignment guide, and are accomplished using a commercial, modified Twyman-Green laser unequal path interferometer. Image testing provides verification of the optomechanical alignment method and a measurement of near-angle scattered light due to mirror small-scale surface error. Image testing is performed at multiple field points. A mercury-argon pencil lamp provides a spectral line at 546.1 nanometers, a blackbody source provides a line at 1550 nanometers, and a CCD camera and IR camera are used as detectors. We use commercial optical modeling software to predict the point-spread function and its effect on instrument slit transmission and resolution. Our breadboard and instrument level test results validate this prediction. We conclude with an instrument performance prediction for cryogenic operation and first light in late 2003.

  15. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2006-06-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  16. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2008-07-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  17. Advances in Measurement of Skin Friction in Airflow

    NASA Technical Reports Server (NTRS)

    Brown, James L.; Naughton, Jonathan W.

    2006-01-01

    The surface interferometric skin-friction (SISF) measurement system is an instrument for determining the distribution of surface shear stress (skin friction) on a wind-tunnel model. The SISF system utilizes the established oil-film interference method, along with advanced image-data-processing techniques and mathematical models that express the relationship between interferograms and skin friction, to determine the distribution of skin friction over an observed region of the surface of a model during a single wind-tunnel test. In the oil-film interference method, a wind-tunnel model is coated with a thin film of oil of known viscosity and is illuminated with quasi-monochromatic, collimated light, typically from a mercury lamp. The light reflected from the outer surface of the oil film interferes with the light reflected from the oil-covered surface of the model. In the present version of the oil-film interference method, a camera captures an image of the illuminated model and the image in the camera is modulated by the interference pattern. The interference pattern depends on the oil-thickness distribution on the observed surface, and this distribution can be extracted through analysis of the image acquired by the camera. The oil-film technique is augmented by a tracer technique for observing the streamline pattern. To make the streamlines visible, small dots of fluorescent chalk/oil mixture are placed on the model just before a test. During the test, the chalk particles are embedded in the oil flow and produce chalk streaks that mark the streamlines. The instantaneous rate of thinning of the oil film at a given position on the surface of the model can be expressed as a function of the instantaneous thickness, the skin-friction distribution on the surface, and the streamline pattern on the surface; the functional relationship is expressed by a mathematical model that is nonlinear in the oil-film thickness and is known simply as the thin-oil-film equation. From the image data acquired as described, the time-dependent oil-thickness distribution and streamline pattern are extracted and by inversion of the thin-oil-film equation it is then possible to determine the skin-friction distribution. In addition to a quasi-monochromatic light source, the SISF system includes a beam splitter and two video cameras equipped with filters for observing the same area on a model in different wavelength ranges, plus a frame grabber and a computer for digitizing the video images and processing the image data. One video camera acquires the interference pattern in a narrow wavelength range of the quasi-monochromatic source. The other video camera acquires the streamline image of fluorescence from the chalk in a nearby but wider wavelength range. The interference-pattern and fluorescence images are digitized, and the resulting data are processed by an algorithm that inverts the thin-oil-film equation to find the skin-friction distribution.
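
    In its simplest one-dimensional form, neglecting pressure gradients and gravity, the thin-oil-film equation referred to above can be written as

        \frac{\partial h}{\partial t} + \frac{\partial}{\partial x}\!\left( \frac{\tau_w\, h^{2}}{2\mu} \right) = 0

    where h(x, t) is the local oil thickness, τ_w the skin friction, and μ the oil viscosity; recovering τ_w from the measured evolution of h is the inversion the SISF system performs.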

  18. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.
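
    The paper's exact definition of the average fluctuation function is not reproduced in this abstract; a generic fringe-quality metric in the same spirit might look like the following (Python/NumPy, purely illustrative: well-registered image pairs yield smooth fringes and a low score):

        import numpy as np

        def fringe_fluctuation(phase_diff):
            """Mean absolute wrapped difference between horizontally
            neighboring pixels of the phase-difference image."""
            d = np.angle(np.exp(1j * np.diff(phase_diff, axis=1)))
            return np.mean(np.abs(d))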

  19. Techniques and Tools for Estimating Ionospheric Effects in Interferometric and Polarimetric SAR Data

    NASA Technical Reports Server (NTRS)

    Rosen, P.; Lavalle, M.; Pi, X.; Buckley, S.; Szeliga, W.; Zebker, H.; Gurrola, E.

    2011-01-01

    The InSAR Scientific Computing Environment (ISCE) is a flexible, extensible software tool designed for the end-to-end processing and analysis of synthetic aperture radar data. ISCE inherits the core of the ROI_PAC interferometric tool, but contains improvements at all levels of the radar processing chain, including a modular and extensible architecture, new focusing approach, better geocoding of the data, handling of multi-polarization data, radiometric calibration, and estimation and correction of ionospheric effects. In this paper we describe the characteristics of ISCE with emphasis on the ionospheric modules. To detect ionospheric anomalies, ISCE implements the Faraday rotation method using quadpolarimetric images, and the split-spectrum technique using interferometric single-, dual- and quad-polarimetric images. The ability to generate co-registered time series of quad-polarimetric images makes ISCE also an ideal tool to be used for polarimetric-interferometric radar applications.
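
    The split-spectrum technique exploits the dispersive nature of the ionosphere: modeling the interferometric phase as a non-dispersive term proportional to f plus an ionospheric term proportional to 1/f, the sub-band phases φ_L and φ_H measured at frequencies f_L and f_H give the ionospheric contribution at the center frequency f_0 as (standard formulation, not necessarily ISCE's exact implementation)

        \phi_{\mathrm{iono}}(f_0) = \frac{f_L\, f_H}{f_0\,\left( f_H^{2} - f_L^{2} \right)}\,\left( \phi_L\, f_H - \phi_H\, f_L \right)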

  20. Interferometric synthetic aperture radar imagery of the Gulf Stream

    NASA Technical Reports Server (NTRS)

    Ainsworth, T. L.; Cannella, M. E.; Jansen, R. W.; Chubb, S. R.; Carande, R. E.; Foley, E. W.; Goldstein, R. M.; Valenzuela, G. R.

    1993-01-01

    The advent of interferometric synthetic aperture radar (INSAR) imagery brought to the ocean remote sensing field techniques used in radio astronomy. Whilst details of the interferometry differ between the two fields, the basic idea is the same: Use the phase information arising from positional differences of the radar receivers and/or transmitters to probe remote structures. The interferometric image is formed from two complex synthetic aperture radar (SAR) images. These two images are of the same area but separated in time. Typically the time between these images is very short -- approximately 50 msec for the L-band AIRSAR (Airborne SAR). During this short period the radar scatterers on the ocean surface do not have time to significantly decorrelate. Hence the two SAR images will have the same amplitude, since both obtain the radar backscatter from essentially the same object. Although the ocean surface structure does not significantly decorrelate in 50 msec, surface features do have time to move. It is precisely the translation of scattering features across the ocean surface which gives rise to phase differences between the two SAR images. This phase difference is directly proportional to the range velocity of surface scatterers. The constant of proportionality is dependent upon the interferometric mode of operation.
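
    For the along-track geometry described, the proportionality takes the standard two-way form

        \Delta\phi = \frac{4\pi}{\lambda}\, v_r\, \tau

    so the range velocity follows as v_r = λ Δφ / (4π τ), with τ the time lag between the two SAR images (≈50 ms for the L-band AIRSAR).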

  1. An interferometric imaging biosensor using weighted spectrum analysis to confirm DNA monolayer films with attogram sensitivity.

    PubMed

    Fu, Rongxin; Li, Qi; Wang, Ruliang; Xue, Ning; Lin, Xue; Su, Ya; Jiang, Kai; Jin, Xiangyu; Lin, Rongzan; Gan, Wupeng; Lu, Ying; Huang, Guoliang

    2018-05-01

    Interferometric imaging biosensors are powerful and convenient tools for confirming the existence of DNA monolayer films on silicon microarray platforms. However, their accuracy and sensitivity need further improvement because DNA molecules contribute an inconspicuous interferometric signal in both thickness and size. These weaknesses result in poor performance for low-DNA-content analyses and point mutation tests. In this paper, an interferometric imaging biosensor with weighted spectrum analysis is presented to confirm DNA monolayer films. The interferometric signal of the DNA molecules can be extracted, and quantitative detection results for DNA microarrays can then be reconstructed. With the proposed strategy, the relative error of thickness detection was reduced from 88.94% to merely 4.15%. The mass sensitivity per unit area of the proposed biosensor reached 20 attograms (ag). The sample consumption per unit area of the target DNA was only 62.5 zeptomoles (zmol), in a volume of 0.25 picolitres (pL). Compared with fluorescence resonance energy transfer (FRET), the measurement accuracy of the interferometric imaging biosensor with weighted spectrum analysis is insensitive to changes in spotting concentration and DNA length. The detection range was more than 1 µm. Moreover, single-nucleotide mismatches could be identified when combined with specific DNA ligation. A mutation experiment for lung cancer detection demonstrated the high selectivity and accurate analysis capability of the presented biosensor.

  2. Nonlinear Interferometric Vibrational Imaging (NIVI) with Novel Optical Sources

    NASA Astrophysics Data System (ADS)

    Boppart, Stephen A.; King, Matthew D.; Liu, Yuan; Tu, Haohua; Gruebele, Martin

    Optical imaging is essential in medicine and in fundamental studies of biological systems. Although many existing imaging modalities can supply valuable information, not all are capable of label-free imaging with high contrast and molecular specificity. The application of molecular or nanoparticle contrast agents may adversely influence the biological system under investigation. These substances also present ongoing concerns over toxicity or particle clearance, which must be properly addressed before their approval for in vivo human imaging. Hence there is an increasing appreciation for label-free imaging techniques. It is of primary importance to develop imaging techniques that can identify and quantify biochemical composition with high sensitivity and specificity using only the intrinsic optical response of endogenous molecular species. The development and use of nonlinear interferometric vibrational imaging, which is based on the interferometric detection of optical signals from coherent anti-Stokes Raman scattering (CARS), along with novel optical sources, offers the potential for label-free molecular imaging.

  3. PP and PS interferometric images of near-seafloor sediments

    USGS Publications Warehouse

    Haines, S.S.

    2011-01-01

    I present interferometric processing examples from an ocean-bottom cable (OBC) dataset collected at a water depth of 800 m in the Gulf of Mexico. Virtual source and receiver gathers created through cross-correlation of full wavefields show clear PP reflections and PS conversions from near-seafloor layers of interest. Virtual gathers from wavefield-separated data show improved PP and PS arrivals. PP and PS brute stacks from the wavefield-separated data compare favorably with images from a non-interferometric processing flow.

  4. Quantum dot-based local field imaging reveals plasmon-based interferometric logic in silver nanowire networks.

    PubMed

    Wei, Hong; Li, Zhipeng; Tian, Xiaorui; Wang, Zhuoxian; Cong, Fengzi; Liu, Ning; Zhang, Shunping; Nordlander, Peter; Halas, Naomi J; Xu, Hongxing

    2011-02-09

    We show that the local electric field distribution of propagating plasmons along silver nanowires can be imaged by coating the nanowires with a layer of quantum dots, held off the surface of the nanowire by a nanoscale dielectric spacer layer. In simple networks of silver nanowires with two optical inputs, control of the optical polarization and phase of the input fields directs the guided waves to a specific nanowire output. The QD-luminescent images of these structures reveal that a complete family of phase-dependent, interferometric logic functions can be performed on these simple networks. These results show the potential for plasmonic waveguides to support compact interferometric logic operations.

  5. Full-field OCT: ex vivo and in vivo biological imaging applications

    NASA Astrophysics Data System (ADS)

    Grieve, Katharine; Dubois, Arnaud; Moneron, Gael; Guyot, Elvire; Boccara, Albert C.

    2005-04-01

    We present results of studies in embryology and ophthalmology performed using our ultrahigh-resolution full-field OCT system. We also discuss recent developments to our ultrashort acquisition time full-field optical coherence tomography system designed to allow in vivo biological imaging. Preliminary results of high-speed imaging in biological samples are presented. The core of the experimental setup is the Linnik interferometer, illuminated by a white light source. En face tomographic images are obtained in real-time without scanning by computing the difference of two phase-opposed interferometric images recorded by high-resolution CCD cameras. An isotropic spatial resolution of ~1 μm is achieved thanks to the short source coherence length and the use of high numerical aperture microscope objectives. A detection sensitivity of ~90 dB is obtained by means of image averaging and pixel binning. In ophthalmology, reconstructed xz images from rat ocular tissue are presented, where cellular-level structures in the retina are revealed, demonstrating the unprecedented resolution of our instrument. Three-dimensional reconstructions of the mouse embryo allowing the study of the establishment of the anterior-posterior axis are shown. Finally we present the first results of embryonic imaging using the new rapid acquisition full-field OCT system, which offers an acquisition time of 10 μs per frame.
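
    The real-time en face reconstruction referred to is algebraically simple: with interferometric frames I(δ) = I_0 + A cos(φ + δ), the difference of two phase-opposed images,

        I(0) - I(\pi) = 2A\cos\phi

    cancels the incoherent background I_0 and isolates the interference term (four-phase variants recover the fringe envelope A itself).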

  6. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2010-07-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support. Over the past two years the LBC and the first LUCIFER instrument have been brought into routine scientific operation and MODS1 commissioning is set to begin in the fall of 2010.

  7. Resolving phase ambiguities in the calibration of redundant interferometric arrays: implications for array design

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy G.; Tarokh, Vahid; Rachlin, Yaron; Shah, Vinay N.; Ashcom, Jonathan B.

    2016-10-01

    We provide new results enabling robust interferometric image reconstruction in the presence of unknown aperture piston variation via the technique of redundant spacing calibration (RSC). The RSC technique uses redundant measurements of the same interferometric baseline with different pairs of apertures to reveal the piston variation among these pairs. In both optical and radio interferometry, the presence of phase-wrapping ambiguities in the measurements is a fundamental issue that needs to be addressed for reliable image reconstruction. In this paper, we show that these ambiguities affect recently developed RSC phasor-based reconstruction approaches operating on the complex visibilities, as well as traditional phase-based approaches operating on their logarithm. We also derive new sufficient conditions for an interferometric array to be immune to these ambiguities in the sense that their effect can be rendered benign in image reconstruction. This property, which we call wrap-invariance, has implications for the reliability of imaging via classical three-baseline phase closures as well as generalized closures. We show that wrap-invariance is conferred upon arrays whose interferometric graph satisfies a certain cycle-free condition. For cases in which this condition is not satisfied, a simple algorithm is provided for identifying those graph cycles which prevent its satisfaction. We apply this algorithm to diagnose and correct a member of a pattern family popular in the literature.

  8. A Fabry-Perot interferometric imaging spectrometer in LWIR

    NASA Astrophysics Data System (ADS)

    Zhang, Fang; Gao, Jiaobo; Wang, Nan; Wu, Jianghui; Meng, Hemin; Zhang, Lei; Gao, Shan

    2017-02-01

    With applications ranging from the desktop to remote sensing, long-wave infrared (LWIR) interferometric spectral imaging systems are typically bulky and heavy. To miniaturize and lighten the instrument, a new LWIR spectral imaging method based on a variable-gap Fabry-Perot (FP) interferometer is investigated. After analyzing the working principle of the system, we establish theoretically how to determine the primary parameters, such as the wedge angle of the interferometric cavity and the f-number of the imaging lens, as well as the relationship between the wedge angle and the modulation of the interferogram. A prototype was developed, and good experimental results were obtained with a uniform monochromatic radiation source. The research shows that besides high throughput and high spectral resolution, this method also achieves miniaturization.

  9. Interferometric inverse synthetic aperture radar imaging for space targets based on wideband direct sampling using two antennas

    NASA Astrophysics Data System (ADS)

    Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping

    2014-01-01

    Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling using two antennas. The system is easy to realize in engineering, since the motion trajectory of a space target can be known in advance, and it is simpler than a three-receiver configuration. In the preprocessing step, high-speed motion compensation is carried out by designing an adaptive matched filter containing the speed obtained from narrowband information. Then, coherent processing and the keystone transform for ISAR imaging are adopted to preserve the phase history at each antenna. Through appropriate configuration of the system, image registration and phase unwrapping can be avoided; for situations where this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The target's cross-range dimensions can then be obtained by interferometric processing of the two complex ISAR images. Experimental results prove the validity of the analysis and of the three-dimensional imaging algorithm.

  10. Quantitative phase imaging of biological cells and tissues using single-shot white light interference microscopy and phase subtraction method for extended range of measurement

    NASA Astrophysics Data System (ADS)

    Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem

    2016-03-01

    We present single-shot white light interference microscopy for quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and the colorful white light interferogram is recorded by a three-chip color CCD camera. The recorded interferogram is decomposed into its red, green, and blue wavelength component interferograms, which are processed to determine the refractive index at each color. The decomposed interferograms are analyzed using a local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that requires only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes at the millisecond level. From a single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase-shifting interferometry and Hilbert transform techniques.

  11. Optical detection of Trypanosoma cruzi in blood samples for diagnosis purpose

    NASA Astrophysics Data System (ADS)

    Alanis, Elvio; Romero, Graciela; Alvarez, Liliana; Martinez, Carlos C.; Basombrio, Miguel A.

    2004-10-01

    An optical method for detecting Trypanosoma cruzi (T. cruzi) parasites in blood samples of mice infected with Chagas disease is presented. The method is intended for use in human blood for diagnostic purposes. A thin layer of blood infected by T. cruzi parasites, at low concentration, is examined in an interferometric microscope in which images of the field of view are taken by a CCD camera and temporarily stored in the memory of a host computer. The whole sample is scanned by displacing the microscope stage with stepper motors driven by the computer. Several consecutive images of the same field are taken and digitally processed by temporal image differencing to detect whether a parasite is present in the field. Each field of view is processed in the same fashion until the full area of the sample is covered or until a parasite is detected, in which case an acoustic warning is activated and the corresponding image is displayed, permitting the technician to corroborate the result visually. The reliability of the method is discussed and compared with other well-established techniques.
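
    The detection step amounts to temporal frame differencing (Python/NumPy sketch; stage control and the acoustic warning are omitted, and `thresh` is a hypothetical threshold calibrated on parasite-free blood):

        import numpy as np

        def field_has_motion(frames, thresh):
            """Flag a field of view as containing a moving parasite when the
            mean absolute frame-to-frame change exceeds a threshold.
            frames: (N, H, W) stack of consecutive images of one field."""
            diffs = np.abs(np.diff(frames.astype(float), axis=0))
            return diffs.mean() > thresh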

  12. Label-free and live cell imaging by interferometric scattering microscopy.

    PubMed

    Park, Jin-Sung; Lee, Il-Buem; Moon, Hyeon-Min; Joo, Jong-Hyeon; Kim, Kyoung-Hoon; Hong, Seok-Cheol; Cho, Minhaeng

    2018-03-14

    Despite recent remarkable advances in microscopic techniques, it still remains very challenging to directly observe the complex structure of cytoplasmic organelles in live cells without a fluorescent label. Here we report label-free and live-cell imaging of mammalian cells, Escherichia coli, and yeast using interferometric scattering microscopy, which reveals the underlying structures of a variety of cytoplasmic organelles as well as the underside structure of the cells. The contact areas of the cells attached onto a glass substrate, e.g., focal adhesions and filopodia, are clearly discernible. We also found a variety of fringe-like features in the cytoplasmic area, which may reflect the folded structures of cytoplasmic organelles. We thus anticipate that label-free interferometric scattering microscopy can be used as a powerful tool to shed interferometric light on in vivo structures and dynamics of various intracellular phenomena.

  13. 3D-shape recognition and size measurement of irregular rough particles using multi-views interferometric out-of-focus imaging.

    PubMed

    Ouldarbi, L; Talbi, M; Coëtmellec, S; Lebrun, D; Gréhan, G; Perret, G; Brunel, M

    2016-11-10

    We realize simplified-tomography experiments on irregular rough particles using interferometric out-of-focus imaging. Using two angles of view, we determine the global 3D-shape, the dimensions, and the 3D-orientation of irregular rough particles whose morphologies belong to families such as sticks, plates, and crosses.

  14. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with a smaller number of significant coefficients than the wavelet transform guarantees. Additionally, denoising simulations show that curvelets could be a very good tool for removing noise from images.

  15. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    NASA Astrophysics Data System (ADS)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  16. Robust sparse image reconstruction of radio interferometric observations with PURIFY

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; McEwen, Jason D.; d'Avezac, Mayeul; Carrillo, Rafael E.; Onose, Alexandru; Wiaux, Yves

    2018-01-01

    Next-generation radio interferometers, such as the Square Kilometre Array, will revolutionize our understanding of the Universe through their unprecedented sensitivity and resolution. However, to realize these goals significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and scalability for big data. In this work, we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions while providing a computational saving and an analytic form. Secondly, we apply PURIFY to real interferometric observations from the Very Large Array and the Australia Telescope Compact Array and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Thirdly, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with developments presented in this work, is made publicly available.
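
    Schematically, the sparse reconstructions PURIFY performs are convex problems of the following flavor (illustrative analysis form; Φ denotes the measurement operator combining Fourier sampling with gridding/degridding, Ψ a sparsifying dictionary, and ε a noise bound):

        \min_{x \ge 0} \; \lVert \Psi^{\dagger} x \rVert_{1} \quad \text{subject to} \quad \lVert y - \Phi x \rVert_{2} \le \epsilon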

  17. INSAR Study Of Landslides In The Region Of Lake Sevan-Armenia

    NASA Astrophysics Data System (ADS)

    Lazarov, A.; Minchev, D.

    2012-01-01

    The region of Lake Sevan in Armenia is of theoretical and practical interest due to its very frequent landslide phenomena, caused by meteorological and hydrological factors. Under ESA Principal Investigator Number C1P-6051, four single-look complex images from the ASAR instrument on the ESA ENVISAT satellite, two from 2008 and two from 2009, covering the Lake Sevan region were obtained and thoroughly investigated. One image is designated as the master and the remaining three as slaves, so three interferometric pairs are produced. Data from the NASA SRTM mission are then applied to the interferometric pairs in order to remove topography from the interferograms. The three generated interferograms show decreasing coherence caused by strong temporal decorrelation, i.e., a decreasing level of agreement between the SLCs in each interferometric pair as the time between acquisitions grows.

  18. Interferometric redatuming by sparse inversion

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Herrmann, Felix J.

    2013-02-01

    Assuming that transmission responses are known between the surface and a particular depth level in the subsurface, seismic sources can be effectively mapped to this level by a process called interferometric redatuming. After redatuming, the obtained wavefields can be used for imaging below this particular depth level. Interferometric redatuming consists of two steps, namely (i) the decomposition of the observed wavefields into downgoing and upgoing constituents and (ii) a multidimensional deconvolution of the upgoing constituents with the downgoing constituents. While this method works in theory, sensitivity to noise and artefacts due to incomplete acquisition require a different formulation. In this letter, we demonstrate the benefits of formulating the two steps that undergird interferometric redatuming in terms of a transform-domain sparsity-promoting program. By exploiting the compressibility of seismic wavefields in the curvelet domain, the method not only becomes robust with respect to noise, but we are also able to remove certain artefacts while preserving the frequency content. Although we observe improvements when we promote sparsity in the redatumed data space, we expect better results if interferometric redatuming were combined or integrated with least-squares migration with sparsity promotion in the image space.
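
    Sparsity-promoting programs of this kind are typically solved with proximal iterations. As a generic illustration only (not the authors' solver, and with the curvelet synthesis absorbed into the abstract operator A), an iterative soft-thresholding sketch:

    ```python
    import numpy as np

    def ista(A, b, lam, n_iter=200):
        """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterative soft-thresholding."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of A^T A
        for _ in range(n_iter):
            z = x - step * (A.T @ (A @ x - b))        # gradient step on the data fit
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
        return x
    ```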

  19. A portfolio of products from the rapid terrain visualization interferometric SAR

    NASA Astrophysics Data System (ADS)

    Bickel, Douglas L.; Doerry, Armin W.

    2007-04-01

    The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor was built by Sandia National Laboratories for the Joint Programs Sustainment and Development (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieved better than HRTe Level IV position accuracy in near real-time. The system was flown on a deHavilland DHC-7 Army aircraft. This paper presents a collection of images and data products from the Rapid Terrain Visualization interferometric synthetic aperture radar. The imagery includes orthorectified images and DEMs from the RTV interferometric SAR radar.

  20. Particle tracking and extended object imaging by interferometric super resolution microscopy

    NASA Astrophysics Data System (ADS)

    Gdor, Itay; Yoo, Seunghwan; Wang, Xiaolei; Daddysman, Matthew; Wilton, Rosemarie; Ferrier, Nicola; Hereld, Mark; Cossairt, Oliver (Ollie); Katsaggelos, Aggelos; Scherer, Norbert F.

    2018-02-01

    An interferometric fluorescence microscope and a novel theoretical image reconstruction approach were developed and used to obtain super-resolution images of live biological samples and to enable dynamic real-time tracking. The tracking utilizes the information stored in the interference pattern of both the illuminating incoherent light and the emitted light. By periodically shifting the interferometer phase and applying a phase retrieval algorithm, we obtain information that allows localization with sub-2 nm axial resolution at 5 Hz.

  1. Dynamic frequency-domain interferometer for absolute distance measurements with high resolution

    NASA Astrophysics Data System (ADS)

    Weng, Jidong; Liu, Shenggang; Ma, Heli; Tao, Tianjiong; Wang, Xiang; Liu, Cangli; Tan, Hua

    2014-11-01

    A unique dynamic frequency-domain interferometer for absolute distance measurement has been developed recently. This paper presents the working principle of the new interferometric system, which uses a photonic crystal fiber to transmit the wide-spectrum light beams and a high-speed streak camera or frame camera to record the interference stripes. Preliminary measurements of the harmonic vibrations of a speaker driven by a radio, and of the changes in the tip clearance of a rotating gear wheel, show that this new type of interferometer can perform absolute distance measurements with both high time resolution and high distance resolution.

  2. Visible AO Observations at Halpha for Accreting Young Planets

    NASA Astrophysics Data System (ADS)

    Close, L. M.; Follette, K.; Males, J. R.; Morzinski, K.; Rodigas, T. J.; Hinz, P.; Wu, Y.-L.; Apai, D.; Najita, J.; Puglisi, A.; Esposito, S.; Riccardi, A.; Bailey, V.; Xompero, M.; Briguglio, R.; Weinberger, A.

    2014-01-01

    We utilized the new high-order (250-378 mode) Magellan Adaptive Optics system (MagAO) to obtain very high-resolution science in the visible with MagAO's VisAO CCD camera. In the good-to-median seeing conditions of Magellan (0.5-0.7'') we find MagAO delivers individual short-exposure images with as good as 19 mas optical resolution. Due to telescope vibrations, long-exposure (60 s) r' (0.63 μm) images are slightly coarser at FWHM = 23-29 mas (Strehl ~ 28%) with bright (R < 9 mag) guide stars. These are the highest-resolution filled-aperture images published to date. Images of the young (~ 1 Myr) Orion Trapezium θ1 Ori A, B, and C cluster members were obtained with VisAO. In particular, the 32 mas binary θ1 Ori C1C2 was easily resolved in non-interferometric images for the first time. Relative positions of the bright Trapezium binary stars were measured with ~ 0.6-5 mas accuracy. In the second commissioning run we were able to correct 378 modes and achieved good contrasts (Strehl > 20%) on young transition disks at Hα. We discuss the contrasts achieved at Hα and the possibility of detecting low-mass (~ 1-5 Mjup) planets (beyond 5 AU) with our new SAPPHIRES survey with MagAO at Hα.

  3. Control of Formation-Flying Multi-Element Space Interferometers with Direct Interferometer-Output Feedback

    NASA Technical Reports Server (NTRS)

    Lu, Hui-Ling; Cheng, H. L.; Lyon, Richard G.; Carpenter, Kenneth G.

    2007-01-01

    The long-baseline space interferometer concept involving formation flying of multiple spacecraft holds great promise as future space missions for high-resolution imagery. A major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to accurately control these spacecraft and their optics payloads in the specified configuration. Our research focuses on the determination of the optical errors to achieve fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present a suite of estimation tools that can effectively extract from the raw interferometric image relative x/y, piston translational and tip/tilt deviations at the exit pupil aperture. The use of these error estimates in achieving control of the interferometer elements is demonstrated using simulated as well as laboratory-collected interferometric stellar images.

  5. Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders.

    PubMed

    Işil, Çağatay; Yorulmaz, Mustafa; Solmaz, Berkan; Turhan, Adil Burak; Yurdakul, Celalettin; Ünlü, Selim; Ozbay, Ekmel; Koç, Aykut

    2018-04-01

    Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase the detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve the resolution of images of L-shaped nanostructures. During training, our method utilizes microscope image patches and their corresponding ground-truth image patches in order to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen input.
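
    As a rough structural illustration of the coupled-autoencoder idea only (the dense layers, their sizes, and the latent-coupling loss below are our assumptions, not the network of the paper):

    ```python
    import torch
    import torch.nn as nn

    class PatchAutoencoder(nn.Module):
        """Toy autoencoder for flattened image patches."""
        def __init__(self, n_pix=256, n_latent=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_pix, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
            self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_pix))

        def forward(self, x):
            return self.dec(self.enc(x))

    low_ae, truth_ae = PatchAutoencoder(), PatchAutoencoder()
    latent_map = nn.Linear(32, 32)   # couples the two latent spaces

    def coupled_loss(x_low, x_truth):
        """Reconstruction loss for each autoencoder plus a latent coupling term."""
        mse = nn.functional.mse_loss
        return (mse(low_ae(x_low), x_low)
                + mse(truth_ae(x_truth), x_truth)
                + mse(latent_map(low_ae.enc(x_low)), truth_ae.enc(x_truth)))
    ```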

  6. Evidence for a Population of High-Redshift Submillimeter Galaxies from Interferometric Imaging

    NASA Astrophysics Data System (ADS)

    Younger, Joshua D.; Fazio, Giovanni G.; Huang, Jia-Sheng; Yun, Min S.; Wilson, Grant W.; Ashby, Matthew L. N.; Gurwell, Mark A.; Lai, Kamson; Peck, Alison B.; Petitpas, Glen R.; Wilner, David J.; Iono, Daisuke; Kohno, Kotaro; Kawabe, Ryohei; Hughes, David H.; Aretxaga, Itziar; Webb, Tracy; Martínez-Sansigre, Alejo; Kim, Sungeun; Scott, Kimberly S.; Austermann, Jason; Perera, Thushara; Lowenthal, James D.; Schinnerer, Eva; Smolčić, Vernesa

    2007-12-01

    We have used the Submillimeter Array to image a flux-limited sample of seven submillimeter galaxies, selected by the AzTEC camera on the JCMT at 1.1 mm, in the COSMOS field at 890 μm with ~2" resolution. All of the sources, two radio-bright and five radio-dim, are detected as single point sources at high significance (>6 σ), with positions accurate to ~0.2" that enable counterpart identification at other wavelengths observed with similarly high angular resolution. All seven have IRAC counterparts, but only two have secure counterparts in deep HST ACS imaging. As compared to the two radio-bright sources in the sample, and those in previous studies, the five radio-dim sources in the sample (1) have systematically higher submillimeter-to-radio flux ratios, (2) have lower IRAC 3.6-8.0 μm fluxes, and (3) are not detected at 24 μm. These properties, combined with size constraints at 890 μm (θ<~1.2''), suggest that the radio-dim submillimeter galaxies represent a population of very dusty starbursts, with physical scales similar to local ultraluminous infrared galaxies, with an average redshift higher than radio-bright sources.

  7. Fizeau interferometric imaging of Io volcanism with LBTI/LMIRcam

    NASA Astrophysics Data System (ADS)

    Leisenring, J. M.; Hinz, P. M.; Skrutskie, M.; Skemer, A.; Woodward, C. E.; Veillet, C.; Arcidiacono, C.; Bailey, V.; Bertero, M.; Boccacci, P.; Conrad, A.; de Kleer, K.; de Pater, I.; Defrère, D.; Hill, J.; Hofmann, K.-H.; Kaltenegger, L.; La Camera, A.; Nelson, M. J.; Schertl, D.; Spencer, J.; Weigelt, G.; Wilson, J. C.

    2014-07-01

    The Large Binocular Telescope (LBT) houses two 8.4-meter mirrors separated by 14.4 meters on a common mount. Coherent combination of these two AO-corrected apertures via the LBT Interferometer (LBTI) produces Fizeau interferometric images with a spatial resolution equivalent to that of a 22.8-meter telescope and the light-gathering power of a single 11.8-meter mirror. Capitalizing on these unique capabilities, we used LBTI/LMIRcam to image thermal radiation from volcanic activity on the surface of Io at M-Band (4.8 μm) over a range of parallactic angles. At the distance of Io, the M-Band resolution of the interferometric baseline corresponds to a physical distance of ~135 km, enabling high-resolution monitoring of Io volcanism such as flares and outbursts inaccessible from other ground-based telescopes operating in this wavelength regime. Two deconvolution routines are used to recover the full spatial resolution of the combined images, resolving at least sixteen known volcanic hot spots. Coupling these observations with advanced image reconstruction algorithms demonstrates the versatility of Fizeau interferometry and realizes the LBT as the first in a series of extremely large telescopes.

  8. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.
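
    Since the generalized observables discussed here build on the classical ones, it may help to recall how a closure phase is formed. A minimal sketch follows; the toy per-station piston errors are ours, added to show the cancellation property.

    ```python
    import numpy as np

    def closure_phase(v12, v23, v31):
        """Closure phase: argument of the bispectrum around a telescope triangle."""
        return np.angle(v12 * v23 * v31)

    # Station-based atmospheric pistons cancel around the triangle:
    rng = np.random.default_rng(0)
    phi = rng.uniform(-np.pi, np.pi, size=3)       # piston error per station
    v12 = np.exp(1j * (0.3 + phi[0] - phi[1]))     # true baseline phases are
    v23 = np.exp(1j * (0.5 + phi[1] - phi[2]))     # 0.3, 0.5, and -0.8 rad
    v31 = np.exp(1j * (-0.8 + phi[2] - phi[0]))
    assert np.isclose(closure_phase(v12, v23, v31), 0.3 + 0.5 - 0.8)
    ```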

  9. Validation of nonlinear interferometric vibrational imaging as a molecular OCT technique by the use of Raman microscopy

    NASA Astrophysics Data System (ADS)

    Benalcazar, Wladimir A.; Jiang, Zhi; Marks, Daniel L.; Geddes, Joseph B.; Boppart, Stephen A.

    2009-02-01

    We validate a molecular imaging technique called Nonlinear Interferometric Vibrational Imaging (NIVI) by comparing vibrational spectra with those acquired from Raman microscopy. This broadband coherent anti-Stokes Raman scattering (CARS) technique uses heterodyne detection and OCT acquisition and design principles to interfere a CARS signal generated by a sample with a local oscillator signal generated separately by a four-wave mixing process. These are mixed and demodulated by spectral interferometry. Its confocal configuration allows the acquisition of 3D images based on endogenous molecular signatures. Images from both phantom and mammary tissues have been acquired by this instrument and its spectrum is compared with its spontaneous Raman signatures.

  10. Measuring droplet size distributions from overlapping interferometric particle images.

    PubMed

    Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem

    2015-02-01

    Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.

  11. Interferometric temporal focusing microscopy using three-photon excitation fluorescence.

    PubMed

    Toda, Keisuke; Isobe, Keisuke; Namiki, Kana; Kawano, Hiroyuki; Miyawaki, Atsushi; Midorikawa, Katsumi

    2018-04-01

    Super-resolution microscopy has become a powerful tool for biological research. However, its spatial resolution and imaging depth are limited, largely due to background light. Interferometric temporal focusing (ITF) microscopy, which combines structured illumination microscopy and three-photon excitation fluorescence microscopy, can overcome these limitations. Here, we demonstrate ITF microscopy using three-photon excitation fluorescence, which has a spatial resolution of 106 nm at an imaging depth of 100 µm with an excitation wavelength of 1060 nm.

  12. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2012-09-01

    An overview of instrumentation for the Large Binocular Telescope (LBT) is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' x 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the left and right direct F/15 Gregorian foci incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 2000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCI), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at the left and right front bent F/15 Gregorian foci and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction-limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development that can utilize the full 23-m baseline of the LBT include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). LBTI is currently undergoing commissioning on the LBT and utilizing the installed adaptive secondary mirrors in both single-sided and two-sided beam combination modes. In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. Over the past four years the LBC pair, LUCI1, and MODS1 have been commissioned and are now scheduled for routine partner science observations. The delivery of both LUCI2 and MODS2 is anticipated before the end of 2012. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  13. Quasi real-time analysis of mixed-phase clouds using interferometric out-of-focus imaging: development of an algorithm to assess liquid and ice water content

    NASA Astrophysics Data System (ADS)

    Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.

    2015-12-01

    According to changes in aircraft certification rules, instrumentation has to be developed to alert flight crews to potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the plane. Interferometric imaging offers an interesting solution: it is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be sped up to be compatible with the real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The proposed algorithm is based on the detection of each interferogram using the gradient pair vector method, which is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed-phase clouds, and finally tested and validated in laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, in cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.

  14. High-resolution imaging of biological tissue with full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Zhu, Yue; Gao, Wanrong

    2015-03-01

    A new full-field optical coherence tomography (FF-OCT) system with high resolution has been developed for imaging cells and tissues. In contrast to other FF-OCT systems illuminated with an optical fiber bundle, the proposed system uses an improved Köhler illumination arrangement with a halogen lamp. High numerical aperture microscope objectives were used for imaging, and a piezoelectric ceramic transducer (PZT) was used for phase shifting. En-face tomographic images are obtained by applying a five-step phase-shifting algorithm to a series of interferometric images recorded by a smart camera, and three-dimensional images can be generated from these tomographic images. Imaging of an Intel Pentium 4 processor chip demonstrated the ultrahigh resolution of the system (lateral resolution 0.8 μm), which approaches the theoretical resolution of 0.7 μm × 0.5 μm (lateral × axial). En-face images of onion cells show the excellent performance of the system in imaging biological tissues. Unstained pig stomach tissue was then imaged, and gastric pits could be easily recognized, providing evidence for the potential of FF-OCT in identifying gastric pits in pig stomach tissue. Finally, label-free, unstained ex vivo human liver tissues, both normal and tumor, were imaged with this FF-OCT system. The results show that the setup has potential for medical diagnosis applications such as liver cancer diagnosis.
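
    The abstract does not say which five-step algorithm is applied; the classic Hariharan five-frame formula is a representative choice and, under the assumption of pi/2 phase increments between frames, would be computed as in this sketch:

    ```python
    import numpy as np

    def five_step_phase(I1, I2, I3, I4, I5):
        """Hariharan five-step phase-shifting algorithm (pi/2 steps between frames).

        I1..I5 are interferometric images; returns the wrapped phase in
        radians, from which en-face tomographic contrast can be derived.
        """
        return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
    ```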

  15. The Research of Spectral Reconstruction for Large Aperture Static Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Lv, H.; Lee, Y.; Liu, R.; Fan, C.; Huang, Y.

    2018-04-01

    An imaging spectrometer directly or indirectly obtains the spectral information of surface features while acquiring the target image, which gives imaging spectroscopy a prominent advantage in the fine characterization of terrain features and makes it of great significance for geoscience and related disciplines. The interference data acquired by an interferometric imaging spectrometer are intermediate data, which must be reconstructed into high-quality spectral data before they can be used. Accurate spectral reconstruction is thus the main difficulty restricting the application of interferometric imaging spectroscopy. Taking an original image acquired by the Large Aperture Static Imaging Spectrometer as input, this experiment selected a pixel identified as crop by manual interpretation, then extracted and preprocessed its interferogram to recover the corresponding spectrum. The result shows that the reconstructed spectrum forms a small crest near the 0.55 μm wavelength with obvious troughs on both sides, and its relative reflection intensity rises abruptly around 0.7 μm, forming a steep slope. These characteristics are similar to the spectral reflectance curve of healthy green plants. The experimental result is therefore consistent with the visual interpretation, validating the effectiveness of the interferometric imaging spectrum reconstruction scheme proposed in this paper.
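
    At its core, the reconstruction pipeline described above amounts to Fourier inversion of each pixel's interferogram. A minimal single-pixel sketch follows; the bias removal, apodization window, and wavenumber scaling are generic choices, not the paper's exact preprocessing.

    ```python
    import numpy as np

    def spectrum_from_interferogram(ifgm, opd_step):
        """Recover a magnitude spectrum from a uniformly sampled interferogram.

        ifgm     : 1D interferogram of one pixel versus optical path difference
        opd_step : OPD increment between samples, in cm (wavenumbers in 1/cm)
        """
        ifgm = ifgm - ifgm.mean()               # remove the DC bias
        ifgm = ifgm * np.hanning(len(ifgm))     # apodize to suppress ringing
        spec = np.abs(np.fft.rfft(ifgm))
        sigma = np.fft.rfftfreq(len(ifgm), d=opd_step)   # wavenumber axis
        return sigma, spec
    ```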

  16. Two dimensional photoacoustic imaging using microfiber interferometric acoustic transducers

    NASA Astrophysics Data System (ADS)

    Wang, Xiu Xin; Li, Zhang Yong; Tian, Yin; Wang, Wei; Pang, Yu; Tam, Kin Yip

    2018-07-01

    A photoacoustic imaging transducer has been developed with a pair of wavelength-matched Bragg gratings (forming a Fabry-Perot cavity) inscribed on a short section of microfiber. A tunable laser, with its wavelength matched to one of the selected fringe slopes, was used to read out the acoustically induced wavelength shift. Interferometric fringes with high finesse in transmission significantly enhanced the sensitivity of the transducer even under very small acoustic perturbations. The performance of this transducer was evaluated through imaging studies of human hairs (∼98 μm in diameter); the spatial resolution is 300 μm. We have demonstrated that the transducer developed in this study is a versatile tool for photoacoustic imaging.

  17. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

    Imaging distant objects at high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence, effectively synthesizing a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere creates errors in the phase measurements, and since the measurements are taken simultaneously across a large bandwidth of light, the atmospheric piston error manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This rules out commonly used techniques such as closure phase, a standard approach in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has focused on forming an image from sub-Nyquist-sampled data in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image under these conditions and demonstrate its success both in simulation and in laboratory experiments.

  18. Improving the performance of interferometric imaging through the use of disturbance feedforward.

    PubMed

    Böhm, Michael; Glück, Martin; Keck, Alexander; Pott, Jörg-Uwe; Sawodny, Oliver

    2017-05-01

    In this paper, we present a disturbance compensation technique to improve the performance of interferometric imaging for extremely large ground-based telescopes, e.g., the Large Binocular Telescope (LBT), which serves as the application example in this contribution. The most significant disturbance sources at ground-based telescopes are wind-induced mechanical vibrations in the range of 8-60 Hz. Traditionally, their optical effect is eliminated by feedback systems, such as the adaptive optics control loop combined with a fringe tracking system within the interferometric instrument. In this paper, accelerometers are used to measure the vibrations. These measurements are used to estimate the motion of the mirrors, i.e., tip, tilt, and piston, with a dynamic estimator. Additional delay compensation methods are presented to cancel sensor network delays and actuator input delays, improving the estimation result even more, particularly at higher frequencies. Because various instruments benefit from the implementation of telescope vibration mitigation, the estimator is implemented as separate, independent software on the telescope, publishing the estimated values via multicast on the telescope's ethernet. Every client capable of using and correcting the estimated disturbances can subscribe and use these values in a feedforward for its compensation device, e.g., the deformable mirror, the piston mirror of LINC-NIRVANA, or the fast path length corrector of the Large Binocular Telescope Interferometer. This easy-to-use approach eventually leveraged the presented technology for interferometric use at the LBT and now significantly improves the sky coverage, performance, and operational robustness of interferometric imaging on a regular basis.

  19. Phase-space evolution of x-ray coherence in phase-sensitive imaging.

    PubMed

    Wu, Xizeng; Liu, Hong

    2008-08-01

    X-ray coherence evolution in the imaging process plays a key role for x-ray phase-sensitive imaging. In this work we present a phase-space formulation for phase-sensitive imaging. The theory is reformulated in terms of the cross-spectral density and associated Wigner distribution. The phase-space formulation enables an explicit and quantitative account of partial coherence effects on phase-sensitive imaging. The presented formulas for x-ray spectral density at the detector can be used for performing accurate phase retrieval and optimizing the phase-contrast visibility. The concept of phase-space shearing length derived from this phase-space formulation clarifies the spatial coherence requirement for phase-sensitive imaging with incoherent sources. The theory has been applied to x-ray Talbot interferometric imaging as well. The peak coherence condition derived reveals new insights into three-grating-based Talbot-interferometric imaging and gratings-based x-ray dark-field imaging.

  20. A novel lightweight Fizeau infrared interferometric imaging system

    NASA Astrophysics Data System (ADS)

    Hope, Douglas A.; Hart, Michael; Warner, Steve; Durney, Oli; Romeo, Robert

    2016-05-01

    Aperture synthesis imaging techniques using an interferometer provide a means to achieve imagery with spatial resolution equivalent to that of a conventional filled-aperture telescope at significantly reduced size, weight, and cost, an important implication for air- and space-borne persistent observing platforms. These concepts have been realized in SIRII (Space-based IR-imaging interferometer), a new lightweight, compact SWIR and MWIR imaging interferometer designed for space-based surveillance. The sensor is configured as a six-element Fizeau interferometer; it is scalable, lightweight, and uses structural components and main optics made of carbon fiber replicated polymer (CFRP) that are easy to fabricate and inexpensive. A three-element prototype of the SIRII imager has been constructed. The optics, detectors, and interferometric signal-processing principles draw on experience developed in ground-based astronomical applications designed to yield the highest sensitivity and resolution with cost-effective optical solutions. SIRII is being designed for technical intelligence from geostationary orbit. It has an instantaneous 6 x 6 mrad FOV and the ability to rapidly scan a 6 x 6 deg FOV with a minimal SNR. The interferometric design can be scaled to a larger equivalent filled aperture while minimizing weight and cost compared to a filled-aperture telescope of equivalent resolution. This scalability allows SIRII to address a range of IR-imaging scenarios.

  1. Large Binocular Telescope project

    NASA Astrophysics Data System (ADS)

    Hill, John M.; Salinari, Piero

    2000-08-01

    The Large Binocular Telescope (LBT) Project is a collaboration between institutions in Arizona, Germany, Italy, and Ohio. The telescope will have two 8.4 meter diameter primary mirrors phased on a common mounting with a 22.8 meter baseline. The second of the two borosilicate honeycomb primary mirrors for LBT is being cast at the Steward Observatory Mirror Lab this year. The baseline optical configuration of LBT includes adaptive infrared secondaries of a Gregorian design. The F/15 secondaries are undersized to provide a low thermal background focal plane which is unvignetted over a 4 arcminute diameter field-of-view. The interferometric focus combining the light from the two 8.4 meter primaries will reimage the two folded Gregorian focal planes to three central locations. The telescope elevation structure accommodates swing-arm spiders which allow rapid interchange of the various secondary and tertiary mirrors as well as prime focus cameras. Maximum stiffness and minimal thermal disturbance were important drivers for the design of the telescope in order to provide the best possible images for interferometric observations. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in-situ on the telescope. The telescope structure is being fabricated in Italy by Ansaldo Energia S.p.A. in Milan. After pre-erection in the factory, the telescope will be shipped to Arizona in early 2001. The enclosure is being built on Mt. Graham under the auspices of Hart Construction Management Services of Safford, Arizona. The enclosure will be completed by late 2001 and ready for telescope installation.

  2. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e., spatially band-limited. Finally, we show through simulations the efficiency of our method for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
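
    As a toy illustration of the block-coordinate idea only (real quantities, one scalar gain per measurement, and no DDE smoothness prior, so this is far simpler than the published method):

    ```python
    import numpy as np

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def joint_calibrate_image(A, y, lam, n_outer=50, n_inner=10):
        """Alternating proximal updates for y = diag(g) @ A @ x with sparse x."""
        x, g = np.zeros(A.shape[1]), np.ones(A.shape[0])
        for _ in range(n_outer):
            B = A * g[:, None]                      # current calibrated operator
            step = 1.0 / np.linalg.norm(B, 2) ** 2
            for _ in range(n_inner):                # image block (forward-backward)
                x = soft(x - step * B.T @ (B @ x - y), step * lam)
            Ax = A @ x                              # gain block (closed form per row)
            g = (y * Ax) / (Ax ** 2 + 1e-12)
        return x, g
    ```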

  3. Interferometric scattering (iSCAT) microscopy: studies of biological membrane dynamics

    NASA Astrophysics Data System (ADS)

    Reina, Francesco; Galiani, Silvia; Shrestha, Dilip; Sezgin, Erdinc; Lagerholm, B. Christoffer; Cole, Daniel; Kukura, Philipp; Eggeling, Christian

    2018-02-01

    The study of the organization and dynamics of molecules in model and cellular membranes is an important topic in contemporary biophysics. Imaging and single-particle tracking in this field, however, prove particularly demanding, as they require simultaneously high spatio-temporal resolution and high signal-to-noise ratios. A remedy to this challenge might be Interferometric Scattering (iSCAT) microscopy, due to its fast sampling rates, label-free imaging capabilities and, most importantly, tuneable signal level output. Here we report our recent advances in imaging and molecular tracking on phase-separated model membrane systems and live-cell membranes using this technique.

  4. Camera-based micro interferometer for distance sensing

    NASA Astrophysics Data System (ADS)

    Will, Matthias; Schädel, Martin; Ortlepp, Thomas

    2017-12-01

    Interference of light provides a high-precision, non-contact, and fast method for distance measurement, and this technology therefore dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive, or inductive methods dominate. The reason is that an interferometric system has to be precisely adjusted and needs high mechanical stability. As a result, such systems are usually high-priced and complex, and unsuitable for compact sensor applications. To overcome this, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector, and machine vision routines to realize a demonstrator for a Michelson-type micro interferometer. We demonstrate a low-cost sensor smaller than 1 cm³, including all electronics, with distance sensing up to 30 cm and resolution in the nm range.

  5. Compact LWIR sensors using spatial interferometric technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bingham, Adam L.; Lucey, Paul G.; Knobbe, Edward T.

    2017-05-01

    Recent developments in reducing the cost and mass of hyperspectral sensors have enabled more widespread use for short-range compositional imaging applications. Hyperspectral imaging (HSI) in the long wave infrared (LWIR) is of interest because it is sensitive to spectral phenomena not accessible at other wavelengths, and because of its inherent thermal imaging capability. At Spectrum Photonics we have pursued compact LWIR hyperspectral sensors using both microbolometer arrays and compact cryogenic detector cameras. Our microbolometer-based systems are principally aimed at short standoff applications; they currently weigh 10-15 lbs, measure approximately 20 x 20 x 10 cm, offer sensitivity in the 1-2 microflick range, and have imaging times on the order of 30 seconds. Our systems that employ cryogenic arrays are aimed at medium standoff ranges such as nadir-looking missions from UAVs. Recent work with cooled sensors has focused on Strained Layer Superlattice (SLS) technology, as these detector arrays are undergoing rapid improvements and have some advantages over HgCdTe detectors in terms of calibration stability. These sensors include full on-board processing and sensor stabilization, so they are somewhat larger than the microbolometer systems, but they could be adapted to much more compact form factors. We will review our recent progress in both application areas.

  6. Interferometric and nonlinear-optical spectral-imaging techniques for outer space and live cells

    NASA Astrophysics Data System (ADS)

    Itoh, Kazuyoshi

    2015-12-01

    Multidimensional signals such as spectral images allow us to gain deeper insight into the nature of objects. In this paper, spectral imaging techniques based on optical interferometry and nonlinear optics are presented. The interferometric imaging technique is based on the unified theory of the Van Cittert-Zernike and Wiener-Khintchine theorems and allows us to retrieve a spectral image of an object in the far zone from the 3D spatial coherence function. The retrieval principle is explained using a very simple object. Promising applications to space interferometers for astronomy that are currently in progress are also briefly touched on. An interesting extension of interferometric spectral imaging is a combined 3D and spectral imaging technique that records 4D information about objects, where the 3D and spectral information is retrieved from the cross-spectral density function of the optical field. The 3D imaging is realized via numerical inverse propagation of the cross-spectral density; a few recently suggested techniques are introduced. Lastly, a nonlinear optical technique that utilizes stimulated Raman scattering (SRS) for spectral imaging of biomedical targets is presented. The strong SRS signals permit us to obtain vibrational information from molecules in live cells or tissue in real time. Vibrational information from unstained or unlabeled molecules is crucial, especially for medical applications. The 3D information provided by the optical nonlinearity is also an attractive feature of SRS spectral microscopy.

  7. Hyperspectral imaging with in-line interferometric femtosecond stimulated Raman scattering spectroscopy

    NASA Astrophysics Data System (ADS)

    Dobner, Sven; Fallnich, Carsten

    2014-02-01

    We present the hyperspectral imaging capabilities of in-line interferometric femtosecond stimulated Raman scattering. The beneficial features of this method, namely the improved signal-to-background ratio compared to other applicable broadband stimulated Raman scattering methods and the simple experimental implementation, allow for rather fast acquisition of three-dimensional raster-scanned hyperspectral data sets, which is demonstrated for PMMA beads and a lipid droplet in water. A subsequent principal component analysis displays the chemical selectivity of the method.

  8. Process for combining multiple passes of interferometric SAR data

    DOEpatents

    Bickel, Douglas L.; Yocky, David A.; Hensley, Jr., William H.

    2000-11-21

    Interferometric synthetic aperture radar (IFSAR) is a promising technology for a wide variety of military and civilian elevation modeling requirements. IFSAR extends traditional two dimensional SAR processing to three dimensions by utilizing the phase difference between two SAR images taken from different elevation positions to determine an angle of arrival for each pixel in the scene. This angle, together with the two-dimensional location information in the traditional SAR image, can be transformed into geographic coordinates if the position and motion parameters of the antennas are known accurately.
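
    In simplified form, the angle-of-arrival step reads the look angle out of the unwrapped phase and converts it to height. A sketch under textbook single-pass assumptions (flat Earth, baseline tilt angle alpha, one-way path difference); the actual patented processing chain is more involved:

    ```python
    import numpy as np

    def ifsar_height(phase, wavelength, baseline, alpha, slant_range, altitude):
        """Height from unwrapped IFSAR phase, simplified single-pass geometry.

        The phase relates to the look angle theta via
            phase = (2 * pi / wavelength) * baseline * sin(theta - alpha),
        and the target height is then altitude - slant_range * cos(theta).
        """
        s = wavelength * phase / (2.0 * np.pi * baseline)
        theta = alpha + np.arcsin(np.clip(s, -1.0, 1.0))
        return altitude - slant_range * np.cos(theta)
    ```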

  9. Extracting DEM from airborne X-band data based on PolInSAR

    NASA Astrophysics Data System (ADS)

    Hou, X. X.; Huang, G. M.; Zhao, Z.

    2015-06-01

    Polarimetric Interferometric Synthetic Aperture Radar (PolInSAR) is a new trend in SAR remote sensing that combines polarimetric multichannel information with interferometric information. It is of great significance for extracting DEMs in regions where DEM precision is otherwise low, such as vegetated areas and areas with concentrated buildings. In this paper we describe our experiments with high-resolution X-band fully polarimetric SAR data acquired by a dual-baseline interferometric airborne SAR system over an area of Danling in southern China. The Pauli algorithm is used to generate the dual-polarimetric interferometry data; Singular Value Decomposition (SVD), Numerical Radius (NR), and Phase Diversity (PD) methods are used to generate the fully polarimetric interferometry data. The polarimetric interferometric information is then used to extract the DEM through prefiltering, image registration, image resampling, coherence optimization, multilook processing, flat-earth removal, interferogram filtering, phase unwrapping, parameter calibration, height derivation, and geocoding. The processing system, named SARPlore, has been developed in VC++ under the leadership of the Chinese Academy of Surveying and Mapping. Finally, comparing the optimized results with single-polarimetric interferometry shows that the optimization methods can reduce interferometric noise and phase-unwrapping residuals and improve the precision of the DEM. The result of fully polarimetric interferometry is better than that of dual-polarimetric interferometry, and the improvement from fully polarimetric interferometry varies with terrain.

  10. Space Radar Image of Kilauea, Hawaii - Interferometry 1

    NASA Image and Video Library

    1999-05-01

    This X-band image of the volcano Kilauea was taken on October 4, 1994, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The area shown is about 9 kilometers by 13 kilometers (5.5 miles by 8 miles) and is centered at about 19.58 degrees north latitude and 155.55 degrees west longitude. This image and a similar image taken during the first flight of the radar instrument on April 13, 1994 were combined to produce the topographic information by means of an interferometric process. This is a process by which radar data acquired on different passes of the space shuttle are overlaid to obtain elevation information. Three additional images are provided showing an overlay of radar data with interferometric fringes; a three-dimensional image based on altitude lines; and, finally, a topographic view of the region. http://photojournal.jpl.nasa.gov/catalog/PIA01763

  11. Optical spatial heterodyne interferometric Fourier transform technique (OSHIFT) and a resulting interferometer

    NASA Astrophysics Data System (ADS)

    Georges, James A., III

    2007-09-01

    This article reports on the novel patent-pending Optical Spatial Heterodyne Interferometric Fourier Transform technique (the OSHIFT technique), the resulting interferometer, also referred to as OSHIFT, and its preliminary results. OSHIFT was born out of the following requirements: wavefront sensitivity on the order of 1/100 waves, high-frequency spatial sampling of the wavefront, snapshot 100 Hz operation, and the ability to deal with discontinuous wavefronts. The first two capabilities lend themselves to the use of traditional interferometric techniques; however, the last two prove difficult for standard techniques, e.g., phase-shifting interferometry tends to take a time sequence of images, and most interferometers require estimation of a center fringe across wavefront discontinuities. OSHIFT overcomes these challenges by employing a spatial heterodyning concept in the Fourier (image) plane of the optic under test. This concept, the mathematical theory, an autocorrelation view of its operation, and the design of OSHIFT, together with results, will be discussed. Also discussed will be future concepts, such as a sensor that could interrogate an entire imaging system, as well as a methodology for creating innovative imaging systems that encode wavefront information onto the image. Certain techniques and systems described in this paper are the subject of a patent application currently pending in the United States Patent Office.

  12. A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi

    2014-11-01

    We report on the development of an easily deployable LF near-field interferometric-time of arrival (TOA) 3-D Lightning Mapping Array applied to imaging of entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond time resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources. This is due mainly to the improved mapping of continuous lightning processes by using this type of hybrid interferometry/TOA processing method. We show with five station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
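
    The windowed two-sensor time differences at the heart of this hybrid method come from a cross-correlation peak. A minimal sketch follows; the sub-sample peak refinement that the real system needs for submicrosecond resolution (e.g., parabolic interpolation) is omitted.

    ```python
    import numpy as np

    def time_difference(sig_a, sig_b, fs):
        """Arrival-time difference between two windowed sensor waveforms.

        Returns the delay (in seconds) of sig_a relative to sig_b: positive
        if the emission reaches sensor A after sensor B.
        """
        xc = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(xc) - (len(sig_b) - 1)    # sample offset of the peak
        return lag / fs
    ```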

  13. Large Binocular Telescope Observations of Europa Occulting Io's Volcanoes at 4.8 μm

    NASA Astrophysics Data System (ADS)

    Skrutskie, Michael F.; Conrad, Albert; Resnick, Aaron; Leisenring, Jarron; Hinz, Phil; de Pater, Imke; de Kleer, Katherine; Spencer, John; Skemer, Andrew; Woodward, Charles E.; Davies, Ashley Gerard; Defrére, Denis

    2015-11-01

    On 8 March 2015 Europa passed nearly centrally in front of Io. The Large Binocular Telescope observed this event in dual-aperture, AO-corrected Fizeau interferometric imaging mode using the mid-infrared imager LMIRcam operating behind the Large Binocular Telescope Interferometer (LBTI) at a broadband wavelength of 4.8 μm (M-band). Occultation light curves generated from frames recorded every 123 milliseconds show that both Loki and Pele/Pillan were well resolved. Europa's center shifted by 2 kilometers relative to Io from frame to frame. The derived light curve for Loki is consistent with the double-lobed structure reported by Conrad et al. (2015) using direct interferometric imaging with LBTI.

  14. Ortho-Babinet polarization-interrogating filter: an interferometric approach to polarization measurement.

    PubMed

    Van Delden, Jay S

    2003-07-15

    A novel, interferometric, polarization-interrogating filter assembly and method for the simultaneous measurement of all four Stokes parameters across a partially polarized irradiance image in a no-moving-parts, instantaneous, highly sensitive manner is described. In the reported embodiment of the filter, two spatially varying linear retarders and a linear polarizer comprise an ortho-Babinet, polarization-interrogating (OBPI) filter. The OBPI filter uniquely encodes the incident ensemble of electromagnetic wave fronts comprising a partially polarized irradiance image in a controlled, deterministic, spatially varying manner to map the complete state of polarization across the image to local variations in a superposed interference pattern. Experimental interferograms are reported along with a numerical simulation of the method.

  15. The ALMA Science Pipeline: Current Status

    NASA Astrophysics Data System (ADS)

    Humphreys, Elizabeth; Miura, Rie; Brogan, Crystal L.; Hibbard, John; Hunter, Todd R.; Indebetouw, Remy

    2016-09-01

    The ALMA Science Pipeline is being developed for the automated calibration and imaging of ALMA interferometric and single-dish data. The calibration pipeline for interferometric data was accepted for use by ALMA Science Operations in 2014, and end-to-end processing of single-dish data in 2015. However, work is ongoing to expand the use cases for which the Pipeline can be used, e.g., higher-frequency and lower signal-to-noise datasets, and new observing modes. A current focus is the commissioning of science-target imaging for interferometric data. For the Single-Dish Pipeline, the line-finding algorithm used in baseline subtraction and the baseline-flagging heuristics have been greatly improved since the prototype used for data from the previous cycle; these algorithms, unique to the Pipeline, produce better results than standard manual processing in many cases. In this poster, we report on the current status of the Pipeline capabilities, present initial results from the Imaging Pipeline, and describe the smart line-finding and flagging algorithm used in the Single-Dish Pipeline. The Pipeline is released as part of CASA (the Common Astronomy Software Applications package).

  16. Autonomous Navigation for Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Bhaskaran, Shyam

    2012-01-01

    Navigation (determining where the spacecraft is at any given time, controlling its path to achieve desired targets) is performed using ground-in-the-loop techniques: (1) Data includes 2-way radiometric (Doppler, range), interferometric (Delta-Differential One-way Range), and optical (images of natural bodies taken by onboard camera) (2) Data received on the ground, processed to determine orbit, commands sent to execute maneuvers to control orbit. A self-contained, onboard, autonomous navigation system can: (1) Eliminate delays due to round-trip light time (2) Eliminate the human factors in ground-based processing (3) Reduce turnaround time from navigation update to maneuver from minutes down to seconds (4) React to late-breaking data. At JPL, we have developed the framework and computational elements of an autonomous navigation system, called AutoNav. It was originally developed as one of the technologies for the Deep Space 1 mission, launched in 1998; it was subsequently used on three other spacecraft, for four different missions. The primary use has been on comet missions, to track comets during flybys and to impact one comet.

  17. Imaging Active Giants and Comparisons to Doppler Imaging

    NASA Astrophysics Data System (ADS)

    Roettenbacher, Rachael

    2018-04-01

    In the outer layers of cool, giant stars, stellar magnetism stifles convection, creating localized starspots analogous to sunspots. Because they frequently cover much larger regions of the stellar surface than sunspots, starspots on giant stars have been imaged using a variety of techniques to understand, for example, stellar magnetism, differential rotation, and spot evolution. Active giants have been imaged using photometric, spectroscopic, and, only recently, interferometric observations. Interferometry has provided a way to unambiguously see stellar surfaces without the degeneracies experienced by other methods. The only facility presently capable of obtaining the sub-milliarcsecond resolution necessary to resolve not only some giant stars but also features on their surfaces is the Center for High Angular Resolution Astronomy (CHARA) Array. Here, an overview is given of the results of imaging active giants, with details on recent comparisons of simultaneous interferometric and Doppler images.

  18. Non-interferometric phase retrieval using refractive index manipulation.

    PubMed

    Chen, Chyong-Hua; Hsu, Hsin-Feng; Chen, Hou-Ren; Hsieh, Wen-Feng

    2017-04-07

    We present a novel, inexpensive, non-interferometric technique to retrieve phase images using a liquid crystal phase shifter with no physically moving parts. First, we derive a new equation for the intensity-phase relation with respect to the change of refractive index, which is similar to the transport-of-intensity equation. The equation indicates that this technique does not need to account for magnification variations between optical images. As a proof of concept, we use a liquid crystal mixture, MLC 2144, to manufacture a phase shifter and capture optical images in rapid succession by electrically tuning the applied voltage of the phase shifter. Experimental results demonstrate that this technique is capable of reconstructing high-resolution phase images and of quantitatively determining the thickness profile of a microlens array.
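
    The paper's equation is stated to share the structure of the transport-of-intensity equation (TIE), whose uniform-intensity form, laplacian(phi) = -(k / I0) * dI/dz, has a standard Fourier-space solution. A sketch of that classical solver follows, with the axial intensity derivative standing in for the paper's refractive-index derivative; this is an analogy, not the authors' exact formulation.

    ```python
    import numpy as np

    def tie_phase(dI_dz, I0, k, dx):
        """Solve laplacian(phi) = -(k / I0) * dI_dz via a Fourier inverse Laplacian."""
        ny, nx = dI_dz.shape
        qy = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
        qx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
        q2 = qy[:, None] ** 2 + qx[None, :] ** 2
        q2[0, 0] = 1.0                        # placeholder; DC term fixed below
        F = np.fft.fft2(-(k / I0) * dI_dz) / (-q2)
        F[0, 0] = 0.0                         # mean phase is unconstrained
        return np.real(np.fft.ifft2(F))
    ```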

  19. Computational adaptive optics for broadband interferometric tomography of tissues and cells

    NASA Astrophysics Data System (ADS)

    Adie, Steven G.; Mulligan, Jeffrey A.

    2016-03-01

    Adaptive optics (AO) can shape aberrated optical wavefronts to physically restore the constructive interference needed for high-resolution imaging. With access to the complex optical field, however, many functions of optical hardware can be achieved computationally, including focusing and the compensation of optical aberrations to restore the constructive interference required for diffraction-limited imaging performance. Holography, which employs interferometric detection of the complex optical field, was developed based on this connection between hardware and computational image formation, although this link has only recently been exploited for 3D tomographic imaging in scattering biological tissues. This talk will present the underlying imaging science behind computational image formation with optical coherence tomography (OCT), a beam-scanned version of broadband digital holography. Analogous to hardware AO (HAO), we demonstrate computational adaptive optics (CAO) and optimization of the computed pupil correction in 'sensorless mode' (Zernike polynomial corrections with feedback from image metrics) or with the use of 'guide stars' in the sample. We discuss the concept of an 'isotomic volume' as the volumetric extension of the 'isoplanatic patch' introduced in astronomical AO. Recent CAO results and ongoing work are highlighted to point to the potential biomedical impact of computed broadband interferometric tomography. We also discuss the advantages and disadvantages of HAO versus CAO for the effective shaping of optical wavefronts, and highlight opportunities for hybrid approaches that synergistically combine the unique advantages of hardware and computational methods for rapid volumetric tomography with cellular resolution.

  20. The flight test of Pi-SAR(L) for the repeat-pass interferometric SAR

    NASA Astrophysics Data System (ADS)

    Nohmi, Hitoshi; Shimada, Masanobu; Miyawaki, Masanori

    2006-09-01

    This paper describes a repeat-pass interferometric SAR experiment using Pi-SAR(L). Airborne repeat-pass interferometric SAR is expected to be an effective method for detecting landslides or predicting volcanic eruptions. To obtain a high-quality interferometric image, it is necessary to make two flights along the same flight path. In addition, since the antenna of the Pi-SAR(L) is fixed to the aircraft, it is necessary to fly at the same drift angle to keep the observation direction the same. We built a flight control system around the autopilot already installed in the airplane. This navigation system measures position and altitude precisely using differential GPS, and the PC navigator outputs the deviation from the desired course to the autopilot. Since the air density is lower and the speed higher than in the landing situation, the gain of the control system has to be adjusted during the repeat-pass flight. The observation direction could be controlled to some extent by adjusting the drift angle through flight speed control. The repeat-pass flights were conducted in Japan over three days in late November. The flights were stable, and the deviation was within a few meters in both the horizontal and vertical directions, even in gusty conditions. The SAR data were processed in the time domain based on the range-Doppler algorithm to achieve complete motion compensation. The interferometric image processed after precise phase compensation is shown.

  1. The 2014 interferometric imaging beauty contest

    NASA Astrophysics Data System (ADS)

    Monnier, John D.; Berger, Jean-Philippe; Le Bouquin, Jean-Baptiste; Tuthill, Peter G.; Wittkowski, Markus; Grellmann, Rebekka; Müller, André; Renganswany, Sridhar; Hummel, Christian; Hofmann, Karl-Heinz; Schertl, Dieter; Weigelt, Gerd; Young, John; Buscher, David; Sanchez-Bermudez, Joel; Alberdi, Antxon; Schoedel, Rainer; Köhler, Rainer; Soulez, Ferréol; Thiébaut, Éric; Kluska, Jacques; Malbet, Fabien; Duvert, Gilles; Kraus, Stefan; Kloppenborg, Brian K.; Baron, Fabien; de Wit, Willem-Jan; Rivinius, Thomas; Merand, Antoine

    2014-07-01

    Here we present the results of the 6th biennial optical interferometry imaging beauty contest. Taking advantage of a unique opportunity, the red supergiant VY CMa and the Mira variable R Car were observed in the astronomical H-band with three 4-telescope configurations of the VLTI-AT array using the PIONIER instrument. The community was invited to participate in the subsequent image reconstruction and interpretation phases of the project. Ten groups submitted entries to the beauty contest, and we found reasonable consistency between images obtained by independent workers using quite different algorithms. We also found that significant differences existed between the submitted images, much greater than in past beauty contests, which were all based on simulated data. A novel "crowd-sourcing" method allowed consensus median images to be constructed, filtering likely artifacts and retaining real features. We definitively detect strong spots on the surfaces of both stars as well as distinct circumstellar shells of emission (likely water/CO) around R Car. In a close contest, Joel Sanchez (IAA-CSIC/Spain) was named the winner of the 2014 interferometric imaging beauty contest. This process has shown that "newcomers" can use publicly available imaging software to interpret VLTI/PIONIER imaging data, as long as sufficient observations are taken to provide complete uv coverage, a luxury that is often missing. We urge proposers to request adequate observing nights to collect sufficient data for imaging, and time allocation committees to recognise the importance of uv coverage for reliable interpretation of interferometric data. We believe that the result of the proposed broad international project will contribute to inspiring trust in the image reconstruction processes in optical interferometry.
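
    The "crowd-sourcing" consensus step lends itself to a short numerical illustration: given co-registered, flux-normalized reconstructions from the different groups, a pixel-wise median keeps features on which most images agree and rejects artifacts peculiar to a single algorithm. The sketch below captures only this median-combining idea, under assumed registration and normalization, and not the contest's full pipeline.

      import numpy as np

      def consensus_median(images):
          # images: list of co-registered 2-D reconstructions of equal size.
          # Normalizing total flux first keeps the median meaningful across
          # algorithms with different photometric scalings.
          stack = np.stack([im / im.sum() for im in images])
          return np.median(stack, axis=0)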

  2. Computational adaptive optics for broadband optical interferometric tomography of biological tissue

    NASA Astrophysics Data System (ADS)

    Boppart, Stephen A.

    2015-03-01

    High-resolution real-time tomography of biological tissues is important for many areas of biological investigation and medical application. Cellular-level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth of field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography with enhanced depth sensitivity over a depth of field extended by more than an order of magnitude, in real time. Subsequently, aberration correction with CAO can be performed on a tomogram, rather than on the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.
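
    The Fourier-domain resampling at the heart of ISAM can be illustrated in two dimensions: for each transverse spatial frequency, the spectrum measured on the optical-wavenumber grid is re-interpolated so that it becomes uniform in the axial spatial frequency. The sketch below is a bare-bones illustration under a double-pass, single-scattering assumption; grids, shapes, and the interpolation scheme are placeholders rather than the published algorithm.

      import numpy as np
      from scipy.interpolate import interp1d

      def isam_resample(spec_qk, k, qx):
          # spec_qk: complex spectrum of shape (len(qx), len(k)), i.e. the
          # FFT of the OCT data along the transverse coordinate at each
          # optical wavenumber k.
          beta = 2.0 * k  # target axial-frequency grid (double pass)
          out = np.zeros_like(spec_qk)
          for i, q in enumerate(qx):
              # ISAM relation beta = sqrt(4k^2 - q^2): sample the measured
              # spectrum at wavenumbers mapping onto a uniform beta grid.
              k_needed = np.sqrt(beta ** 2 + q ** 2) / 2.0
              interp = interp1d(k, spec_qk[i], bounds_error=False,
                                fill_value=0.0)
              out[i] = interp(k_needed)
          return out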

  3. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.

    PubMed

    Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A

    2012-05-08

    Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach offers neither the flexibility of simultaneously correcting aberrations for all imaging depths nor the adaptability to correct sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram, rather than in the beam, of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single-scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.

  4. The high resolution optical instruments for the Pleiades HR Earth observation satellites

    NASA Astrophysics Data System (ADS)

    Gaudin-Delrieu, Catherine; Lamard, Jean-Luc; Cheroutre, Philippe; Bailly, Bruno; Dhuicq, Pierre; Puig, Olivier

    2017-11-01

    Coming after the SPOT satellite series, PLEIADES-HR is a CNES optical high-resolution satellite dedicated to Earth observation, part of a larger optical and radar multi-sensor system, ORFEO, developed in cooperation between France and Italy for dual civilian and defense use. The development of the two PLEIADES-HR cameras was entrusted by CNES to Thales Alenia Space. This new generation of instrument represents a breakthrough in comparison with the previous SPOT instruments, owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. The PLEIADES-HR instrument program benefits from Thales Alenia Space's long and successful heritage in Earth observation from space. The proposed solution makes extensive use of existing products and of the Cannes Space Optics Centre facilities, unique in Europe and dedicated to high-resolution instruments. The optical camera provides wide-field panchromatic images supplemented by four multispectral channels with narrow spectral bands. The optical concept is based on a four-mirror Korsch telescope. Crucial improvements in detector technology, optical fabrication, and electronics make it possible for the PLEIADES-HR instrument to achieve the image-quality requirements while respecting the drastic limitations on mass and volume imposed by the satellite's agility needs and compatibility with small launchers. The two flight telescopes were integrated, aligned, and tested. After the integration phase, the alignment, based mainly on interferometric measurements in a vacuum chamber, was successfully achieved within tight accuracy requirements. The wavefront measurements show outstanding performance, confirmed, after the integration of the PFM Detection Unit, by MTF measurements on the Proto-Flight Model instrument. Delivery of the proto-flight model occurred in mid-2008. The FM2 instrument delivery is planned for Q2 2009. The first optical satellite of the PLEIADES-HR constellation is foreseen to launch at the beginning of 2010; the second will follow at the beginning of 2011.

  5. Ground Deformation from Chilean Volcanic Eruption Shown by Satellite Radar Image

    NASA Image and Video Library

    2015-04-29

    This satellite interferometric synthetic aperture radar image pair shows the relative deformation of the Earth's surface when, on April 22-23, 2015, significant explosive eruptions occurred at Calbuco volcano, Chile.

  6. Non-interferometric phase retrieval using refractive index manipulation

    PubMed Central

    Chen, Chyong-Hua; Hsu, Hsin-Feng; Chen, Hou-Ren; Hsieh, Wen-Feng

    2017-01-01

    We present a novel, inexpensive, non-interferometric technique for retrieving phase images using a liquid crystal phase shifter with no physically moving parts. First, we derive a new equation relating intensity and phase through the change of refractive index, analogous to the transport-of-intensity equation. The equation shows that the technique does not need to account for magnification variations between the optical images. As a proof of concept, we use the liquid crystal mixture MLC 2144 to fabricate a phase shifter and capture optical images in rapid succession by electrically tuning its applied voltage. Experimental results demonstrate that the technique can reconstruct high-resolution phase images and quantitatively recover the thickness profile of a microlens array. PMID:28387382

  7. Photothermal nanoparticles as molecular specificity agents in interferometric phase microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shaked, Natan T.

    2017-02-01

    I review our latest advances in wide-field interferometric imaging of biological cells with molecular specificity, obtained by time-modulated photothermal excitation of gold nanoparticles. Heat emitted from the nanoparticles affects the measured phase signal through changes in both the refractive index of the nanoparticle surroundings and the sample thickness. These nanoparticles can be bio-functionalized to bind specific biological cell components; thus, they can be used for biomedical imaging with molecular specificity, as new nanoscopy labels, and for photothermal therapy. Predicting the ideal nanoparticle parameters requires a model that computes the thermal and phase distributions around the particle, enabling more efficient phase imaging of plasmonic nanoparticles and sparing trial-and-error experiments with unsuitable nanoparticles. We thus developed a new model for predicting phase signatures from photothermal nanoparticles with arbitrary parameters. We also present a dual-modality technique, based on wide-field photothermal interferometric phase imaging with simultaneous ablation, to selectively deplete specific cell populations labelled by plasmonic nanoparticles. We experimentally demonstrated the ability to detect and specifically ablate in vitro cancer cells over-expressing epidermal growth factor receptors (EGFRs), labelled with plasmonic nanoparticles, in the presence of either EGFR-under-expressing cancer cells or white blood cells. This demonstration established an initial model for the depletion of circulating tumour cells in blood. The proposed system can image in wide field the label-free quantitative phase profile together with the photothermal phase profile of the sample, and provides both detection and ablation of chosen cells after their selective imaging.

  8. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.
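
    The refocusing that plenoptic data enable is compact enough to sketch: each sub-aperture view extracted from behind the microlens array is translated in proportion to its aperture coordinate and the views are averaged. The code below illustrates that shift-and-sum principle generically; view extraction, calibration, and the Lytro file format are outside its scope, and all names and units are assumptions.

      import numpy as np
      from scipy.ndimage import shift as nd_shift

      def refocus(subaperture_views, uv_coords, alpha):
          # subaperture_views: list of 2-D arrays; uv_coords: matching list
          # of (u, v) aperture offsets in pixels. alpha selects the virtual
          # focal plane; each view is shifted by (1 - 1/alpha) * (v, u).
          acc = np.zeros_like(subaperture_views[0], dtype=float)
          s = 1.0 - 1.0 / alpha
          for view, (u, v) in zip(subaperture_views, uv_coords):
              acc += nd_shift(view.astype(float), (s * v, s * u), order=1)
          return acc / len(subaperture_views)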

  9. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed for diagnosing camera performance, for arriving at a preflight imaging strategy, and for revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing-site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  10. Contrast computation methods for interferometric measurement of sensor modulation transfer function

    NASA Astrophysics Data System (ADS)

    Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio

    2018-01-01

    Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
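
    For an ideal image with vertical fringes, the two contrast definitions compared in the paper reduce to the short computations below; the paper's exact definitions may differ in details such as windowing and phase alignment, so this is only a schematic comparison. For a pure cosine profile I = A(1 + m cos(2 pi f0 x)), both functions return the modulation m.

      import numpy as np

      def michelson_contrast(fringe_image):
          # (Imax - Imin)/(Imax + Imin) of the column-averaged profile.
          p = fringe_image.mean(axis=0)
          return (p.max() - p.min()) / (p.max() + p.min())

      def fourier_contrast(fringe_image):
          # Ratio of the fundamental fringe harmonic to the DC term,
          # 2|F(f0)|/|F(0)|; less sensitive to isolated noise spikes than
          # the min/max used by Michelson contrast.
          p = fringe_image.mean(axis=0)
          spectrum = np.abs(np.fft.rfft(p))
          f0 = 1 + int(np.argmax(spectrum[1:]))  # strongest non-DC peak
          return 2.0 * spectrum[f0] / spectrum[0]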

  11. Full-field speckle interferometry for non-contact photoacoustic tomography.

    PubMed

    Horstmann, Jens; Spahr, Hendrik; Buj, Christian; Münter, Michael; Brinkmann, Ralf

    2015-05-21

    A full-field speckle interferometry method for non-contact and prospectively high-speed photoacoustic tomography is introduced and evaluated as a proof of concept. Thermoelastic-pressure-induced changes of the object's topography are acquired in a repetitive mode without any physical contact with the object. To obtain high acquisition speed, the object surface is illuminated by laser pulses and imaged onto a high-speed camera chip. In a repetitive triple-pulse mode, surface displacements can be acquired with nanometre sensitivity at an adjustable sampling rate of, e.g., 20 MHz, with a total acquisition time far below one second using kHz-repetition-rate lasers. Due to recurring interferometric referencing, the method is insensitive to thermal drift of the object caused by previous pulses or other motion. The size of the investigated area and the spatial and temporal resolution of the detection are scalable. In this study, the approach is validated by measuring a silicone phantom and a porcine skin phantom with embedded silicone absorbers. The reconstruction of the absorbers is presented in 2D and 3D. The sensitivity of the measurement with respect to photoacoustic detection is discussed. Potentially, photoacoustic imaging can be brought a step closer to non-anaesthetized in vivo imaging and to new medical applications that do not allow acoustic contact, such as neurosurgical monitoring or the investigation of burnt skin.

  12. Label-free high-throughput imaging flow cytometry

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, A.; Chen, C.; Niazi, K. R.; Rabizadeh, S.; Jalali, B.

    2014-03-01

    Flow cytometry is an optical method for studying cells based on their individual physical and chemical characteristics. It is widely used in clinical diagnosis, medical research, and biotechnology for analysis of blood cells and other cells in suspension. Conventional flow cytometers aim a laser beam at a stream of cells and measure the elastic scattering of light at forward and side angles. They also perform single-point measurements of fluorescent emissions from labeled cells. However, many reagents used in cell labeling reduce cellular viability or change the behavior of the target cells through the activation of undesired cellular processes or inhibition of normal cellular activity. Therefore, labeled cells are not completely representative of their unaltered form nor are they fully reliable for downstream studies. To remove the requirement of cell labeling in flow cytometry, while still meeting the classification sensitivity and specificity goals, measurement of additional biophysical parameters is essential. Here, we introduce an interferometric imaging flow cytometer based on the world's fastest continuous-time camera. Our system simultaneously measures cellular size, scattering, and protein concentration as supplementary biophysical parameters for label-free cell classification. It exploits the wide bandwidth of ultrafast laser pulses to perform blur-free quantitative phase and intensity imaging at flow speeds as high as 10 meters per second and achieves nanometer-scale optical path length resolution for precise measurements of cellular protein concentration.

  13. Analyze and predict VLTI observations: the Role of 2D/3D dust continuum radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Pascucci, I.; Henning, Th; Steinacker, J.; Wolf, S.

    2003-10-01

    Radiative Transfer (RT) codes with imaging capability are a fundamental tool for preparing interferometric observations and for interpreting visibility data. In view of the upcoming VLTI facilities, we present the first comparison of images/visibilities coming from two 3D codes that use completely different techniques to solve the problem of self-consistent continuum RT. In addition, we focus on the astrophysical case of a disk distorted by tidal interaction with passing stars or internal planets and investigate for which parameters the distortion can best be detected in the mid-infrared using the mid-infrared interferometric device MIDI.

  14. 2D/3D Dust Continuum Radiative Transfer Codes to Analyze and Predict VLTI Observations

    NASA Astrophysics Data System (ADS)

    Pascucci, I.; Henning, Th.; Steinacker, J.; Wolf, S.

    Radiative Transfer (RT) codes with imaging capability are a fundamental tool for preparing interferometric observations and for interpreting visibility data. In view of the upcoming VLTI facilities, we present the first comparison of images/visibilities coming from two 3D codes that use completely different techniques to solve the problem of self-consistent continuum RT. In addition, we focus on the astrophysical case of a disk distorted by tidal interaction with passing stars or internal planets and investigate for which parameters the distortion can best be detected in the mid-infrared using the mid-infrared interferometric device MIDI.

  15. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct the wavefields at other times through a random boundary avoids storing a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. Results obtained with the random boundary condition in poorly illuminated areas are more seriously affected by random scattering than other areas because of the lack of coverage. In this paper, we replace the direct correlation used for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be used directly in the gradient computation. In this process, we do not change the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.
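
    To make the role of the modified imaging condition concrete, the toy function below replaces the zero-lag cross-correlation of source and receiver wavefields with a Gaussian-weighted average over a window of time lags. It is a schematic stand-in for the paper's weighted average of extended imaging gathers; the wavefield layout, the lag window, and the Gaussian weights are illustrative assumptions.

      import numpy as np

      def weighted_extended_image(src_wf, rec_wf, max_lag, sigma):
          # src_wf, rec_wf: wavefields of shape (nt, nx, nz). The plain
          # imaging condition is the zero-lag term sum_t S(x,t) R(x,t);
          # here nearby time lags also contribute, with Gaussian weights.
          nt = src_wf.shape[0]
          lags = np.arange(-max_lag, max_lag + 1)
          weights = np.exp(-0.5 * (lags / sigma) ** 2)
          weights /= weights.sum()
          image = np.zeros(src_wf.shape[1:])
          for w, lag in zip(weights, lags):
              t_src = slice(max(0, -lag), nt - max(0, lag))
              t_rec = slice(max(0, lag), nt - max(0, -lag))
              image += w * np.sum(src_wf[t_src] * rec_wf[t_rec], axis=0)
          return image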

  16. Space Radar Image of Long Valley, California

    NASA Image and Video Library

    1999-05-01

    An area near Long Valley, California, was mapped by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on April 13, 1994, during the first flight of the radar instrument, and on October 4, 1994, during the second flight. The orbital configurations of the two data sets were ideal for interferometric combination, that is, overlaying the data from one image onto a second image of the same area to create an elevation map and obtain estimates of topography. Once the topography is known, any radar-induced distortions can be removed and the radar data can be geometrically projected directly onto a standard map grid for use in a geographical information system. The 50 kilometer by 50 kilometer (31 miles by 31 miles) map shown here is derived entirely from SIR-C L-band radar (horizontally transmitted and received) results. The color shown in this image is produced from the interferometrically determined elevations, while the brightness is determined by the radar backscatter. The map is in Universal Transverse Mercator (UTM) coordinates. Elevation contour lines are shown every 50 meters (164 feet). Crowley Lake is the dark feature near the south edge of the map. The Adobe Valley in the north and the Long Valley in the south are separated by the Glass Mountain Ridge, which runs through the center of the image. The height accuracy of the interferometrically derived digital elevation model is estimated to be 20 meters (66 feet) in this image. http://photojournal.jpl.nasa.gov/catalog/PIA01749

  17. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct the wavefields at other times through a random boundary avoids storing a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. Results obtained with the random boundary condition in poorly illuminated areas are more seriously affected by random scattering than other areas because of the lack of coverage. In this paper, we replace the direct correlation used for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be used directly in the gradient computation. In this process, we do not change the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  18. Coastal Bathymetry Using Satellite Observation in Support of Intelligence Preparation of the Environment

    DTIC Science & Technology

    2011-09-01

    [Only report front matter is recoverable for this record: table-of-contents fragments and an acronym list including DEM (Digital Elevation Model), ENVI (Environment for Visualizing Images), HADR (Humanitarian and Disaster Relief), and IfSAR (Interferometric Synthetic Aperture Radar).]

  19. Simulating interfering fringe displacements by lateral shifts of a camera for educational purposes

    NASA Astrophysics Data System (ADS)

    Rivera-Ortega, Uriel

    2018-07-01

    In this manuscript we propose a simple method to emulate fringe displacements in a fringe pattern formed by the interference of two plane waves, using lateral shifts of a CMOS detector in a Twyman–Green interferometric setup, thereby avoiding unwanted vibrations and the need for specific and expensive devices. The simplicity of the proposed experimental setup allows it to be easily replicated and used for teaching or demonstration purposes, essentially for undergraduate students.
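
    The equivalence the method rests on, namely that a lateral detector shift of dx adds a fringe phase of 2*pi*dx/period, can be reproduced numerically in a few lines; the grid size and fringe period below are arbitrary choices.

      import numpy as np

      def two_wave_fringes(n=512, period_px=32.0, camera_shift_px=0.0):
          # Fringe pattern from two interfering plane waves; shifting the
          # detector by dx pixels is equivalent to adding the phase
          # 2*pi*dx/period, which replaces piston phase stepping.
          x = np.arange(n) + camera_shift_px
          phase = 2.0 * np.pi * x / period_px
          return np.tile(1.0 + np.cos(phase), (n, 1))

      # Four frames emulating pi/2 phase steps via quarter-period shifts:
      frames = [two_wave_fringes(camera_shift_px=k * 32.0 / 4.0)
                for k in range(4)]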

  20. Monitoring urban subsidence based on SAR interferometric point target analysis

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Gong, W.; Lu, Z.

    2009-01-01

    Interferometric point target analysis (IPTA) is one of the latest developments in radar interferometric processing. It analyzes the interferometric phases of individual point targets, which are discrete and exhibit temporally stable backscattering characteristics, in long temporal series of interferometric SAR images. This paper analyzes the interferometric phase model of point targets and then addresses two key issues within the IPTA process. First, a spatial searching method is proposed to unwrap the interferometric phase difference between two neighboring point targets. The height residual error and linear deformation rate of each point target can then be calculated once a global reference point with known height correction and deformation history is chosen. Second, a spatial-temporal filtering scheme is proposed to further separate the atmospheric phase and nonlinear deformation phase from the residual interferometric phase. Finally, an experiment with the developed IPTA methodology is conducted over the Suzhou urban area. In total, 38 ERS-1/2 SAR scenes are analyzed, and deformation information for 3,546 point targets over the period 1992-2002 is generated. The IPTA-derived deformation shows very good agreement with the published result, which demonstrates that the IPTA technique can be developed into an operational tool for mapping ground subsidence over urban areas.
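
    At a single point target with unwrapped phase differences, the two-parameter model analyzed in the paper (a linear deformation term plus a baseline-dependent height-residual term) reduces to ordinary least squares. The sketch below shows that fit; sign conventions, units, and geometry factors vary between sensors, and the argument names are assumptions.

      import numpy as np

      def solve_ipta_point(phase_unw, t_years, b_perp, wavelength,
                           r_slant, inc_angle):
          # phase_unw: unwrapped interferometric phases (radians) of one
          # point target; t_years: temporal baselines; b_perp:
          # perpendicular baselines. Assumed model:
          #   phase = (4 pi / lambda) * v * t
          #         + (4 pi B_perp / (lambda R sin(theta))) * dh
          a_defo = 4.0 * np.pi / wavelength * t_years
          a_topo = 4.0 * np.pi * b_perp / (wavelength * r_slant
                                           * np.sin(inc_angle))
          A = np.column_stack([a_defo, a_topo])
          (rate, dheight), *_ = np.linalg.lstsq(A, phase_unw, rcond=None)
          return rate, dheight  # linear rate (m/yr), height error (m)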

  1. [Advance in interferogram data processing technique].

    PubMed

    Jing, Juan-Juan; Xiangli, Bin; Lü, Qun-Bo; Huang, Min; Zhou, Jin-Song

    2011-04-01

    Fourier transform spectrometry is a novel information-acquisition technology that integrates the functions of imaging and spectroscopy, but the data the instrument acquires are the interference data of the target, an intermediate product that cannot be used directly, so data processing is essential for successful application of the interferometric data. In the present paper, data processing techniques are divided into two classes: general-purpose and special-type. First, advances in universal interferometric data processing techniques are introduced; then the special-type interferometric data extraction methods and processing techniques are illustrated according to the classification of Fourier transform spectroscopy. Finally, trends in interferogram data processing are discussed.
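
    The general-purpose part of such a processing chain is essentially apodization followed by a Fourier transform of the interferogram. A minimal sketch, assuming a symmetric, uniformly sampled interferogram and omitting the phase correction that real pipelines require:

      import numpy as np

      def interferogram_to_spectrum(igm, opd_step):
          # Remove the DC pedestal, apodize to limit ringing, and FFT to
          # obtain the spectrum versus wavenumber (cycles per unit OPD).
          igm = igm - igm.mean()
          igm = igm * np.hanning(igm.size)
          spectrum = np.abs(np.fft.rfft(igm))
          wavenumber = np.fft.rfftfreq(igm.size, d=opd_step)
          return wavenumber, spectrum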

  2. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.

  3. Imaging Schwarzschild multilayer X-ray microscope

    NASA Technical Reports Server (NTRS)

    Hoover, Richard B.; Baker, Phillip C.; Shealy, David L.; Core, David B.; Walker, Arthur B. C., Jr.; Barbee, Troy W., Jr.; Kerstetter, Ted

    1993-01-01

    We have designed, analyzed, fabricated, and tested Schwarzschild multilayer X-ray microscopes. These instruments use flow-polished Zerodur mirror substrates coated with multilayers optimized for maximum reflectivity at normal incidence at 135 Å. They are being developed as prototypes for the Water Window Imaging X-Ray Microscope. Ultrasmooth mirror sets of hemlite-grade sapphire have been fabricated and are now being coated with multilayers to reflect soft X-rays at 38 Å, within the biologically important 'water window'. In this paper, we discuss the fabrication of the microscope optics and structural components as well as the mounting of the optics and assembly of the microscopes. We also describe the optical alignment, interferometric and visible light testing of the microscopes, present interferometrically measured performance data, and provide the first results of optical imaging tests.

  4. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs show that the proposed sensor fusion configurations can achieve higher accuracies than the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher quality images and, accordingly, higher accuracy classification results.
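
    A common way to exploit a Pan plus colour/MS pair of this kind is ratio-based fusion. The sketch below implements a simple Brovey-style rule as one illustrative possibility, not the configuration-specific pipeline evaluated in the paper, and assumes the MS cube has already been upsampled and co-registered to the Pan grid.

      import numpy as np

      def brovey_pansharpen(ms, pan, eps=1e-6):
          # ms: multispectral cube of shape (bands, H, W), already
          # upsampled to the Pan grid; pan: panchromatic image (H, W).
          # Each band is rescaled by the ratio of Pan to MS intensity.
          intensity = ms.mean(axis=0)
          ratio = pan / (intensity + eps)
          return ms * ratio[None, :, :]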

  5. Wide area Hyperspectral Motion Imaging

    DTIC Science & Technology

    2017-02-03

    [Memo front matter: Lexington, Massachusetts 02420-9108, (781) 981-1343, 3 February 2017. From: Dr. Joseph Lin (joseph.lin@ll.mit.edu), Advanced Imager Technology. Subject: Wide-area Hyperspectral Motion Imaging.] Introduction: Wide-area motion imaging (WAMI) has received increased attention in... fielded imaging spectrometers use either dispersive or interferometric techniques. A dispersive spectrometer uses a grating or prism to disperse the...

  6. INSAR Images Hawaii Kilauea Volcano

    NASA Image and Video Library

    2011-03-10

    This satellite interferometric synthetic aperture radar image, generated using COSMO-SkyMed radar data, depicts the relative deformation of the Earth's surface at Kilauea between Feb. 11, 2011 and March 7, 2011, two days following the start of the current eruption.

  7. Sparse aperture endoscope

    DOEpatents

    Fitch, J.P.

    1999-07-06

    An endoscope is disclosed which reduces the volume needed by the imaging part, maintains the resolution of a wide-diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, a larger fraction of the volume can be used for non-imaging tools; this allows smaller incisions in surgical and diagnostic medical applications, producing less trauma to the patient, and allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber-optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing. 7 figs.

  8. Sparse aperture endoscope

    DOEpatents

    Fitch, Joseph P.

    1999-07-06

    An endoscope which reduces the volume needed by its imaging part, maintains the resolution of a wide-diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, a larger fraction of the volume can be used for non-imaging tools; this allows smaller incisions in surgical and diagnostic medical applications, producing less trauma to the patient, and allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber-optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing.

  9. Interferometric imaging using Si3N4 photonic integrated circuits for a SPIDER imager.

    PubMed

    Su, Tiehui; Liu, Guangyao; Badham, Katherine E; Thurman, Samuel T; Kendrick, Richard L; Duncan, Alan; Wuchenich, Danielle; Ogden, Chad; Chriqui, Guy; Feng, Shaoqi; Chun, Jaeyi; Lai, Weicheng; Yoo, S J B

    2018-05-14

    This paper reports the design, fabrication, and experimental demonstration of a silicon nitride photonic integrated circuit (PIC). The PIC is capable of conducting one-dimensional interferometric imaging with twelve baselines over λ = 1100-1600 nm. The PIC consists of twelve waveguide pairs, each leading to a multi-mode interferometer (MMI) that forms broadband interference fringes for each corresponding pair of waveguides. An 18-channel arrayed waveguide grating (AWG) then separates the combined signal into 18 signals of different wavelengths. A total of 103 sets of fringes are collected by the detector array at the output of the PIC. We keep the optical path difference (OPD) of each interferometer baseline to within 1 µm to maximize the visibility of the interference measurement. We also constructed a testbed that uses the PIC for two-dimensional complex visibility measurements with various targets. The experiment shows reconstructed images in good agreement with theoretical predictions.

  10. A compact LWIR imaging spectrometer with a variable gap Fabry-Perot interferometer

    NASA Astrophysics Data System (ADS)

    Zhang, Fang; Gao, Jiaobo; Wang, Nan; Zhao, Yujie; Zhang, Lei; Gao, Shan

    2017-02-01

    Fourier transform spectroscopy is a widely employed method for obtaining spectra, with applications ranging from the desktop to remote sensing. Long-wave infrared (LWIR) interferometric spectral imaging systems are typically bulky and heavy. In order to miniaturize and lighten the instrument, a new LWIR spectral imaging method based on a variable-gap Fabry-Perot (FP) interferometer is investigated. After analyzing the working principle of the system, we study theoretically how to determine the primary parameters, such as the reflectivity of the two interferometric cavity surfaces and the field of view (FOV) and f-number of the imaging lens. A prototype was developed, and a good experimental result with a CO2 laser was obtained. The research shows that besides high throughput and high spectral resolution, the advantage of miniaturization is simultaneously achieved with this method.

  11. Application of Nondestructive Testing Techniques to Materials Testing.

    DTIC Science & Technology

    1987-12-01

    ...microscopy gives little quantitative information on surface height. Nomarski differential... image the center plane of the Bragg cell to the back focal... In this case we can write... In a shot-noise-limited system, intensity measurements (mean-square noise current ⟨i²⟩ = 2qI₀B = 2q²ηPB/hν) can yield interferometric accuracies... comparable in sensitivity to phase-dependent interferometric techniques. The... thicknesses of photoresist films have been measured to...

  12. Remote sensing of ocean wave spectra by interferometric synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Marom, M.; Thornton, E. B.; Goldstein, R. M.; Shemer, L.

    1990-01-01

    Ocean surface waves can be clearly observed by SAR in the interferometric configuration (INSAR) due to the ability of INSAR to provide images of the local surface velocity field. It is shown here that INSAR can be used to obtain wavenumber spectra that are in agreement with power spectra measured in situ. This new method has considerable potential to provide instantaneous spatial information about the structure of ocean wave fields.

  13. Phase-Enhanced 3D Snapshot ISAR Imaging and Interferometric SAR

    DTIC Science & Technology

    2009-12-28

    The generalized technique requires the precession angle θp to be relatively small [see Eq. (28)]. However, the noncoherent snapshot image equations remain valid beyond this precession limit, and the unique sampling grid developed is still very useful for 3D imaging of the noncoherent snapshot equation.

  14. Three-Dimensional Localization of Single Molecules for Super-Resolution Imaging and Single-Particle Tracking

    PubMed Central

    von Diezmann, Alex; Shechtman, Yoav; Moerner, W. E.

    2017-01-01

    Single-molecule super-resolution fluorescence microscopy and single-particle tracking are two imaging modalities that illuminate the properties of cells and materials on spatial scales down to tens of nanometers, or with dynamical information about nanoscale particle motion in the millisecond range, respectively. These methods generally use wide-field microscopes and two-dimensional camera detectors to localize molecules to much higher precision than the diffraction limit. Given the limited total photons available from each single-molecule label, both modalities require careful mathematical analysis and image processing. Much more information can be obtained about the system under study by extending to three-dimensional (3D) single-molecule localization: without this capability, visualization of structures or motions extending in the axial direction can easily be missed or confused, compromising scientific understanding. A variety of methods for obtaining both 3D super-resolution images and 3D tracking information have been devised, each with their own strengths and weaknesses. These include imaging of multiple focal planes, point-spread-function engineering, and interferometric detection. These methods may be compared based on their ability to provide accurate and precise position information of single-molecule emitters with limited photons. To successfully apply and further develop these methods, it is essential to consider many practical concerns, including the effects of optical aberrations, field-dependence in the imaging system, fluorophore labeling density, and registration between different color channels. Selected examples of 3D super-resolution imaging and tracking are described for illustration from a variety of biological contexts and with a variety of methods, demonstrating the power of 3D localization for understanding complex systems. PMID:28151646

  15. Full-field dual-color 100-nm super-resolution imaging reveals organization and dynamics of mitochondrial and ER networks.

    PubMed

    Brunstein, Maia; Wicker, Kai; Hérault, Karine; Heintzmann, Rainer; Oheim, Martin

    2013-11-04

    Most structured illumination microscopes use a physical or synthetic grating that is projected into the sample plane to generate a periodic illumination pattern. Albeit simple and cost-effective, this arrangement hampers fast or multi-color acquisition, which is a critical requirement for time-lapse imaging of cellular and sub-cellular dynamics. In this study, we designed and implemented an interferometric approach allowing large-field, fast, dual-color imaging at an isotropic 100-nm resolution, based on a sub-diffraction fringe pattern generated by the interference of two colliding evanescent waves. Our all-mirror-based system generates illumination patterns of arbitrary orientation and period, limited only by the illumination aperture (NA = 1.45), the response time of a fast, piezo-driven tip-tilt mirror (10 ms), and the available fluorescence signal. At low µW laser powers suitable for long-term observation of live cells and with a camera exposure time of 20 ms, our system permits the acquisition of super-resolved 50 µm by 50 µm images at 3.3 Hz. The possibility it offers for rapidly adjusting the pattern between images is particularly advantageous for experiments that require multi-scale and multi-color information. We demonstrate the performance of our instrument by imaging mitochondrial dynamics in cultured cortical astrocytes. As an illustration of dual-color excitation and dual-color detection, we also resolve interaction sites between near-membrane mitochondria and the endoplasmic reticulum. Our TIRF-SIM microscope provides a versatile, compact, and cost-effective arrangement for super-resolution imaging, allowing the investigation of co-localization and dynamic interactions between organelles, important questions in both cell biology and neurophysiology.

  16. Simultaneous Water Vapor and Dry Air Optical Path Length Measurements and Compensation with the Large Binocular Telescope Interferometer

    NASA Technical Reports Server (NTRS)

    Defrere, D.; Hinz, P.; Downey, E.; Boehm, M.; Danchi, W. C.; Durney, O.; Ertel, S.; Hill, J. M.; Hoffmann, W. F.; Mennesson, B.; hide

    2016-01-01

    The Large Binocular Telescope Interferometer uses a near-infrared camera to measure the optical path length variations between the two AO-corrected apertures and provide high-angular resolution observations for all its science channels (1.5-13 microns). There is, however, a wavelength-dependent component to the atmospheric turbulence, which can introduce optical path length errors when observing at a wavelength different from that of the fringe sensing camera. Water vapor in particular is highly dispersive, and its effect must be taken into account for high-precision infrared interferometric observations, as described previously for VLTI/MIDI or the Keck Interferometer Nuller. In this paper, we describe the new sensing approach that has been developed at the LBT to measure and monitor the optical path length fluctuations due to dry air and water vapor separately. After reviewing the current performance of the system for dry air seeing compensation, we present simultaneous H-, K-, and N-band observations that illustrate the feasibility of our feed-forward approach to stabilize the path length fluctuations seen by the LBTI nuller.

  17. Building the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2012-01-01

    The James Webb Space Telescope is the scientific successor to the Hubble and Spitzer Space Telescopes. It will be a large (6.6 m), cold (50 K) telescope launched into orbit around the second Earth-Sun Lagrange point. It is a partnership of NASA with the European and Canadian Space Agencies. JWST will make progress in almost every area of astronomy, from the first galaxies to form in the early universe to exoplanets and Solar System objects. Webb will have four instruments: the Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Near-Infrared Imager and Slitless Spectrograph will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. The observatory is confirmed for launch in 2018; the design is complete and it is in its construction phase. Innovations that make JWST possible include large-area low-noise infrared detectors, cryogenic ASICs, a MEMS micro-shutter array providing multi-object spectroscopy, a non-redundant mask for interferometric coronagraphy, and diffraction-limited segmented beryllium mirrors with active wavefront sensing and control. Recent progress includes the completion of the mirrors, the delivery of the first flight instruments, and the start of the integration and test phase.

  18. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  19. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  20. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  1. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

    It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel counts. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy, and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either by direct measurement of subjective quality or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.

  2. Interferometrically enhanced sub-terahertz picosecond imaging utilizing a miniature collapsing-field-domain source

    NASA Astrophysics Data System (ADS)

    Vainshtein, Sergey N.; Duan, Guoyong; Mikhnev, Valeri A.; Zemlyakov, Valery E.; Egorkin, Vladimir I.; Kalyuzhnyy, Nikolay A.; Maleev, Nikolai A.; Näpänkangas, Juha; Sequeiros, Roberto Blanco; Kostamovaara, Juha T.

    2018-05-01

    Progress in terahertz spectroscopy and imaging is mostly associated with femtosecond laser-driven systems, while solid-state sources, mainly sub-millimetre integrated circuits, are still in an early development phase. As simple and cost-efficient an emitter as a Gunn oscillator could cause a breakthrough in the field, provided its frequency limitations could be overcome. Proposed here is an application of the recently discovered collapsing field domains effect that permits sub-THz oscillations in sub-micron semiconductor layers thanks to nanometer-scale powerfully ionizing domains arising due to negative differential mobility in extreme fields. This shifts the frequency limit by an order of magnitude relative to the conventional Gunn effect. Our first miniature picosecond pulsed sources cover the 100-200 GHz band and promise milliwatts up to ~500 GHz. Thanks to the method of interferometrically enhanced time-domain imaging proposed here and the low single-shot jitter of ~1 ps, our simple imaging system provides sufficient time-domain imaging contrast for fresh-tissue terahertz histology.

  3. MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?

    PubMed

    Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence

    2017-09-01

    Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel, and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing, with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to embed such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising; because different embedded denoisers display method-specific artifacts, those artifacts can be identified and discounted by comparing results across denoisers.
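
    For illustration only, a single-channel homomorphic sketch of the core MuLoG idea: a log transform makes multiplicative speckle approximately additive, so that an off-the-shelf Gaussian-noise denoiser can be plugged in. The full method operates on multi-channel covariance data inside an ADMM loop, which is omitted here; the TV denoiser is just one possible plug-in.

    ```python
    import numpy as np
    from scipy.special import psi
    from skimage.restoration import denoise_tv_chambolle

    def homomorphic_despeckle(intensity, looks=1, weight=0.3):
        """Log-transform SAR intensity so gamma speckle becomes (roughly)
        additive, debias (E[log s] = psi(L) - log L for L-look speckle),
        apply a plug-in Gaussian denoiser, and map back."""
        log_im = np.log(np.maximum(intensity, 1e-10))
        log_im -= psi(looks) - np.log(looks)   # remove log-speckle bias
        return np.exp(denoise_tv_chambolle(log_im, weight=weight))

    # Toy usage: a bright square corrupted by single-look exponential speckle.
    rng = np.random.default_rng(0)
    clean = np.ones((128, 128)); clean[32:96, 32:96] = 4.0
    restored = homomorphic_despeckle(clean * rng.exponential(1.0, clean.shape))
    ```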

  4. Interferometric imaging of the 2011-2013 Campi Flegrei unrest

    NASA Astrophysics Data System (ADS)

    De Siena, Luca; Nakahara, Hisashi; Zaccarelli, Lucia; Sammarco, Carmelo; La Rocca, Mario; Bianco, Francesca

    2017-04-01

    After its 1983-84 seismic and deformation crisis, seismologists have recorded very low and clustered seismicity at Campi Flegrei caldera (Italy). Hence, noise interferometry imaging has become the only option for imaging the present volcanological state of the volcano. Three-component noise data recorded before, during, and after Campi Flegrei's last deformation and geochemical unrest (2011-2013) have thus been processed with an up-to-date interferometric imaging workflow based on MSNoise. Noise anisotropy, which strongly affects measurements throughout the caldera at all frequencies, has been accounted for by self-correlation measurements and smoothed by phase-weighted stacking and phase-match filtering. The final group-velocity maps show strong low-velocity anomalies at the location of the last Campi Flegrei eruption (1538 A.D.). The main low-velocity anomalies contour Solfatara volcano and follow geomorphological cross-faulting. The comparison with geophysical imaging results obtained during the last seismic unrest at the caldera suggests strong changes in the physical properties of the volcano, particularly in the area of major hydrogeological hazard.
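
    The paper's processing relies on MSNoise; purely as a generic illustration of the underlying operation, the sketch below cross-correlates windows of two synchronous noise records (with one-bit normalization, a common pre-processing step) and stacks them, which is how travel-time information between two stations is extracted from ambient noise:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def noise_crosscorrelation(tr_a, tr_b, nwin):
        """Split two synchronous noise records into windows, one-bit
        normalize, cross-correlate each window, and stack. The stacked
        correlation approximates the Green's function between stations."""
        n = (len(tr_a) // nwin) * nwin
        stack = None
        for a, b in zip(tr_a[:n].reshape(-1, nwin), tr_b[:n].reshape(-1, nwin)):
            a, b = np.sign(a), np.sign(b)      # one-bit normalization
            cc = fftconvolve(a, b[::-1], mode="full")
            stack = cc if stack is None else stack + cc
        return stack / (n // nwin)

    # Synthetic test: a shared noise field arriving 40 samples later at B.
    rng = np.random.default_rng(1)
    common = rng.standard_normal(60000)
    tr_a = common + 0.5 * rng.standard_normal(60000)
    tr_b = np.roll(common, 40) + 0.5 * rng.standard_normal(60000)
    cc = noise_crosscorrelation(tr_a, tr_b, nwin=6000)
    print("peak lag:", np.argmax(cc) - (6000 - 1))  # ~ -40: tr_b lags tr_a
    ```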

  5. Radar image and data fusion for natural hazards characterisation

    USGS Publications Warehouse

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Jixian; Zhang, Yonghong

    2010-01-01

    Fusion of synthetic aperture radar (SAR) images through interferometric, polarimetric and tomographic processing provides an all-weather imaging capability to characterise and monitor various natural hazards. This article outlines interferometric synthetic aperture radar (InSAR) processing and products and their utility for natural hazards characterisation, provides an overview of the techniques and applications related to fusion of SAR/InSAR images with optical and other images and highlights the emerging SAR fusion technologies. In addition to providing precise land-surface digital elevation maps, SAR-derived imaging products can map millimetre-scale elevation changes driven by volcanic, seismic and hydrogeologic processes, by landslides and wildfires and other natural hazards. With products derived from the fusion of SAR and other images, scientists can monitor the progress of flooding, estimate water storage changes in wetlands for improved hydrological modelling predictions and assessments of future flood impacts and map vegetation structure on a global scale and monitor its changes due to such processes as fire, volcanic eruption and deforestation. With the availability of SAR images in near real-time from multiple satellites in the near future, the fusion of SAR images with other images and data is playing an increasingly important role in understanding and forecasting natural hazards.

  6. Application of Polarimetric-Interferometric Phase Coherence Optimization (PIPCO) Procedure to SIR-C/X-SAR Tien-Shan Tracks 122.20(94 Oct. 08)/154.20(94 Oct. 09) Repeat-Orbit C/L-Band Pol-D-InSAR Imag

    NASA Technical Reports Server (NTRS)

    Boerner, W. M.; Mott, H.; Verdi, J.; Darizhapov, D.; Dorjiev, B.; Tsybjito, T.; Korsunov, V.; Tatchkov, G.; Bashkuyev, Y.; Cloude, S.

    1998-01-01

    During the past decade, Radar Polarimetry has established itself as a mature science and advanced technology in high resolution POL-SAR imaging, image target characterization and selective image feature extraction.

  7. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high-resolution optical satellites (HROS). Using two cameras attached to a high-rigidity support, together with push broom imaging, is one method to enlarge the field of view while preserving high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big-virtual-camera coordinate system using forward projection and backward projection to obtain the corresponding single virtual image (a toy sketch of this reprojection follows below). After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinates on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain both a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining geometric accuracy.
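
    A toy sketch of the forward/backward projection step, with simple pinhole models {K, R, t} standing in for the rigorous push-broom imaging model and a flat-terrain assumption standing in for the real elevation surface; all names are illustrative:

    ```python
    import numpy as np

    def reproject_to_virtual(pix, cam, vcam, h=0.0):
        """Forward-project a pixel from a real camera to the ground
        (assumed flat at height h), then backward-project it into the
        big virtual camera. Cameras are toy pinhole models {K, R, t}."""
        K, R, t = cam["K"], cam["R"], cam["t"]
        ray = R.T @ np.linalg.inv(K) @ np.array([pix[0], pix[1], 1.0])
        C = -R.T @ t                    # camera center in world frame
        s = (h - C[2]) / ray[2]         # intersect ray with plane z = h
        X = C + s * ray                 # ground point
        Kv, Rv, tv = vcam["K"], vcam["R"], vcam["t"]
        x = Kv @ (Rv @ X + tv)          # backward projection
        return x[:2] / x[2]             # pixel in the virtual camera
    ```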

  8. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture was made with a specific camera. We have looked into different methods of examining cameras to determine whether a specific image was made with a given camera: defects in CCDs, the file formats used, noise introduced by the pixel arrays, and watermarking applied in images by the camera manufacturer.
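
    The "noise introduced by the pixel arrays" cue is, in later forensic practice, exploited as a PRNU-style sensor fingerprint. A hedged sketch of that idea (not the authors' 2001 procedure), with a Gaussian filter as a stand-in denoiser:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img, sigma=1.0):
        """Noise residual = image minus its denoised version; PRNU-style
        sensor noise survives in this residual."""
        return img - gaussian_filter(img, sigma)

    def fingerprint(images):
        """Average residuals from many flat-ish images of one camera to
        estimate its pixel-nonuniformity fingerprint."""
        return np.mean([noise_residual(im) for im in images], axis=0)

    def match_score(img, fp):
        """Normalized correlation between a query image's residual and a
        camera fingerprint; a high score suggests the same sensor."""
        r = noise_residual(img)
        r, f = r - r.mean(), fp - fp.mean()
        return float((r * f).sum() / np.sqrt((r**2).sum() * (f**2).sum()))
    ```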

  9. Direct Interferometric Imaging with IOTA Interferometer: Morphology of the Water Shell around U Ori

    NASA Astrophysics Data System (ADS)

    Pluzhnik, Eugene; Ragland, S.; Le Coroller, H.; Cotton, W.; Danchi, W.; Traub, W.; Willson, L.

    2007-12-01

    Optical interferometric observations with adequate resolution using the 3-telescope Infrared Optical Telescope Array (IOTA) interferometer have shown detectable asymmetry in several Mira stars, and several mechanisms have been proposed to explain the observed asymmetry. In this paper, we present subsequent IOTA observations of one Mira star, U Ori, taken at 1.51, 1.64 and 1.78 μm in 2005. Reconstructed images based on a model-independent algorithm are also presented. These images show asymmetric structures of the water shell that are similar to the structure of the 22 GHz masers obtained by Vlemmings et al. in 2003. We explore the possibility of detecting molecular shell rotation with a period of about 30 years by comparing our results with radio observations, and discuss a possible geometric structure of the shell.

  10. Imaging with New Classic and Vision at the NPOI

    NASA Astrophysics Data System (ADS)

    Jorgensen, Anders

    2018-04-01

    The Navy Precision Optical Interferometer (NPOI) is unique among interferometric observatories for its ability to position telescopes in an equally-spaced array configuration. This configuration is optimal for interferometric imaging because it allows the use of bootstrapping to track fringes on long baselines with signal-to-noise ratios less than one. When combined with coherent integration techniques, this can produce visibilities with acceptable SNR on baselines long enough to resolve features on the surfaces of stars. The stellar surface imaging project at NPOI combines the bootstrapping configuration of the NPOI array, real-time fringe tracking, baseline and wavelength bootstrapping, and Earth rotation to provide dense coverage of the UV plane at a wide range of spatial frequencies. In this presentation, we provide an overview of the project and an update on its latest status and results.

  11. Interferometry in the era of time-domain astronomy

    NASA Astrophysics Data System (ADS)

    Schaefer, Gail H.; Cassan, Arnaud; Gallenne, Alexandre; Roettenbacher, Rachael M.; Schneider, Jean

    2018-04-01

    The physical nature of time variable objects is often inferred from photometric light-curves and spectroscopic variations. Long-baseline optical interferometry has the power to resolve the spatial structure of time variable sources directly in order to measure their physical properties and test the physics of the underlying models. Recent interferometric studies of variable objects include measuring the angular expansion and spatial structure during the early stages of novae outbursts, studying the transits and tidal distortions of the components in eclipsing and interacting binaries, measuring the radial pulsations in Cepheid variables, monitoring changes in the circumstellar discs around rapidly rotating massive stars, and imaging starspots. Future applications include measuring the image size and centroid displacements in gravitational microlensing events, and imaging the transits of exoplanets. Ongoing and upcoming photometric surveys will dramatically increase the number of time-variable objects detected each year, providing many potential targets to observe interferometrically. For short-lived transient events, it is critical for interferometric arrays to have the flexibility to respond rapidly to targets of opportunity and optimize the selection of baselines and beam combiners to provide the necessary resolution and sensitivity to resolve the source as its brightness and size change. We discuss the science opportunities made possible by resolving variable sources using long baseline optical interferometry.

  12. In situ electrochemical digital holographic microscopy; a study of metal electrodeposition in deep eutectic solvents.

    PubMed

    Abbott, Andrew P; Azam, Muhammad; Ryder, Karl S; Saleem, Saima

    2013-07-16

    This study has shown for the first time that digital holographic microscopy (DHM) can be used as a new analytical tool in the analysis of kinetic mechanisms and growth during electrolytic deposition processes. Unlike many established electrochemical microscopy methods such as probe microscopy, DHM is both noninvasive and noncontact; its holographic imaging allows observations and measurements to be made remotely. DHM also provides interferometric resolution (nanometer vertical scale) with a very short acquisition time. It is a surface metrology technique that enables the retrieval of information about a 3D structure from the phase contrast of a single hologram acquired using a conventional digital camera. Here DHM has been applied to investigate directly the electro-crystallization of a metal on a substrate in real time (in situ) from two deep eutectic solvent (DES) systems based on mixtures of choline chloride and either urea or ethylene glycol. We show, using electrochemical DHM, that the nucleation and growth of silver deposits in these systems are quite distinct and strongly influenced by the hydrogen bond donor of the DES.
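
    A minimal sketch of the single-hologram phase reconstruction underlying off-axis DHM: select one interference sideband in the Fourier domain, re-center it, and inverse-transform; the carrier location and crop radius are assumed known, and instrument-specific details such as aberration compensation are omitted:

    ```python
    import numpy as np

    def reconstruct_phase(hologram, carrier_row, carrier_col, radius):
        """Isolate one off-axis sideband in the 2-D Fourier domain,
        shift it to the spectrum center (removing the carrier), and
        inverse-transform; the argument is the wrapped phase map."""
        F = np.fft.fftshift(np.fft.fft2(hologram))
        rows, cols = F.shape
        rr, cc = np.ogrid[:rows, :cols]
        mask = (rr - carrier_row)**2 + (cc - carrier_col)**2 <= radius**2
        sideband = np.where(mask, F, 0)
        sideband = np.roll(sideband,
                           (rows // 2 - carrier_row, cols // 2 - carrier_col),
                           axis=(0, 1))
        field = np.fft.ifft2(np.fft.ifftshift(sideband))
        return np.angle(field)   # wrapped phase; unwrap before use
    ```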

  13. VizieR Online Data Catalog: NGC1068 interferometric mid-IR measurements (Lopez-Gonzaga+, 2017)

    NASA Astrophysics Data System (ADS)

    Lopez-Gonzaga, N.; Asmus, D.; Bauer, F. E.; Tristram, K. R. W.; Burtscher, L.; Marinucci, A.; Matt, G.; Harrison, F. A.

    2017-06-01

    Single-aperture mid-infrared images and spectra were taken with the VLT spectrometer and imager for the mid-infrared (VISIR). Interferometric measurements were obtained with the instrument MIDI at ESO's VLTI facility. Observations with intermediate AT baselines were requested and carried out during the nights of January 10, 20, and 23, 2015, using Director's discretionary time (DDT). We additionally included published and unpublished interferometric data from our previous campaigns, with the requirement that they were observed (nearly) contemporaneously with the period of X-ray variation or a few years before. These include observations taken on the nights of September 21, 26, and 30, 2014, and November 17, 2014, using Guaranteed Time Observations (GTO). For our observations we used the low-resolution NaCl prism with spectral resolution R=λ/Δλ~30 to disperse the light of the beams. A log of the observations and instrument setup can be found in Appendix A. The published data were taken from Lopez-Gonzaga et al. (2014A&A...565A..71L, Cat. J/A+A/565/A71). (2 data files).

  14. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

    Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications, owing to its easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage, based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane is more concentrated than in the reconstructed beam itself, the requirement on the diffraction efficiency of the recording medium is relaxed, which significantly improves the effective dynamic range of the recording medium. The phase retrieval requires only 10 iterations to achieve a phase data error rate below 5%, which we demonstrate experimentally by recording and reconstructing a test image. We believe our method will further advance holographic data storage in the era of big data.
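
    An illustrative sketch of this kind of iterative Fourier-transform phase retrieval, alternating the measured Fourier magnitude with object-domain constraints (unit amplitude, 4-level phase quantization, and the known embedded data); the variable names and constraint order are assumptions, not the paper's exact algorithm:

    ```python
    import numpy as np

    def retrieve_phase(fourier_mag, known, known_mask, levels=4, n_iter=10):
        """Recover a pure phase page from a single Fourier-plane magnitude
        measurement, using a known embedded portion of the data page."""
        phases = np.exp(2j * np.pi * np.arange(levels) / levels)
        field = np.exp(2j * np.pi * np.random.rand(*fourier_mag.shape))
        for _ in range(n_iter):
            F = np.fft.fft2(field)
            F = fourier_mag * np.exp(1j * np.angle(F))  # impose magnitude
            g = np.fft.ifft2(F)
            # Object-domain constraints: snap each pixel to the nearest of
            # the 4 phase levels, then re-impose the known embedded data.
            idx = np.argmin(np.abs(g[..., None] - phases), axis=-1)
            field = phases[idx]
            field[known_mask] = known[known_mask]
        return field
    ```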

  15. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images in the white image are detected and recognized automatically using digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with microlenses of different sizes. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method requires less prior knowledge and manual intervention than previously published methods.
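
    A hedged sketch of the first, morphological step on the white image (thresholding, opening, connected-component labeling, centroid extraction); the grid re-ordering and the multifocus handling described in the paper are not reproduced:

    ```python
    import numpy as np
    from scipy import ndimage

    def find_microlens_centers(white_image, thresh=None):
        """Threshold the white image so each microlens image becomes a
        bright blob, clean it with binary opening, label connected
        components, and return their intensity-weighted centroids."""
        if thresh is None:
            thresh = 0.5 * white_image.max()
        blobs = white_image > thresh
        blobs = ndimage.binary_opening(blobs, iterations=2)  # drop specks
        labels, n = ndimage.label(blobs)
        centers = ndimage.center_of_mass(white_image, labels,
                                         range(1, n + 1))
        return np.array(centers)   # (row, col) centroid per microlens
    ```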

  16. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to perform lifetime measurements using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and better image quality, and provides a new opportunity for performing frequency-domain FLIM.
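
    The frequency-domain FLIM relations behind such a camera are compact enough to state directly; under a mono-exponential decay the phase and modulation lifetimes agree, and the numbers below use an assumed 4 ns fluorophore at a 40 MHz modulation frequency:

    ```python
    import numpy as np

    def fd_flim_lifetimes(phi, m, f_mod):
        """From the measured phase shift phi (radians) and modulation
        depth m at modulation frequency f_mod (Hz), a single-exponential
        decay gives two lifetime estimates: tau_phase = tan(phi)/omega and
        tau_mod = sqrt(1/m^2 - 1)/omega, equal for a mono-exponential."""
        omega = 2 * np.pi * f_mod
        tau_phase = np.tan(phi) / omega
        tau_mod = np.sqrt(np.maximum(1.0 / m**2 - 1.0, 0.0)) / omega
        return tau_phase, tau_mod

    # Fluorescein-like 4 ns decay at 40 MHz: phi ~ 0.787 rad, m ~ 0.705.
    tp, tm = fd_flim_lifetimes(phi=0.787, m=0.705, f_mod=40e6)
    print(f"tau_phase = {tp*1e9:.2f} ns, tau_mod = {tm*1e9:.2f} ns")
    ```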

  17. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely based upon the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, the authenticity of the image file is established if they match, since even a one-bit change in the image file will cause its computed hash to be totally different from the secure hash.
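
    A minimal modern equivalent of the patent's hash-and-sign scheme, sketched with RSA-PSS from the Python cryptography package; the key size, padding, and stand-in image bytes are illustrative choices, not the patent's specification:

    ```python
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The camera holds the private key; anyone with the public key can
    # later verify the image file has not been altered since capture.
    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()

    image_file = b"...raw image bytes..."   # stand-in for the captured file

    # In-camera: hash the image file and sign the hash.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(image_file, pss, hashes.SHA256())

    # Authentication: verification raises InvalidSignature on any change.
    try:
        public_key.verify(signature, image_file, pss, hashes.SHA256())
        print("image authentic")
    except InvalidSignature:
        print("image altered")
    ```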

  18. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array; the lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  19. Ring Image Analyzer

    NASA Technical Reports Server (NTRS)

    Strekalov, Dmitry V.

    2012-01-01

    Ring Image Analyzer software analyzes images to recognize elliptical patterns. It determines the ellipse parameters (axis ratio, centroid coordinates, tilt angle). The program attempts to recognize elliptical fringes (e.g., Newton rings) on a photograph and determine their centroid position, the short-to-long-axis ratio, and the angle of rotation of the long axis relative to the horizontal direction of the photograph. These capabilities are important in interferometric imaging and the control of surfaces. In particular, this program has been developed and applied for determining the rim shape of precision-machined optical whispering gallery mode resonators. The program relies on an image recognition algorithm aimed at recognizing elliptical shapes, but can be easily adapted to other geometric shapes; it is robust against non-elliptical details of the image and against noise. Interferometric analysis of precision-machined surfaces remains an important technological instrument in hardware development and quality analysis, and this software automates and increases the accuracy of the technique. The software was developed for the needs of an R&TD-funded project and has become an important asset for future research proposals to NASA as well as other agencies.
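
    A sketch of the core operation such a tool performs, using the direct least-squares ellipse fit of Fitzgibbon et al.; this is a standard algorithm standing in for the program's own (unpublished) recognition method:

    ```python
    import numpy as np

    def fit_ellipse(x, y):
        """Direct least-squares ellipse fit: solve the constrained
        eigenproblem for the conic a x^2 + b xy + c y^2 + d x + e y + f = 0
        subject to 4ac - b^2 = 1, given edge points (x, y) of one fringe.
        Centroid, axis ratio, and tilt follow from the coefficients by
        standard conic algebra."""
        D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        S = D.T @ D                     # scatter matrix
        C = np.zeros((6, 6))            # constraint matrix for 4ac-b^2=1
        C[0, 2] = C[2, 0] = 2.0
        C[1, 1] = -1.0
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S, C))
        # The valid ellipse solution is the unique positive eigenvalue.
        k = np.argmax(eigvals.real)
        return eigvecs[:, k].real       # conic coefficients (a..f)
    ```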

  20. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two experiments were conducted: one to evaluate the performance of the ultrahigh-sensitivity camera, and the other to test the newly developed sensor's performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and endoscopic images were taken at each setting for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera displayed the targets regardless of the camera-target distance under low illumination; under high illumination, image quality from the two cameras was very similar. In the second experiment, as a fluorescence endoscope, the CMOS camera clearly showed the fluorescent-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescent images under high illumination, in the field of laparoscopic surgery.

  1. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics for, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. In real-world applications, however, these approaches face many challenges: the large sets of multimedia data publicly available through photo sharing and social network sites are captured under uncontrolled conditions and have undergone a variety of hardware and software post-processing operations. Moreover, the legal system only accepts forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. We attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively within the domain of image forensics, presenting a classification of ongoing developments in the area. The classification of existing source camera attribution approaches is based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. More recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and categorised into four classes: optical aberrations based, sensor camera fingerprints based, processing statistics based, and processing regularities based. Furthermore, this paper investigates the challenging problems and proposed strategies of such schemes, based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to subjective optimisation criteria over the last decade. The optimisation criteria were determined from the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution.

  2. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display on each monitor; and whether the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  3. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, one or more television cameras, and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display on each monitor; and whether the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to the coordinates of a selected or nonselected camera.

  4. The 1974 NASA-ASEE summer faculty fellowship aeronautics and space research program

    NASA Technical Reports Server (NTRS)

    Obrien, J. F., Jr.; Jones, C. O.; Barfield, B. F.

    1974-01-01

    Research activities by participants in the fellowship program are documented, and include such topics as: (1) multispectral imagery for detecting southern pine beetle infestations; (2) trajectory optimization techniques for low thrust vehicles; (3) concentration characteristics of a fresnel solar strip reflection concentrator; (4) calibration and reduction of video camera data; (5) fracture mechanics of Cer-Vit glass-ceramic; (6) space shuttle external propellant tank prelaunch heat transfer; (7) holographic interferometric fringes; and (8) atmospheric wind and stress profiles in a two-dimensional internal boundary layer.

  5. Self-referenced interferometer for cylindrical surfaces.

    PubMed

    Šarbort, Martin; Řeřucha, Šimon; Holá, Miroslava; Buchta, Zdeněk; Lazar, Josef

    2015-11-20

    We present a new interferometric method for shape measurement of hollow cylindrical tubes. We propose a simple and robust self-referenced interferometer where the reference and object waves are represented by the central and peripheral parts, respectively, of the conical wave generated by a single axicon lens. The interferogram detected by a digital camera is characterized by a closed-fringe pattern with a circular carrier. The interference phase is demodulated using spatial synchronous detection. The capabilities of the interferometer are experimentally tested for various hollow cylindrical tubes with lengths up to 600 mm.
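
    A sketch of spatial synchronous detection for a fringe pattern, here with a linear carrier for simplicity (the paper's circular carrier would use a radial coordinate instead); the carrier frequency and filter width are assumed known:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spatial_synchronous_phase(interferogram, f0, axis=1, sigma=8):
        """Multiply the fringe pattern by a complex reference at the
        carrier frequency f0 (cycles/pixel along `axis`), low-pass filter
        to keep the baseband term, and take the argument as the wrapped
        interference phase."""
        coords = np.arange(interferogram.shape[axis])
        shape = [1, 1]; shape[axis] = -1
        ref = np.exp(-2j * np.pi * f0 * coords).reshape(shape)
        analytic = interferogram * ref
        baseband = (gaussian_filter(analytic.real, sigma)
                    + 1j * gaussian_filter(analytic.imag, sigma))
        return np.angle(baseband)
    ```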

  6. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  7. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
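
    A hedged sketch of the descriptor-based alignment the paper describes, using ORB features and a RANSAC homography via OpenCV; the paper matches descriptors on radiant-power images from calibrated cameras, whereas this toy version takes whatever exposure-normalized images it is given:

    ```python
    import cv2
    import numpy as np

    def align_exposures(img_ref, img_moving):
        """Detect ORB features in both images, match them, estimate a
        homography with RANSAC, and warp the moving image onto the
        reference frame."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img_ref, None)
        k2, d2 = orb.detectAndCompute(img_moving, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2),
                         key=lambda m: m.distance)[:200]
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = img_ref.shape[:2]
        return cv2.warpPerspective(img_moving, H, (w, h))
    ```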

  8. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028

  9. 3D interferometric shape measurement technique using coherent fiber bundles

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen

    2017-06-01

    In-situ 3-D shape measurements of fast-rotating objects in a cutting lathe, with submicron shape uncertainty, are sought; they can be achieved by simultaneous distance and velocity measurements. Conventional tactile methods such as coordinate measurement machines support only ex-situ measurements. Optical measurement techniques such as triangulation and conoscopic holography provide only the distance, so the absolute diameter cannot be retrieved directly. In comparison, laser Doppler distance (P-LDD) sensors enable simultaneous, in-situ distance and velocity measurements for monitoring the cutting process in a lathe. In order to achieve shape measurement uncertainties below 1 μm, a P-LDD sensor with dual-camera-based scattered light detection has been investigated. Coherent fiber bundles (CFB) are employed to relay the scattered light to the cameras, which will enable a compact and passive sensor head in the future. Compared with a photodetector-based sensor, the dual-camera-based sensor decreases the measurement uncertainty by an order of magnitude. As a result, the total uncertainty of absolute 3-D shape measurements can be reduced to about 100 nm.

  10. 3D interferometric microscope: color visualization of engineered surfaces for industrial applications

    NASA Astrophysics Data System (ADS)

    Schmit, Joanna; Novak, Matt; Bui, Son

    2015-09-01

    3D microscopes based on white light interference (WLI) provide precise measurement for the topography of engineering surfaces. However, the display of an object in its true colors as observed under white illumination is often desired; this traditionally has presented a challenge for WLI-based microscopes. Such 3D color display is appealing to the eye and great for presentations, and also provides fast evaluation of certain characteristics like defects, delamination, or deposition of different materials. Determination of color as observed by interferometric objectives is not straightforward; we will present how color imaging capabilities similar to an ordinary microscope can be obtained in interference microscopes based on WLI and we will give measurement and imaging examples of a few industrial samples.

  11. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
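
    A toy sketch of a wavelength-dependent radial distortion correction of the kind added for the multispectral WAC; the single polynomial model and the per-band coefficients below are hypothetical, not LROC's calibrated values:

    ```python
    import numpy as np

    def undistort_radial(xy, k, xy0=(0.0, 0.0)):
        """Correct image coordinates with a polynomial in squared radial
        distance about the principal point xy0; one coefficient set k per
        spectral band models the wavelength dependence."""
        xy = np.asarray(xy, float) - xy0
        r2 = (xy**2).sum(axis=-1, keepdims=True)
        return xy * (1.0 + k[0] * r2 + k[1] * r2**2) + xy0

    # Hypothetical coefficients (stronger distortion assumed in the UV).
    bands = {"415nm": (1.6e-8, 0.0), "321nm": (2.4e-8, 0.0)}
    pix = np.array([[512.0, 300.0], [10.0, 20.0]])
    print(undistort_radial(pix, bands["321nm"], xy0=(512, 256)))
    ```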

  12. Radiative transfer modeling and analysis of spatially variant and coherent illumination for undersea object detection

    NASA Astrophysics Data System (ADS)

    Bailey, Bernard Charles

    Increasing the optical range of target detection and recognition continues to be an area of great interest in the ocean environment. Light attenuation limits radiative and information transfer for image formation in water, and these limitations are difficult to surmount in conventional underwater imaging system design. Methods for forming images in scattering media generally rely upon temporal or spatial methodologies, and some interesting designs have been developed in attempts to circumvent or overcome the scattering problem. This document describes a variation of the spatial interferometric technique that relies upon projected spatial gratings with subsequent detection against a coherent return signal for the purpose of noise reduction and image enhancement. A model is developed that simulates the projection of structured illumination through turbid water to a target and its return to a detector. The model shows an unstructured backscatter superimposed on a structured return signal, and can predict the effect of variations in the projected spatial frequency and turbidity on the received signal-to-noise ratio. The model has been extended to predict what a camera would actually see, so that various noise reduction schemes can be modeled. Finally, water tank tests are presented validating the original hypothesis and model predictions. The method is advantageous in not requiring temporal synchronization between reference and signal beams and may use a continuous illumination source. Spatial coherency of the beam allows detection of the direct return, while scattered light appears as a noncoherent noise term. Both the model and the illumination method should prove to be valuable tools in ocean research.

  13. Interferometric pump-probe characterization of the nonlocal response of optically transparent ion implanted polymers

    NASA Astrophysics Data System (ADS)

    Stefanov, Ivan L.; Hadjichristov, Georgi B.

    2012-03-01

    An optical interferometric technique is applied to characterize the nonlocal response of optically transparent ion-implanted polymers. The thermal nonlinearity of the ion-modified material in the near-surface region is induced by continuous-wave (cw) laser irradiation at relatively low intensity. The interferometric approach is demonstrated for a subsurface layer about 100 nm thick formed in bulk polymethylmethacrylate (PMMA) by implantation with silicon ions at an energy of 50 keV and fluences in the range 10^14-10^17 cm^-2. The laser-induced thermo-optic effect in this layer is finely probed by interferometric imaging. The interference phase distribution in the plane of the ion-implanted layer is indicative of the thermal nonlinearity of the near-surface region of ion-implanted optically transparent polymeric materials.

  14. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO resolution standards, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras; in particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  15. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz; windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE ~90%), as well as projected low readout noise (<2 e-). Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  16. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera, which is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that reduce the noise in the image and increase the compression ratio.

  17. A LWIR hyperspectral imager using a Sagnac interferometer and cooled HgCdTe detector array

    NASA Astrophysics Data System (ADS)

    Lucey, Paul G.; Wood, Mark; Crites, Sarah T.; Akagi, Jason

    2012-06-01

    LWIR hyperspectral imaging has a wide range of civil and military applications owing to its ability to sense chemical compositions at standoff ranges. Most recent implementations of this technology use spectrographs employing varying degrees of cryogenic cooling to reduce the sensor self-emission that can severely limit sensitivity. We have taken an interferometric approach that promises to reduce the need for cooling while preserving high resolution. Reduced cooling has multiple benefits, including faster system readiness from a power-off state, lower mass, and potentially lower cost owing to lower system complexity. We coupled an uncooled Sagnac interferometer with a 256x320 mercury cadmium telluride array with an 11-micron cutoff to produce a spatial interferometric LWIR hyperspectral imaging system operating from 7.5 to 11 microns. The sensor was tested in ground-to-ground applications and from a small aircraft, producing spectral imagery including the detection of gas emission from high-vapor-pressure liquids.
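
    A minimal sketch of the spectral recovery step in such a spatial-interferometric imager: each pixel records an interferogram versus optical path difference (OPD), and an FFT converts it to a spectrum; the cube layout and uniform OPD sampling are assumptions:

    ```python
    import numpy as np

    def spectra_from_interferograms(cube, d_opd):
        """cube: (rows, cols, n_opd) interferogram samples per pixel at
        uniform OPD steps d_opd. Removing the mean suppresses the DC bias;
        the FFT magnitude gives the spectrum versus wavenumber (cycles per
        unit OPD, i.e. cm^-1 if d_opd is in cm)."""
        ac = cube - cube.mean(axis=-1, keepdims=True)
        spectrum = np.abs(np.fft.rfft(ac, axis=-1))
        wavenumber = np.fft.rfftfreq(cube.shape[-1], d=d_opd)
        return wavenumber, spectrum
    ```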

  18. A frequency domain radar interferometric imaging (FII) technique based on high-resolution methods

    NASA Astrophysics Data System (ADS)

    Luce, H.; Yamamoto, M.; Fukao, S.; Helal, D.; Crochet, M.

    2001-01-01

    In the present work, we propose a frequency-domain interferometric imaging (FII) technique for better knowledge of the vertical distribution of the atmospheric scatterers detected by MST radars. This is an extension of the dual frequency-domain interferometry (FDI) technique to multiple frequencies; its objective is to reduce the ambiguity, inherent in the FDI technique, that results from the use of only two adjacent frequencies. Different methods commonly used in antenna array processing are first described within the context of application to the FII technique: Fourier-based imaging, Capon's method, and the singular value decomposition method used with the MUSIC algorithm. Some preliminary simulations and tests performed on data collected with the middle and upper atmosphere (MU) radar (Shigaraki, Japan) are also presented. This work is a first step in the development of the FII technique, which seems very promising.
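
    As an illustration of one of the high-resolution methods named above, a sketch of the Capon (minimum-variance) estimator for the frequency-domain case; the steering vectors, diagonal loading, and two-way phase convention are textbook choices, not the paper's exact processing:

    ```python
    import numpy as np

    def capon_profile(snapshots, freqs, ranges, c=3e8):
        """Capon power versus candidate range r for an (n_freq, n_samples)
        array of complex returns at the n_freq carrier frequencies, using
        two-way steering vectors a(r) = exp(-j*4*pi*f*r/c)."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        R += 1e-3 * np.trace(R).real / len(freqs) * np.eye(len(freqs))
        Rinv = np.linalg.inv(R)
        f = np.asarray(freqs)
        power = []
        for r in ranges:
            a = np.exp(-4j * np.pi * f * r / c)
            power.append(1.0 / np.real(a.conj() @ Rinv @ a))
        return np.array(power)
    ```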

  19. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or the object is further from the observer, increasing the resolution of the recording device does little to improve image quality. Many sophisticated methods to correct distorted images have been invented, such as using a known feature on or near the target object to perform deconvolution, or using adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object, as well as "superimposed" turbulence, at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently, without finding references near the object. Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from an ordinary camera is not achievable.
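
    A minimal sketch of the lucky-image selection step mentioned above, scoring frames by gradient energy (sharper frames score higher under turbulence) and keeping the best fraction; the metric and keep ratio are illustrative:

    ```python
    import numpy as np

    def lucky_frames(stack, keep=0.1):
        """Score each frame in an (n_frames, H, W) stack by its mean
        gradient energy and return the sharpest fraction for subsequent
        alignment and averaging."""
        gy, gx = np.gradient(stack.astype(float), axis=(1, 2))
        scores = (gx**2 + gy**2).mean(axis=(1, 2))
        n_keep = max(1, int(keep * len(stack)))
        best = np.argsort(scores)[::-1][:n_keep]
        return stack[best], scores
    ```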

  20. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
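
    A hedged sketch of the classification stage with scikit-learn: each image is reduced to a small vector of estimated radial distortion parameters, and an SVM maps it to a camera label. The synthetic feature values below only mimic the idea that each lens has its own distortion signature:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in features: (k1, k2) distortion estimates per image,
    # with a slightly different mean per camera (5 cameras, 40 images each).
    rng = np.random.default_rng(0)
    n_per_cam, cams = 40, 5
    X = np.vstack([rng.normal(loc=[i * 0.02, -i * 0.005], scale=0.004,
                              size=(n_per_cam, 2)) for i in range(cams)])
    y = np.repeat(np.arange(cams), n_per_cam)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(Xtr, ytr)
    print(f"identification accuracy: {clf.score(Xte, yte):.2f}")
    ```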

  1. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine the causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or mounted on tracking telescopes can be used in this method. Processing of image data starts with the creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured, how accurately the centroids are interpolated for synchronization of cameras, and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras, because ultimately the positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
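
    For the perspective (point) camera model, the basic two-view position computation can be sketched as a linear (DLT) triangulation; this is a generic textbook formulation, not the article's full CAD-overlay pipeline:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear two-view triangulation: P1 and P2 are 3x4 perspective
        projection matrices, x1 and x2 the pixel coordinates of the same
        object point in each image; the 3-D position is the least-squares
        solution of the stacked projection equations."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]     # homogeneous -> Euclidean coordinates
    ```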

  2. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The effect of the Earth's digital elevation on space camera imaging has been analysed, and an elevation map for imaging has been prepared. Based on the image-motion velocity matching error permitted by the TDI CCD integration stages, a statistical experimental approach (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation in an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude, and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data. The Earth elevation map produced for aerospace electronic cameras is compressed and spliced, and elevation data can be fetched from flash memory according to the latitude and longitude of the shooting point. If a query falls between two data points, linear interpolation is used, which better accommodates rugged mountain and hill terrain (see the sketch below). Finally, a deviation framework and camera controller are used to test the effect of deviation angle errors; a TDI CCD camera simulation system with a material-point-to-imaging-point correspondence model is used to analyse the imaging MTF and a cross-correlation similarity measure, and the simulation adds accumulated horizontal and vertical pixel offsets to TDI CCD imaging to simulate imaging when the stability of the satellite attitude changes. This process is practical: it can effectively control the camera memory space while meeting the precision required for the TDI CCD camera to match the image-motion velocity.
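
    A sketch of the elevation lookup with linear interpolation described above, assuming the stitched map is a regular latitude/longitude grid held in memory (flash access and compression are omitted); bilinear interpolation of the four surrounding samples is the 2-D form of the linear interpolation described:

    ```python
    import numpy as np

    def elevation_at(dem, lat0, lon0, dlat, dlon, lat, lon):
        """dem is a regular grid starting at (lat0, lon0) with spacings
        (dlat, dlon); a query between grid nodes is answered by bilinear
        interpolation of the four surrounding samples."""
        i = (lat - lat0) / dlat
        j = (lon - lon0) / dlon
        i0, j0 = int(np.floor(i)), int(np.floor(j))
        fi, fj = i - i0, j - j0
        return (dem[i0, j0]         * (1 - fi) * (1 - fj)
                + dem[i0 + 1, j0]     * fi       * (1 - fj)
                + dem[i0, j0 + 1]     * (1 - fi) * fj
                + dem[i0 + 1, j0 + 1] * fi       * fj)
    ```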

  3. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green, and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors, yielding a visually more pleasing image than one captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
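
    Per-channel transfer curves of the kind described act as look-up tables applied independently to the red, green, and blue channels. The sketch below is a generic illustration, not the authors' AGFA/PhotoShop workflow; the anchor points are hypothetical, and np.interp gives a piecewise-linear stand-in for the smoothed curves.

        import numpy as np

        def build_lut(anchors_in, anchors_out):
            """Build a 256-entry transfer curve from a few anchor points."""
            return np.interp(np.arange(256), anchors_in, anchors_out).astype(np.uint8)

        # Hypothetical curves: lift shadows slightly, protect highlights.
        lut_r = build_lut([0, 64, 192, 255], [0, 80, 200, 255])
        lut_g = build_lut([0, 64, 192, 255], [0, 76, 198, 255])
        lut_b = build_lut([0, 64, 192, 255], [0, 72, 196, 255])

        def apply_curves(img):
            """img: HxWx3 uint8 array; returns the curve-corrected image."""
            out = img.copy()
            for c, lut in enumerate((lut_r, lut_g, lut_b)):
                out[..., c] = lut[img[..., c]]
            return out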

  4. The Disk of 48 Lib Revealed by NPOI

    NASA Astrophysics Data System (ADS)

    Lembryk, Ludwik; Tycner, C.; Sigut, A.; Zavala, R. T.

    2013-01-01

    We present a study of the disk around the Be star 48 Lib, in which NLTE numerical disk models are compared to spectral and interferometric data to constrain the physical properties of the inner disk structure. The computational models are generated using the BEDISK code, which accounts for the heating and cooling of various atoms in the disk and assumes solar chemical composition. A large set of self-consistent disk models produced with the BEDISK code is in turn used to generate synthetic spectra and images for a wide range of inclination angles using the BERAY code. The aim of this project is to constrain the physical properties as well as the inclination angle using both spectroscopic and interferometric data. The interferometric data were obtained with the Navy Precision Optical Interferometer (NPOI), focusing on the hydrogen Balmer alpha (Hα) emission, the strongest emission line produced by the circumstellar structure. Because 48 Lib shows clearly asymmetric spectral lines, we discuss how we model the asymmetric peaks of the Hα line by combining two models computed with different density structures. The corresponding synthetic images of these combined density structures are then Fourier transformed and compared to the interferometric data. This numerical strategy has the potential to easily model the commonly observed variation of the ratio of the violet-to-red (V/R) emission peaks and to constrain the long-term variability associated with the disk of 48 Lib, as well as of other emission-line stars that show similar variability.
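
    The comparison step rests on the fact that an interferometer samples the Fourier transform of the sky brightness at spatial frequencies set by baseline/wavelength. The sketch below is a toy version of that step only (it is not the BEDISK/BERAY pipeline); the model image, pixel scale, and baseline are hypothetical, and the nearest-grid-point sampling stands in for proper interpolation.

        import numpy as np

        def visibility(image, pixel_scale_mas, u, v):
            """Sample the normalized FFT of a model image at spatial
            frequency (u, v) in cycles/radian (baseline / wavelength)."""
            mas_to_rad = np.pi / (180 * 3600e3)
            n = image.shape[0]
            vis = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
            freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_scale_mas * mas_to_rad))
            iu = np.argmin(np.abs(freqs - u))   # nearest grid point
            iv = np.argmin(np.abs(freqs - v))
            return vis[iv, iu] / vis[n // 2, n // 2]

        # Toy 256x256 image: point source plus fainter companion, 0.1 mas/pixel.
        img = np.zeros((256, 256))
        img[128, 128], img[128, 140] = 1.0, 0.5
        print(abs(visibility(img, 0.1, u=30.0 / 656.3e-9, v=0.0)))  # 30 m at H-alpha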

  5. A High-Precision DEM Extraction Method Based on InSAR Data

    NASA Astrophysics Data System (ADS)

    Wang, Xinshuang; Liu, Lingling; Shi, Xiaoliang; Huang, Xitao; Geng, Wei

    2018-04-01

    In the 13th Five-Year Plan for Geoinformatics Business, it is proposed that the new InSAR technology should be applied to surveying and mapping production, which will become the innovation driving force of the geoinformatics industry. This paper works closely within the new surveying and mapping outline and uses X-band TerraSAR/TanDEM data of Bin County in Shaanxi Province. The processing steps are as follows: first, the baseline is estimated from the orbital data; second, the interferometric SAR image pairs are accurately registered; third, the interferogram is generated; fourth, the interferometric correlation is estimated and the flat-earth phase is removed. In order to address the phase noise and phase discontinuities present in the interferometric phase image, a GAMMA adaptive filtering method is adopted. Aiming at the "hole" problem of missing data in low-coherence areas, interpolation with a low-coherence-area mask is used to assist phase unwrapping. The accuracy of the interferometric baseline is then refined from ground control points. Finally, a 1:50000 DEM is generated, and existing DEM data are used to verify its accuracy through statistical analysis. The results show that the improved InSAR data processing method in this paper can produce a high-precision DEM of the study area, consistent with the topography of the reference DEM; R² reaches 0.9648, showing a strong positive correlation.
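
    The interferogram-formation and flat-earth-removal steps can be stated compactly. The sketch below is schematic only (it is not the GAMMA processing chain used in the paper): it assumes two co-registered single-look complex (SLC) arrays and a known linear fringe rate along range, both synthesized here.

        import numpy as np

        def form_interferogram(slc1, slc2):
            """Complex interferogram of two co-registered SLC images."""
            return slc1 * np.conj(slc2)

        def remove_flat_earth(ifg, fringe_rate_rg):
            """Remove the flat-earth phase ramp along range; fringe_rate_rg is
            in cycles per range pixel, predicted from the orbital baseline."""
            ramp = np.exp(-2j * np.pi * fringe_rate_rg * np.arange(ifg.shape[1]))
            return ifg * ramp[np.newaxis, :]

        # Synthetic example with a 0.02 cycle/pixel flat-earth ramp.
        rng = np.random.default_rng(1)
        slc1 = rng.normal(size=(512, 512)) + 1j * rng.normal(size=(512, 512))
        slc2 = slc1 * np.exp(-2j * np.pi * 0.02 * np.arange(512))
        phase = np.angle(remove_flat_earth(form_interferogram(slc1, slc2), 0.02))
        print(np.abs(phase).max())   # ~0: ramp removed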

  6. Combining shearography and interferometric fringe projection in a single device for complete control of industrial applications

    NASA Astrophysics Data System (ADS)

    Blain, Pascal; Michel, Fabrice; Piron, Pierre; Renotte, Yvon; Habraken, Serge

    2013-08-01

    Noncontact optical measurement methods are essential tools in many industrial and research domains. A family of new noncontact optical measurement methods has been developed, based on a polarization-state splitting technique and on monochromatic light projection as a way to overcome ambient lighting for in-situ measurement. Recent work on a birefringent element, a Savart plate, allows one to build a more flexible and robust interferometer. This interferometer is a multipurpose metrological device. On the one hand, the interferometer can be set in front of a charge-coupled device (CCD) camera; this optical measurement system is called a shearography interferometer and allows one to measure microdisplacements between two states of the studied object under coherent lighting. On the other hand, by producing and shifting multiple sinusoidal Young's interference patterns with this interferometer and using a CCD camera, it is possible to build a three-dimensional structured-light profilometer.

  7. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between the camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host-computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. The camera includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card; these signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interconnect (PCI).

  8. You are here: Earth as seen from Mars

    NASA Image and Video Library

    2004-03-11

    This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. The inset shows a combination of four panoramic camera images zoomed in on Earth. The arrow points to Earth. Earth was too faint to be detected in images taken with the panoramic camera's color filters. http://photojournal.jpl.nasa.gov/catalog/PIA05547

  9. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in electro-optical tracking, electro-optical measurement, fire control, and electro-optical countermeasure fields, but the output timing of most IR cameras applied in such projects is complex, and the timing documents supplied by the factory are not detailed. Because downstream image transmission and image processing systems need the detailed timing of the IR camera, a timing measurement system for IR cameras was designed, and a detailed procedure for measuring the timing of an applied IR camera is presented. FPGA programming combined with online observation using the SignalTap tool is applied in the measurement system; the precise timing of the IR camera's output signal is obtained, and detailed timing documentation is supplied to the continuous image transmission and image processing systems. The measurement system consists of a CameraLink input interface, an LVDS input interface, an FPGA, and a CameraLink output interface, among which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted, and because image processing and image memory cards generally use CameraLink as their input interface, the output of the measurement system is likewise designed as a CameraLink interface; the system thus performs interface conversion for some cameras in addition to the timing measurement. Inside the FPGA, the timing measurement program, the pixel clock modification, the SignalTap file configuration, and the SignalTap online observation are integrated to realize precise measurement of the IR camera. The measurement program, written in Verilog and combined with online observation through the SignalTap tool, counts the number of lines in one frame and the number of pixels in one line, and also determines the line offset and row offset of the image. Aimed at the complex timing of IR camera output signals, the measurement system accurately measures the timing of the project-applied camera, supplies detailed timing documents to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters fval, lval, pixclk, line offset, and row offset. Experiments show that the timing measurement system obtains precise results and works stably, laying a foundation for the downstream systems.
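
    The counting performed inside the FPGA can be mirrored in a few lines: given fval (frame valid) and lval (line valid) traces sampled once per pixel clock, count the lines per frame, pixels per line, and the line offset. This Python sketch only illustrates the idea of the Verilog measurement program; the signal names follow the abstract, but the trace below is synthetic.

        import numpy as np

        def measure_timing(fval, lval):
            """fval, lval: 0/1 arrays sampled once per pixel clock. Assumes
            every lval pulse lies entirely inside the first fval pulse."""
            rising = lambda s: np.flatnonzero((s[1:] == 1) & (s[:-1] == 0)) + 1
            falling = lambda s: np.flatnonzero((s[1:] == 0) & (s[:-1] == 1)) + 1
            f0, f1 = rising(fval)[0], falling(fval)[0]     # first frame span
            starts, ends = rising(lval), falling(lval)
            in_frame = (starts >= f0) & (starts < f1)
            lines_per_frame = int(in_frame.sum())
            pixels_per_line = int(ends[in_frame][0] - starts[in_frame][0])
            line_offset = int(starts[in_frame][0] - f0)    # clocks after fval rise
            return lines_per_frame, pixels_per_line, line_offset

        # Synthetic trace: 3 lines of 4 pixels each, 2 idle clocks between lines.
        lval = np.concatenate(([0], np.tile([1, 1, 1, 1, 0, 0], 3), [0]))
        fval = np.concatenate(([0], np.ones(18, dtype=int), [0]))
        print(measure_timing(fval, lval))   # -> (3, 4, 0)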

  10. Mars Descent Imager for Curiosity

    NASA Image and Video Library

    2010-07-19

    A pocketknife provides scale for this image of the Mars Descent Imager camera; the camera will fly on the Curiosity rover of NASA's Mars Science Laboratory mission. Malin Space Science Systems, San Diego, Calif., supplied the camera for the mission.

  11. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The development of this new generation of weather-monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras are able to process acquired image data immediately, issue warnings against sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric-pressure sensors. In this paper, we present the architecture and the image-data-processing algorithms of this monitoring camera, together with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.
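
    Zernike polynomials, on which the spatially variant aberration model is based, are straightforward to evaluate numerically. The sketch below implements the standard radial polynomial and a sample term; it is a generic illustration, not the WILLIAM code itself.

        import numpy as np
        from math import factorial

        def zernike_radial(n, m, rho):
            """Radial part R_n^m(rho) of the Zernike polynomial."""
            m = abs(m)
            out = np.zeros_like(rho)
            for k in range((n - m) // 2 + 1):
                c = ((-1) ** k * factorial(n - k)
                     / (factorial(k) * factorial((n + m) // 2 - k)
                        * factorial((n - m) // 2 - k)))
                out += c * rho ** (n - 2 * k)
            return out

        def zernike(n, m, rho, theta):
            """Full Zernike term Z_n^m on the unit disk."""
            r = zernike_radial(n, m, rho)
            return r * (np.cos(m * theta) if m >= 0 else np.sin(-m * theta))

        # Example: the coma term Z_3^1 over a normalized pupil grid.
        y, x = np.mgrid[-1:1:256j, -1:1:256j]
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        coma = np.where(rho <= 1, zernike(3, 1, rho, theta), 0.0)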

  12. Local residue coupling strategies by neural network for InSAR phase unwrapping

    NASA Astrophysics Data System (ADS)

    Refice, Alberto; Satalino, Giuseppe; Chiaradia, Maria T.

    1997-12-01

    Phase unwrapping is one of the toughest problems in interferometric SAR processing. The main difficulties arise from the presence of point-like error sources, called residues, which occur mainly in close couples due to phase noise. We present an assessment of a local approach to the resolution of these problems by means of a neural network. Using a multi-layer perceptron, trained with the back-propagation scheme on a series of simulated phase images, we learn the best pairing strategies for close residue couples. Results show that good efficiencies and accuracies can be obtained, provided a sufficient number of training examples is supplied. The technique is also tested on real SAR ERS-1/2 tandem interferometric images of the Matera test site, showing a good reduction of the residue density. The better results obtained with the neural network when local criteria are adopted appear justified by the probabilistic nature of the noise process on SAR interferometric phase fields, and suggest a specifically tailored implementation of the neural-network approach as a very fast pre-processing step, intended to decrease the residue density and deliver sufficiently clean images for further processing by more conventional techniques.
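
    The classification set-up can be sketched briefly: a small multi-layer perceptron maps features of the local wrapped-phase neighbourhood of a residue to a pairing decision. Everything below (feature choice, window size, labels, network size) is a hypothetical stand-in for the configuration in the paper.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Stand-in training data: per positive residue, a 5x5 window of phase
        # gradients and a label for the direction of the negative residue it
        # should be paired with (0=N, 1=E, 2=S, 3=W), produced by simulation.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 25))
        y = rng.integers(0, 4, size=5000)

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        clf.fit(X, y)

        # At unwrapping time, each close residue couple is connected along the
        # predicted direction, removing the pair before conventional unwrapping.
        pairing = clf.predict(X[:10])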

  13. The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.

    PubMed

    Dias, José M B; Leitao, José M N

    2002-01-01

    This paper presents an effective algorithm for absolute phase (not simply modulo-2π) estimation from incomplete, noisy, and modulo-2π observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging, and diffraction tomography. The Bayesian viewpoint is adopted: the observation density is 2π-periodic and accounts for the interferometric pair decorrelation and system noise, and the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise-smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (π-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2π multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data are low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
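
    The modulo-2π observation model at the heart of the problem is compact: the sensor delivers the wrapped phase plus noise, and the estimator must recover the 2π multiples. The toy forward model below illustrates that setting only; it is not the ZpiM estimator.

        import numpy as np

        def wrap(phi):
            """Principal value of the phase, in (-pi, pi]."""
            return np.angle(np.exp(1j * phi))

        # Toy piecewise-smooth absolute phase and its noisy wrapped observation.
        rng = np.random.default_rng(0)
        x = np.linspace(0, 8 * np.pi, 512)
        phi_true = np.where(x < 4 * np.pi, 0.5 * x, 0.5 * x + 3.0)  # phase jump
        obs = wrap(phi_true + rng.normal(scale=0.3, size=x.size))

        # Unwrapping must infer integers k such that obs + 2*pi*k approximates
        # phi_true; ZpiM alternates a discrete optimization over k (Z-step)
        # with an ICM smoothing step (pi-step).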

  14. Speckle interferometry at the OAN-SPM México: astrometry of double stars measured in 2011

    NASA Astrophysics Data System (ADS)

    Guerrero, C. A.; Orlov, V. G.; Borges Fernandes, M.; Ángeles, F.

    2018-04-01

    We present speckle interferometric measurements of binary stars performed during 2011 February and April with the 1.5-m telescope and during 2011 July and November with the 2.1-m telescope of the Observatorio Astronómico Nacional, San Pedro Mártir, México, focusing on objects from the Washington Double Star Catalog with separations of less than 1 arcsec. Among these objects, we have been interested in performing follow-up observations of new double stars discovered by Hipparcos. For these observations, we developed a new detector, a Watec 120N CCD combined with a third-generation image intensifier. This image intensifier allows us to perform near-infrared speckle interferometric observations for the first time. In this paper, we report 761 astrometric measurements of 478 pairs, with angular separations ranging from 0.09 to 2.61 arcsec; 722 of our measured separations are smaller than 1 arcsec. We estimate a mean error of 16 mas in separation and 1.29° in position angle. In order to overcome the usual 180° ambiguity inherent to speckle measurements, we created a shift-and-add reconstructed image of each source to establish the true quadrant of the secondary star. We confirmed 40 double stars discovered by Hipparcos and found 4 field stars resolved as interferometric pairs for the first time, with separations smaller than 0.60 arcsec.
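
    The shift-and-add step used to break the 180° ambiguity is simple to sketch: each short-exposure frame is shifted so that its brightest speckle lands on a common pixel, and the frames are averaged, which preserves the side on which the fainter companion lies. The frames below are synthetic and purely illustrative.

        import numpy as np

        def shift_and_add(frames):
            """Average speckle frames after centering each on its brightest pixel."""
            n = frames.shape[1]
            acc = np.zeros((n, n))
            for f in frames:
                iy, ix = np.unravel_index(np.argmax(f), f.shape)
                acc += np.roll(np.roll(f, n // 2 - iy, axis=0), n // 2 - ix, axis=1)
            return acc / len(frames)

        # Synthetic stack: bright primary speckle plus a fainter secondary at a
        # fixed offset; the average reveals the secondary's true quadrant.
        rng = np.random.default_rng(0)
        frames = rng.random((100, 64, 64))
        frames[:, 40, 40] += 5.0
        frames[:, 40, 44] += 2.0
        image = shift_and_add(frames)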

  15. Mathematical model of a DIC position sensing system within an optical trap

    NASA Astrophysics Data System (ADS)

    Wulff, Kurt D.; Cole, Daniel G.; Clark, Robert L.

    2005-08-01

    The quantitative study of the displacements and forces of motor proteins, and of other processes that occur at the microscopic level and below, requires a high level of sensitivity. For optical traps, two position-sensing techniques have been widely accepted and used: quadrant photodiodes and an interferometric position-sensing technique based on DIC imaging. While quadrant photodiodes have been studied in depth and mathematically characterized, a mathematical characterization of the interferometric position sensor has not, to the authors' knowledge, been presented. The interferometric position-sensing method builds on the DIC imaging capabilities of a microscope. Circularly polarized light is sent into the microscope, and the Wollaston prism used for DIC imaging splits the beam into its orthogonal components, displacing them by a set distance determined by the user. The distance between the beam axes is set so that the beams overlap at the specimen plane and effectively share the trapped microsphere. A second prism then recombines the light beams, and the polarization of the exiting laser light is measured and related to position. In this paper we outline the mathematical characterization of a microsphere suspended in an optical trap using the DIC position-sensing method. The sensitivity of this mathematical model is then compared to that of the QPD model. The mathematical model of a microsphere in an optical trap can serve as a calibration curve for an experimental setup.

  16. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We show results on the stability of camera-derived color signals and, based on these, discuss the applicability of cameras for monitoring time-dependent phenomena. We also present results from comparisons between camera-derived color-signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
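
    Camera-derived color signals of the kind compared against the satellite indices are typically simple chromatic coordinates averaged over a fixed region of interest; the green chromatic coordinate (GCC) is a common choice in phenology-camera work, though the abstract does not name the exact signal used. The sketch below is generic, with a hypothetical ROI and image source.

        import numpy as np

        def green_chromatic_coordinate(img, roi):
            """GCC = G / (R + G + B), averaged over a region of interest.
            img: HxWx3 array; roi: (row_slice, col_slice) of canopy pixels."""
            patch = img[roi].astype(float)
            r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
            return np.mean(g / (r + g + b + 1e-9))

        # One value per image yields a seasonal greenness time series that can
        # be compared against satellite NDVI/NDWI tracks.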

  17. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility in various fields, such as telerobotics and applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source, such as a laser, mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors that orient the roll, pitch, and yaw axes of the video camera based on an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser and then eliminating common pixels from the subsequent digital image, which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine the range to the target.
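
    The spot-isolation step described above amounts to frame differencing: subtract the pre-illumination image from the laser-on image and keep only the pixels that brightened. The sketch below is illustrative; the threshold and weighted-centroid choices are assumptions, not the patented processing.

        import numpy as np

        def laser_spot_centroid(before, after, threshold=30):
            """Locate the laser spot as the weighted centroid of pixels that
            brightened between the laser-off and laser-on grayscale frames."""
            diff = after.astype(int) - before.astype(int)
            mask = diff > threshold          # common background pixels cancel
            ys, xs = np.nonzero(mask)
            if ys.size == 0:
                return None                  # no spot found
            w = diff[mask].astype(float)
            return (np.average(ys, weights=w), np.average(xs, weights=w))

        # The disparity between this centroid and the fixed reference point
        # feeds the ranging analysis that yields range to the target.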

  18. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications in cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size, and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
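
    Focal-stack processing for extended depth of field can be sketched as a per-pixel selection of the sharpest slice using a local focus measure. The version below (Laplacian energy over a fixed window) is a simplified generic illustration, not Ben-Ezra's algorithm, which additionally handles the large magnification variations mentioned above.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def extended_dof(stack):
            """stack: (k, H, W) grayscale focal stack. Returns an all-in-focus
            composite by picking, per pixel, the slice with the highest local
            Laplacian energy (a standard focus measure)."""
            sharpness = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, size=9)
                                  for s in stack])
            best = np.argmax(sharpness, axis=0)      # index of sharpest slice
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]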

  19. System Engineering the Space Infrared Interferometric Telescope (SPIRIT)

    NASA Technical Reports Server (NTRS)

    Hyde, Tristram T.; Leisawitz, David T.; Rinehart, Stephen

    2007-01-01

    The Space Infrared Interferometric Telescope (SPIRIT) was designed to accomplish three scientific objectives: (1) learn how planetary systems form from protostellar disks and how they acquire their inhomogeneous chemical composition; (2) characterize the family of extrasolar planetary systems by imaging the structure in debris disks to understand how and where planets of different types form; and (3) learn how high-redshift galaxies formed and merged to form the present-day population of galaxies. SPIRIT will accomplish these objectives through infrared observations with a two aperture interferometric instrument. This paper gives an overview of SPIRIT design and operation, and how the three design cycle concept study was completed. The error budget for several key performance values allocates tolerances to all contributing factors, and a performance model of the spacecraft plus instrument system demonstrates meeting those allocations with margin.

  20. HD139614: the Interferometric Case for a Group-Ib Pre-Transitional Young Disk

    NASA Technical Reports Server (NTRS)

    Labadie, Lucas; Matter, Alexis; Kreplin, Alexander; Lopez, Bruno; Wolf, Sebastian; Weigelt, Gerd; Ertel, Steve; Berger, Jean-Philippe; Pott, Jorg-Uwe; Danchi, William C.

    2014-01-01

    The Herbig Ae star HD139614 is a group-Ib object whose featureless SED indicates disk flaring and a possible pre-transitional evolutionary stage. We present mid- and near-IR interferometric results collected with MIDI, AMBER, and PIONIER with the aim of constraining the spatial structure of the 0.1-10 AU disk region and assessing its possible multi-component structure. A two-component disk model, composed of an optically thin 2-AU-wide inner disk and an outer temperature-gradient disk starting at 5.6 AU, reproduces the observations well. This is an additional argument for the idea that group-I HAeBe inner disks could already be in the disk-clearing transient stage. HD139614 will become a prime target for mid-IR interferometric imaging with the second-generation VLTI instrument MATISSE.

  1. Widely tunable semiconductor lasers with three interferometric arms.

    PubMed

    Su, Guan-Lin; Wu, Ming C

    2017-09-04

    We present a comprehensive study of a new three-branch widely tunable semiconductor laser based on a self-imaging, lossless multi-mode interference (MMI) coupler. We have developed a general theoretical framework that is applicable to all types of interferometric lasers. Our analysis shows that the three-branch laser offers high side-mode suppression ratios (SMSRs) while maintaining a wide tuning range and a low threshold modal gain of the lasing mode. We also present design rules for tuning over the dense wavelength-division multiplexing grid across the C-band.

  2. The AzTEC/SMA Interferometric Imaging Survey of Submillimeter-selected High-redshift Galaxies

    NASA Astrophysics Data System (ADS)

    Younger, Joshua D.; Fazio, Giovanni G.; Huang, Jia-Sheng; Yun, Min S.; Wilson, Grant W.; Ashby, Matthew L. N.; Gurwell, Mark A.; Peck, Alison B.; Petitpas, Glen R.; Wilner, David J.; Hughes, David H.; Aretxaga, Itziar; Kim, Sungeun; Scott, Kimberly S.; Austermann, Jason; Perera, Thushara; Lowenthal, James D.

    2009-10-01

    We present results from a continuing interferometric survey of high-redshift submillimeter galaxies (SMGs) with the Submillimeter Array, including high-resolution (beam size ~2 arcsec) imaging of eight additional AzTEC 1.1 mm selected sources in the COSMOS field, for which we obtain six reliable (peak signal-to-noise ratio (S/N) >5 or peak S/N >4 with multiwavelength counterparts within the beam) and two moderate significance (peak S/N >4) detections. When combined with previous detections, this yields an unbiased sample of millimeter-selected SMGs with complete interferometric follow up. With this sample in hand, we (1) empirically confirm the radio-submillimeter association, (2) examine the submillimeter morphology—including the nature of SMGs with multiple radio counterparts and constraints on the physical scale of the far infrared—of the sample, and (3) find additional evidence for a population of extremely luminous, radio-dim SMGs that peaks at higher redshift than previous, radio-selected samples. In particular, the presence of such a population of high-redshift sources has important consequences for models of galaxy formation—which struggle to account for such objects even under liberal assumptions—and dust production models given the limited time since the big bang.

  3. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.

  4. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager, part of the instrumentation at SOAR. Spartan has a focal plane consisting of four

  5. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  6. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images were not captured at the same time. Comparison of frontal and off-angle iris images then shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of the gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. We therefore developed an iris image acquisition platform using two cameras, where one camera captures a frontal iris image and the other captures an iris image from off-angle. Based on a comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera and one-camera setups, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup, by between 0.001 and 0.05. These results show that, to obtain accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from one another.
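
    The Hamming distances quoted above follow the standard iris-code comparison: the fraction of disagreeing bits between two binary templates, restricted to positions both noise masks mark valid. A minimal sketch (the template and mask layout is hypothetical):

        import numpy as np

        def iris_hamming_distance(code_a, code_b, mask_a, mask_b):
            """Fractional Hamming distance between two binary iris codes,
            counted only over bit positions valid in both noise masks."""
            valid = mask_a & mask_b
            disagreements = (code_a ^ code_b) & valid
            return disagreements.sum() / valid.sum()

        # Toy example: flipping 200 of 2048 bits gives a distance of ~0.098.
        rng = np.random.default_rng(0)
        a = rng.integers(0, 2, 2048, dtype=np.uint8)
        b = a.copy(); b[:200] ^= 1
        m = np.ones(2048, dtype=np.uint8)
        print(iris_hamming_distance(a, b, m, m))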

  7. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, offering the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras themselves. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed with the program system BLUH. Pix4Dmapper provided dense matching with 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are also radial symmetric distortions for the inclined cameras, with sizes exceeding 5 μm, even though they were described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular-affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors of corresponding cameras in both blocks show the same trend but, as usual for block adjustments with self-calibration, still show significant differences. Thanks to the very high number of image points, the remaining image residuals can be reliably determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the additional parameters used is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, bundle block adjustment with a satisfactory set of additional parameters, checked against the remaining systematic errors, is required to exploit the full geometric potential of the penta camera. Especially for object points on facades, which often appear in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of the sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation with the exterior orientation.

  8. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    NASA Astrophysics Data System (ADS)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, in order to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform an image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems must. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling-shutter operation of the CMOS camera: the fan-beam lasers were turned on and off to illuminate narrow zones on the target in good correspondence to the exposure lines during the rolling of the camera's electronic shutter. Frame synchronization between the image scan and the laser-beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and non-scanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.

  9. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows one to see concealed objects without contact with a person, and it is not dangerous to the person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the detection capability for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of the THz image may improve the image quality many times over without any additional engineering effort, so the development of modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that would allow a banknote in a person's pocket to be seen without any real contact. Modern algorithms for the computer processing of THz images also allow an object inside the human body to be seen using a temperature trace on the human skin. This circumstance essentially enhances the opportunities for passive THz camera applications in counterterrorism problems. We demonstrate the capabilities achieved at present for the detection both of concealed objects and of clothing components, through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation both of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floorplate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for the computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  10. Dense grid of narrow bandpass filters for the JST/T250 telescope: summary of results

    NASA Astrophysics Data System (ADS)

    Brauneck, Ulf; Sprengard, Ruediger; Bourquin, Sebastien; Marín-Franch, Antonio

    2018-01-01

    On the Javalambre mountain in Spain, the Centro de Estudios de Física del Cosmos de Aragón has set up two telescopes, the JST/T250 and the JAST/T80. The JAST/T80 telescope integrates T80Cam, a large-format, single-CCD camera, while the JST/T250 will mount the JPCam instrument, a 1.2-Gpix camera equipped with a 14-CCD mosaic using the new large-format e2v 9.2k×9.2k 10-μm-pixel detectors. Both T80Cam and JPCam integrate a large number of filters with dimensions of 106.8×106.8 mm² and 101.7×95.5 mm², respectively. For this instrument, SCHOTT manufactured 56 specially designed steep-edged bandpass interference filters, which were recently completed. The filter set consists of bandpass filters in the range between 348.5 and 910 nm and a longpass filter at 915 nm. Most of the filters have a full-width at half-maximum (FWHM) of 14.5 nm and blocking between 250 and 1050 nm with an optical density of OD5. Absorptive color-glass substrates in combination with interference filters were used to minimize residual reflection in order to avoid ghost images. In spite of containing absorptive elements, the filters show the maximum possible transmission; this was achieved by using magnetron sputtering for the filter coating process. The most important requirement for the continuous photometric survey is the tight tolerancing of the central wavelengths and FWHM of the filters, which ensures that each bandpass has a defined overlap with its neighbors. High image quality required a low transmitted wavefront error (<λ/4 locally and <λ/2 over the whole aperture), which was achieved even when combining two or three substrates. We report on the spectral and interferometric results measured on the whole set of filters.

  11. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 megapixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count: even moderate-cost (~$120) DSCs would have 14 megapixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper explores why larger pixels and sensors are key to the future of DSCs.

  12. Light field rendering with omni-directional camera

    NASA Astrophysics Data System (ADS)

    Todoroki, Hiroshi; Saito, Hideo

    2003-06-01

    This paper presents an approach to capturing the visual appearance of a real environment, such as the interior of a room. We propose a method for generating arbitrary-viewpoint images by building a light field with an omni-directional camera, which can capture its wide circumference. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above it, so that it can capture the luminosity of the environment over 360 degrees of circumference in one image. We apply the light-field method, one technique of Image-Based Rendering (IBR), to generate the arbitrary-viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera to construct the light field, so that we can collect images from many view directions. Our method thus allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera and successfully generated arbitrary-viewpoint images for a virtual tour of the environment.

  13. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights and traffic signs, as early as possible. If an on-vehicle camera images, from a long distance, the objects important for understanding traffic conditions and shows them to the driver, the driver can understand the traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; but with a long focal length, an on-vehicle camera does not have a sufficient field of view to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must combine a long focal length with controllability of its shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver; however, that study used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, to avoid disturbing driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction it controls. We adopt a non-wearable detection method to avoid hindering driving; the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system takes images of where the subject is gazing.

  14. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require the acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented-reality visualization system for laparoscopic surgery.
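
    For reference, the multi-image OpenCV procedure the authors compare against looks roughly like the flow below. This is a generic sketch, not rdCalib and not the authors' exact scripts; the chessboard geometry and file names are hypothetical.

        import cv2
        import numpy as np

        pattern, square = (9, 6), 10.0            # assumed inner corners, mm
        objp = np.zeros((9 * 6, 3), np.float32)
        objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square

        image_paths = ["view_%02d.png" % i for i in range(30)]  # hypothetical
        obj_pts, img_pts = [], []
        for path in image_paths:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)
        print("RMS reprojection error:", rms)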

  17. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
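
    The core subspace-projection operation, whether applied to the full array or per subarray, is a low-rank projection built from the dominant eigenvectors of the estimated array covariance. The sketch below illustrates one subarray's processing; the dimensions and the synthetic interference model are assumptions, not the SP-SAP implementation.

        import numpy as np

        def project_out_interference(X, rank=1):
            """X: (elements, samples) complex voltages for one subarray.
            Removes the dominant `rank` eigen-subspace, assumed to be RFI."""
            R = X @ X.conj().T / X.shape[1]             # sample covariance
            w, V = np.linalg.eigh(R)                    # ascending eigenvalues
            Us = V[:, -rank:]                           # interference subspace
            P = np.eye(X.shape[0]) - Us @ Us.conj().T   # orthogonal projector
            return P @ X

        # Synthetic 8-element subarray: strong rank-1 RFI plus receiver noise.
        rng = np.random.default_rng(0)
        a = np.exp(2j * np.pi * rng.random(8))          # RFI steering vector
        X = 10 * np.outer(a, rng.normal(size=4096)) \
            + rng.normal(size=(8, 4096)) + 1j * rng.normal(size=(8, 4096))
        X_clean = project_out_interference(X, rank=1)

    Grouping closely spaced elements keeps the RFI highly correlated within each subarray, so the interference subspace is estimated accurately even when the interferer decorrelates across long baselines.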

  18. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation, we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles, which leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient, we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images; updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map, we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry, we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which facilitates finding stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps; furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
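
    The per-pixel depth fusion described above is, in essence, a scalar Kalman-style update: each micro-image contributes a depth hypothesis with a variance, and hypotheses are merged by inverse-variance weighting. A compact sketch with hypothetical numbers:

        def fuse_depth(d1, var1, d2, var2):
            """Merge two virtual-depth hypotheses for the same pixel using
            inverse-variance weighting, as in a scalar Kalman update."""
            k = var1 / (var1 + var2)     # gain favors the lower-variance estimate
            return d1 + k * (d2 - d1), (1.0 - k) * var1

        # A confident estimate refined by a noisier second observation.
        d, var = fuse_depth(2.40, 0.04, 2.55, 0.16)
        print(d, var)   # -> 2.43, 0.032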

  19. High-Resolution Mars Camera Test Image of Moon (Infrared)

    NASA Image and Video Library

    2005-09-13

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The image was taken by the High Resolution Imaging Science Experiment camera on Sept. 8, 2005.

  20. Design of a MATLAB® Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera with a 12... In order to digitally capture images of the distortion in an optical sample, the IQeye 720 video camera with a 12... was used together with the Ann Arbor distortion tester, and the captured images were imported into MATLAB through a computer interface (Figure 8 of the report shows the interface for capturing images seen by the IQeye 720 camera).

  1. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    The project addresses target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The... moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.

  2. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

    This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) is provided for raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.

  3. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  4. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
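    The channel-separation step lends itself to a short sketch: treat the recorded red and blue channels as a linear mixture of the two optical paths and invert the mixing. The 2x2 crosstalk matrix below is an invented illustration, not the paper's measured values; in practice it would be calibrated, e.g., by imaging through each optical path alone.

```python
import numpy as np

# Assumed crosstalk: each recorded channel contains a small leak of the other.
M = np.array([[0.95, 0.08],    # recorded_red  = 0.95*true_red + 0.08*true_blue
              [0.06, 0.93]])   # recorded_blue = 0.06*true_red + 0.93*true_blue
M_inv = np.linalg.inv(M)

def separate_channels(color_image):
    """color_image: HxWx3 RGB array from the high-speed color camera.
    Returns crosstalk-corrected red and blue sub-images, one per optical
    path, ready for processing by a regular stereo-DIC pipeline."""
    red = color_image[..., 0].astype(float)    # assumes RGB channel order
    blue = color_image[..., 2].astype(float)
    stacked = np.stack([red, blue], axis=-1) @ M_inv.T
    return stacked[..., 0], stacked[..., 1]
```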

  5. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.

  6. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
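    The near-field requirement quoted above is the Fraunhofer distance, 2D²/λ. A quick check (assuming roughly 550 nm visible light) reproduces the figures given for a 1 m aperture and the 3.67 m AEOS telescope:

```python
def fraunhofer_distance_km(aperture_m, wavelength_m=550e-9):
    """Near-field extent 2*D^2/lambda, returned in kilometres."""
    return 2 * aperture_m**2 / wavelength_m / 1000.0

print(fraunhofer_distance_km(1.0))    # ~3636 km, matching "about 3,500 km"
print(fraunhofer_distance_km(3.67))   # ~48980 km, matching "over 46,000 km"
```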

  7. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  8. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. Beforehand, images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud, as sketched below. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.
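    Estimating a camera pose from ground control points of a point cloud is a standard perspective-n-point problem; a minimal sketch with OpenCV follows. The 3D-2D correspondences and the intrinsics are placeholder values for illustration, not data from the experiment.

```python
import cv2
import numpy as np

# 3D ground control points (vehicle coordinate system) and their detected
# 2D locations in one video frame -- placeholder values.
object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0],
                       [0, 0, 2], [2, 0, 2]], dtype=np.float64)
image_pts = np.array([[300, 260], [440, 258], [445, 380],
                      [298, 378], [305, 140], [442, 138]], dtype=np.float64)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
dist = np.zeros(5)             # assume a pre-calibrated, undistorted camera

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)     # rotation vector -> rotation matrix
cam_center = -R.T @ tvec       # camera position in vehicle coordinates
print(ok, cam_center.ravel())
```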

  9. The effect of microchannel plate gain depression on PAPA photon counting cameras

    NASA Astrophysics Data System (ADS)

    Sams, Bruce J., III

    1991-03-01

    PAPA (precision analog photon address) cameras are photon counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by setting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has a highly concentrated bright spot. This paper describes how to optimize PAPA camera electronics, and describes six techniques which can avoid or minimize addressing errors.

  10. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.

  11. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  12. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  13. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and the right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained based on a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems does not exceed 5%, the design requirements are considered to have been achieved. This method can be used effectively and paves the way for imaging consistency testing of binocular cameras.
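    The consistency metric itself, the standard deviation σ of the grayscale difference D(x, y) between corresponding left and right pixels, is simple to compute; a minimal sketch (the full-scale normalization is our choice, so that the 3σ ≤ 5% criterion applies directly):

```python
import numpy as np

def imaging_consistency(left_img, right_img):
    """Return the grayscale difference map D(x, y) and its standard
    deviation sigma, expressed as a fraction of full scale (assumes
    8-bit images) so the '3*sigma <= 5%' criterion applies directly."""
    D = (left_img.astype(float) - right_img.astype(float)) / 255.0
    return D, D.std()

# Acceptance check per the criterion quoted in the abstract:
# D, sigma = imaging_consistency(left, right); passed = 3 * sigma <= 0.05
```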

  14. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated "dark side" of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  15. Imaging of breast cancer with mid- and long-wave infrared camera.

    PubMed

    Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R

    2008-01-01

    In this novel study the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed, the data for frequency analysis were collected from dynamic IR images by pixel-based analysis, and a selectively windowed regional analysis was carried out on each image, based on the angiogenesis and nitric oxide production of cancer tissue causing vasomotor and cardiogenic frequency differences compared to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.

  16. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  17. Webcam network and image database for studies of phenological changes of vegetation and snow cover in Finland, image time series from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Linkosalmi, Maiju; Melih Tanis, Cemal; Tuovinen, Juha-Pekka; Nadir Arslan, Ali

    2018-01-01

    In recent years, monitoring of the status of ecosystems using low-cost web (IP) or time lapse cameras has received wide interest. With broad spatial coverage and high temporal resolution, networked cameras can provide information about snow cover and vegetation status, serve as ground truths to Earth observations and be useful for gap-filling of cloudy areas in Earth observation time series. Networked cameras can also play an important role in supplementing laborious phenological field surveys and citizen science projects, which also suffer from observer-dependent observation bias. We established a network of digital surveillance cameras for automated monitoring of phenological activity of vegetation and snow cover in the boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. Here, we document the network, basic camera information and access to images in the permanent data repository (http://www.zenodo.org/communities/phenology_camera/). Individual DOI-referenced image time series consist of half-hourly images collected between 2014 and 2016 (https://doi.org/10.5281/zenodo.1066862). Additionally, we present an example of a colour index time series derived from images from two contrasting sites.
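    The abstract does not name the colour index used; a widely used index in phenology camera networks is the green chromatic coordinate, GCC = G/(R+G+B), averaged over a fixed region of interest in each image. A generic sketch of that computation:

```python
import numpy as np

def green_chromatic_coordinate(image, roi=None):
    """image: HxWx3 RGB array; roi: optional (row_slice, col_slice) naming a
    fixed vegetation region of interest. Returns mean G/(R+G+B) over the ROI,
    a per-image value that forms the phenological time series."""
    img = image.astype(float)
    if roi is not None:
        img = img[roi]
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    total = r + g + b
    total[total == 0] = 1.0          # avoid division by zero in dark pixels
    return float(np.mean(g / total))
```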

  18. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Secondly, several pivotal designs of the imaging system are described, such as the design of the focal plane module, video signal processing, the controller of the imaging system, synchronous photography of the forward, nadir and backward cameras, and the line-matrix CCD of the nadir camera. Finally, the test results of the LMCCD mapping camera imaging system are presented. The results are as follows: the precision of synchronous photography of the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR tested in the laboratory is better than 95 under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3) for each CCD image; and the temperature of the focal plane module is controlled to below 30 °C over a working period of 15 minutes. These results satisfy the requirements for synchronous photography, focal plane module temperature control and SNR, guaranteeing the precision of satellite photogrammetry.

  19. OVMS-plus at the LBT: disturbance compensation simplified

    NASA Astrophysics Data System (ADS)

    Böhm, Michael; Pott, Jörg-Uwe; Borelli, José; Hinz, Phil; Defrère, Denis; Downey, Elwood; Hill, John; Summers, Kellee; Conrad, Al; Kürster, Martin; Herbst, Tom; Sawodny, Oliver

    2016-07-01

    In this paper we briefly revisit the optical vibration measurement system (OVMS) at the Large Binocular Telescope (LBT) and how its measurements are used for disturbance compensation, particularly for the LBT Interferometer (LBTI) and the LBT Interferometric Camera for Near-Infrared and Visible Adaptive Interferometry for Astronomy (LINC-NIRVANA). We present the now-centralized software architecture, called OVMS+, on which our approach is based, and illustrate several challenges faced during the implementation phase. Finally, we present measurement results from LBTI proving the effectiveness of the approach and the ability to compensate for a large fraction of the telescope-induced vibrations.

  20. VizieR Online Data Catalog: 1992-1997 binary star speckle measurements (Balega+, 1999)

    NASA Astrophysics Data System (ADS)

    Balega, I. I.; Balega, Y. Y.; Maksimov, A. F.; Pluzhnik, E. A.; Shkhagosheva, Z. U.; Vasyuk, V. A.

    2000-11-01

    We present the results of speckle interferometric measurements of binary stars made with the television photon-counting camera at the 6-m Big Azimuthal Telescope (BTA) and 1-m telescope of the Special Astrophysical Observatory (SAO) between August 1992 and May 1997. The data contain 89 observations of 62 star systems on the large telescope and 21 on the smaller one. For the 6-m aperture 18 systems remained unresolved. The measured angular separation ranged from 39 mas, two times above the BTA diffraction limit, to 1593 mas. (3 data files).

  1. Binary star speckle measurements during 1992-1997 from the SAO 6-m and 1-m telescopes in Zelenchuk

    NASA Astrophysics Data System (ADS)

    Balega, I. I.; Balega, Y. Y.; Maksimov, A. F.; Pluzhnik, E. A.; Shkhagosheva, Z. U.; Vasyuk, V. A.

    1999-12-01

    We present the results of speckle interferometric measurements of binary stars made with the television photon-counting camera at the 6-m Big Azimuthal Telescope (BTA) and 1-m telescope of the Special Astrophysical Observatory (SAO) between August 1992 and May 1997. The data contain 89 observations of 62 star systems on the large telescope and 21 on the smaller one. For the 6-m aperture 18 systems remained unresolved. The measured angular separation ranged from 39 mas, two times above the BTA diffraction limit, to 1593 mas.

  2. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS: Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  3. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, traffic sign recognition is developed in an originally proposed dual-focal active camera system, in which a telephoto camera serves as an assistant to a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the field of view of the wide angle camera; this image provides enough information for recognition when the accuracy of the traffic sign is low in the wide angle camera image. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle camera and the telephoto camera. In addition, in order to detect traffic signs against complex backgrounds in different lighting conditions, we propose a type of color transformation that is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, a set of traffic sign recognition experiments based on the proposed system is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.
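    The paper's specific light-invariant colour transformation is not reproduced in the abstract; a standard transformation with the same goal is normalized chromaticity, which divides out the overall illumination level so that a sign's colour signature survives shadow and sunlight. A minimal sketch of that stand-in:

```python
import numpy as np

def normalized_chromaticity(image):
    """Map an HxWx3 RGB image to (r, g) chromaticity, r = R/(R+G+B) and
    g = G/(R+G+B). Scaling all three channels by the same illumination
    factor leaves (r, g) unchanged, giving invariance to light level."""
    img = image.astype(float)
    total = img.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0             # guard against all-black pixels
    chrom = img / total
    return chrom[..., 0], chrom[..., 1]  # blue is redundant: b = 1 - r - g
```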

  4. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function.

    PubMed

    Li, Jin; Liu, Zilong

    2017-07-24

    Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, i.e. its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which depends only on the camera itself and is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, is then compensated for, removing the imaging degradation imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case where the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
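    In the frequency domain, constrained least-squares compensation of a known MTF amounts to a regularized inverse filter. The sketch below illustrates the idea; the Gaussian MTF stand-in and the regularization constant are assumptions, not the paper's measured IMTF.

```python
import numpy as np

def cls_restore(blurred, mtf, k=1e-2):
    """Restore an image given the system MTF (same shape as the image's
    FFT), using H* / (|H|^2 + k) as a constrained least-squares filter;
    k trades residual blur against noise amplification."""
    H = mtf.astype(complex)
    F = np.fft.fft2(blurred)
    restored = np.conj(H) * F / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(restored))

# Example with an assumed Gaussian MTF for a 256x256 image:
fy = np.fft.fftfreq(256)[:, None]
fx = np.fft.fftfreq(256)[None, :]
mtf = np.exp(-(fx**2 + fy**2) / (2 * 0.15**2))
# restored = cls_restore(blurred_image, mtf)
```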

  5. Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.

    PubMed

    Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael

    2017-12-01

    Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras in 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together, we formed images closer to full hemispheres and estimated from them CO and Site Factors. We compared the same parameters obtained from different cameras and estimated generalized linear mixed models (GLMMs) between them. Total canopy gap estimated from the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional camera images, although with relatively small absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be directly used as they are for ecological studies, or converted with specific models for a better comparison to traditional cameras.
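    The automatic thresholding step that separates sky from canopy pixels can be sketched with Otsu's method on the blue channel, the channel conventionally used in hemispherical photography for its sky/canopy contrast; the abstract does not specify the authors' exact algorithm, and the sketch assumes scikit-image is available.

```python
import numpy as np
from skimage.filters import threshold_otsu

def canopy_gap_fraction(rgb_image):
    """Fraction of pixels classified as sky (gap) in a hemispherical photo.
    Uses the blue channel and an Otsu threshold; in practice, mask out
    pixels outside the circular fisheye image area before calling this."""
    blue = rgb_image[..., 2].astype(float)
    t = threshold_otsu(blue)
    sky = blue > t
    return float(sky.mean())
```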

  6. Comparison of Interferometric Time-Series Analysis Techniques with Implications for Future Mission Design

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.

    2006-12-01

    Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline exhibit substantial geometrical decorrelation for distributed targets. Short baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit the geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed-scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase-per-pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-band) substantially improve the ability to unwrap the phase correctly by directly reducing both the interferometric phase amplitude and temporal decorrelation.
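    The short-baseline least-squares combination mentioned above can be sketched per pixel: each unwrapped interferogram phase is modeled as the sum of mean velocities over the acquisition intervals it spans, and the overdetermined linear system is solved for those velocities. All dates, pairs, and phases below are invented for illustration.

```python
import numpy as np

wavelength = 0.056                      # C-band, metres (assumed)
t = np.array([0, 35, 70, 105, 140])     # acquisition times, days (invented)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]  # short-baseline pairs
phase = np.array([0.8, 0.7, 1.6, 0.9, 0.8, 1.8])          # unwrapped, radians

# Design matrix: row k sums the interval velocities spanned by pair k.
dt = np.diff(t)                         # interval lengths, days
A = np.zeros((len(pairs), len(dt)))
for k, (i, j) in enumerate(pairs):
    A[k, i:j] = dt[i:j]

# Least-squares velocity per interval (rad/day), converted and integrated.
v_rad, *_ = np.linalg.lstsq(A, phase, rcond=None)
v_m = v_rad * wavelength / (4 * np.pi)  # radians -> metres of LOS motion/day
history = np.concatenate([[0.0], np.cumsum(v_m * dt)])
print(history)                          # cumulative LOS deformation history
```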

  7. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
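    The described pipeline maps directly onto standard OpenCV calls; a minimal two-image sketch follows. SURF requires the opencv-contrib build (it may be absent from builds without the nonfree modules), and the simple paste-over composite stands in for the paper's boundary-resampling blend.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    # Detect SURF key points and descriptors in both frames.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    k1, d1 = surf.detectAndCompute(img1, None)
    k2, d2 = surf.detectAndCompute(img2, None)

    # Match descriptors and keep the best correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robust homography (RANSAC) relating the overlapping views.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img1 into img2's frame, then paste img2 over the canvas.
    h, w = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H, (w * 2, h))
    canvas[0:h, 0:w] = img2
    return canvas
```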

  8. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located at a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream from the sediment trap, in closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully; it is activated by a rain gauge when there is one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be made. We decided to use IMGRAFT (an open-source image georectification toolbox), which can correct distorted images using reference points and camera location and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because the images are already orthorectified. During the monitoring program (running since 2011) we recorded three debris flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully was in operation in 2014 and managed to record granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.

  9. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as a power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  10. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  11. A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  12. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    PubMed

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudintermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  13. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  14. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR solves the problems of long revisit times and low-resolution images. Owing to its flexibility, accuracy and speed in obtaining abundant information, airborne repeat-pass InSAR is valuable for deformation monitoring of shallow ground. In order to obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from master and slave images, accurate co-registration must be ensured. Because of the side-looking geometry, repeated observation paths and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most obvious in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method based on the Range-Doppler (RD) imaging model is proposed. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method achieves superior co-registration accuracy.

  15. System for interferometric distortion measurements that define an optical path

    DOEpatents

    Bokor, Jeffrey; Naulleau, Patrick

    2003-05-06

    An improved phase-shifting point diffraction interferometer can measure both distortion and wavefront aberration. In the preferred embodiment, the interferometer employs an object-plane pinhole array comprising a plurality of object pinholes located between the test optic and the source of electromagnetic radiation, and an image-plane mask array that is positioned in the image plane of the test optic. The image-plane mask array comprises a plurality of test windows and corresponding reference pinholes, wherein the positions of the plurality of pinholes in the object-plane pinhole array register with those of the plurality of test windows in the image-plane mask array. Electromagnetic radiation is directed into a first pinhole of the object-plane pinhole array, thereby creating a corresponding test beam image on the image-plane mask array. Where distortion is relatively small, it can be directly measured interferometrically by measuring the separation distance and relative orientation between the test beam and the reference-beam pinhole, and repeating this process for at least one other pinhole of the plurality of pinholes of the object-plane pinhole array. Where the distortion is relatively large, it can be measured by using interferometry to direct the motion of a stage supporting the image-plane mask array, and then using the final stage motion as a measure of the distortion.

  16. The Fresnel interferometric imager

    NASA Astrophysics Data System (ADS)

    Koechlin, Laurent; Serre, Denis; Deba, Paul; Pelló, Roser; Peillon, Christelle; Duchon, Paul; Gomez de Castro, Ana Ines; Karovska, Margarita; Désert, Jean-Michel; Ehrenreich, David; Hebrard, Guillaume; Lecavelier Des Etangs, Alain; Ferlet, Roger; Sing, David; Vidal-Madjar, Alfred

    2009-03-01

    The Fresnel Interferometric Imager has been proposed to the European Space Agency (ESA) Cosmic Vision plan as a class L mission. This mission addresses several themes of the CV Plan: Exoplanet study, Matter in extreme conditions, and The Universe taking shape. This paper is an abridged version of the original ESA proposal. We have removed most of the technical and financial issues to concentrate on the instrumental design and astrophysical missions. The instrument proposed is an ultra-lightweight telescope featuring a novel optical concept based on diffraction focussing. It yields high dynamic range images, while releasing constraints on positioning and manufacturing of the main optical elements. This concept should open the way to very large apertures in space. In this two-spacecraft formation-flying instrument, one spacecraft holds the focussing element, the Fresnel interferometric array; the other holds the field optics, focal instrumentation, and detectors. The Fresnel array proposed here is a 3.6 × 3.6 m square opaque foil punched with 10⁵ to 10⁶ void "subapertures". Focusing is achieved with no other optical element: the shape and positioning of the subapertures (holes in the foil) are responsible for beam combining by diffraction, and 5% to 10% of the total incident light ends up in a sharp focus. The consequence of this high number of subapertures is high dynamic range images. In addition, as it uses only a combination of vacuum and opaque material, this focussing method is potentially efficient over a very broad wavelength domain. The focal length of such diffractive focussing devices is wavelength dependent; however, this can be corrected. We have tested optically the efficiency of the chromatism correction on artificial sources (500 < λ < 750 nm): the images are diffraction limited, and the dynamic range measured on an artificial double source reaches 6.2 × 10⁻⁶. We have also validated numerical simulation algorithms for larger Fresnel interferometric arrays. These simulations yield a dynamic range (rejection factor) close to 10⁻⁸ for arrays such as the 3.6 m one we propose. A dynamic range of 10⁻⁸ allows detection of objects at contrasts as high as 10⁻⁹ in most of the field. The astrophysical applications cover many objects in the IR, visible and UV domains. Examples are presented, taking advantage of the high angular resolution and dynamic range capabilities of this concept.
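    The wavelength-dependent focal length mentioned above is the defining property of Fresnel diffractive focusing: a zone pattern with radii r_n = sqrt(n·λ·f) focuses wavelength λ at distance f, so f scales as 1/λ. A quick illustration (the design focal length and wavelength below are our assumptions, not the proposal's values):

```python
import numpy as np

f_design = 8e3        # assumed design focal length, metres
lam_design = 600e-9   # assumed design wavelength, metres

# Radii of the first few Fresnel zones for this design, r_n = sqrt(n*lam*f).
n = np.arange(1, 6)
r = np.sqrt(n * lam_design * f_design)
print(r)              # ~0.069 m, 0.098 m, ... for the innermost zones

# Chromatism: the same zone pattern focuses another wavelength at f' = f*lam0/lam.
lam = 500e-9
print(f_design * lam_design / lam)   # ~9600 m focus at 500 nm
```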

  17. Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting

    PubMed Central

    Yun, S. H.; Tearney, G. J.; de Boer, J. F.; Bouma, B. E.

    2009-01-01

    A novel technique using an acousto-optic frequency shifter in optical frequency domain imaging (OFDI) is presented. The frequency shift eliminates the ambiguity between positive and negative differential delays, effectively doubling the interferometric ranging depth while avoiding image cross-talk. A signal processing algorithm is demonstrated to accommodate nonlinearity in the tuning slope of the wavelength-swept OFDI laser source. PMID:19484034

  19. Demonstration of a Corner-cube-interferometer LWIR Hyperspectral Imager

    NASA Astrophysics Data System (ADS)

    Renhorn, Ingmar G. E.; Svensson, Thomas; Cronström, Staffan; Hallberg, Tomas; Persson, Rolf; Lindell, Roland; Boreman, Glenn D.

    2010-01-01

    An interferometric long-wavelength infrared (LWIR) hyperspectral imager is demonstrated, based on a Michelson corner-cube interferometer. This class of system is inherently mechanically robust, and should have advantages over Sagnac-interferometer systems in terms of relaxed beamsplitter-coating specifications, and wider unvignetted field of view. Preliminary performance data from the laboratory prototype system are provided regarding imaging, spectral resolution, and fidelity of acquired spectra.

  20. The Age of Planet Host κ Andromedae Based on Interferometric Observations

    NASA Astrophysics Data System (ADS)

    Jones, Jeremy; White, Russel J.; Quinn, Samuel N.; Baines, Ellyn K.; Boyajian, Tabetha S.; Ireland, Michael; CHARA Team

    2016-01-01

    We present CHARA Array interferometric observations, obtained with the PAVO beam combiner in the optical (~750 nm), of κ Andromedae. This nearby (51.6 pc) B9/A0V star hosts a directly-imaged low-mass companion. Observations made at multiple orientations show the star to be oblate (~15%), consistent with its large projected rotational velocity (v sin i = 161.6 ± 22.2 km s⁻¹). The interferometric observations, combined with photometry and the v sin i, are used to constrain an oblate-star model of κ And, enabling us to determine its fundamental properties (e.g., average radius, bolometric luminosity, and equatorial velocity). These stellar properties are compared to the predictions of MESA evolution models to determine an age and mass for the star. The best-fit model favors a young age for the system (< 100 Myr), which implies that κ And b has a mass around the limit separating planets and brown dwarfs.

  1. Guide-star-based computational adaptive optics for broadband interferometric tomography

    PubMed Central

    Adie, Steven G.; Shemonski, Nathan D.; Graf, Benedikt W.; Ahmad, Adeel; Scott Carney, P.; Boppart, Stephen A.

    2012-01-01

    We present a method for the numerical correction of optical aberrations based on indirect sensing of the scattered wavefront from point-like scatterers (“guide stars”) within a three-dimensional broadband interferometric tomogram. This method enables the correction of high-order monochromatic and chromatic aberrations utilizing guide stars that are revealed after numerical compensation of defocus and low-order aberrations of the optical system. Guide-star-based aberration correction in a silicone phantom with sparse sub-resolution-sized scatterers demonstrates improvement of resolution and signal-to-noise ratio over a large isotome. Results in highly scattering muscle tissue showed improved resolution of fine structure over an extended volume. Guide-star-based computational adaptive optics expands upon the use of image metrics for numerically optimizing the aberration correction in broadband interferometric tomography, and is analogous to phase-conjugation and time-reversal methods for focusing in turbid media. PMID:23284179

  2. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production, where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
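
    As a rough illustration of this calibration idea, the following Python sketch fits an equisolid-angle fisheye model, r = 2f·sin(θ/2), to matched pairs of sun angles and sun pixel positions by least squares. It uses synthetic data and ignores lens distortion and camera tilt, so it is a minimal sketch of the approach rather than the paper's full camera model:

    # Fit an equisolid fisheye model to (sun angle, sun pixel) pairs.
    # Synthetic data; real use would take sun angles from a solar-position
    # algorithm and pixel positions from sun detection in the image.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    theta = rng.uniform(0.1, 1.4, 200)          # sun zenith angles (rad)
    phi = rng.uniform(0, 2 * np.pi, 200)        # sun azimuth angles (rad)

    true = dict(f=600.0, cx=960.0, cy=960.0)    # "unknown" camera parameters
    r = 2 * true["f"] * np.sin(theta / 2)       # equisolid radial projection
    u = true["cx"] + r * np.cos(phi) + rng.normal(0, 0.5, theta.size)
    v = true["cy"] + r * np.sin(phi) + rng.normal(0, 0.5, theta.size)

    def residuals(p):
        f, cx, cy = p
        rm = 2 * f * np.sin(theta / 2)
        return np.concatenate([cx + rm * np.cos(phi) - u,
                               cy + rm * np.sin(phi) - v])

    fit = least_squares(residuals, x0=[500.0, 900.0, 900.0])
    print("f, cx, cy =", fit.x)                 # should recover ~600, 960, 960
    print("RMSE (px):", np.sqrt(np.mean(fit.fun**2)))  # ~ the injected 0.5 px noise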

  3. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  4. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a median image of the sky background for each IUE camera.

  5. A low-cost dual-camera imaging system for aerial applicators

    USDA-ARS?s Scientific Manuscript database

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  6. Left Panorama of Spirit's Landing Site

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.

  7. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of the images (left- or right-eye image), and video signal from that image is delayed while camera translates to position where it acquires other image. Length of delay chosen so both images displayed simultaneously, or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also usable to generate and display imagery for public education and general information, and possibly for medical purposes.

  8. Non-destructive evaluation of impact damage on carbon fiber laminates: Comparison between ESPI and Shearography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pagliarulo, V., E-mail: v.pagliarulo@isasi.cnr.it; Ferraro, P.; Lopresto, V.

    2016-06-28

    The aim of this paper is to investigate the ability of two different interferometric NDT techniques to detect and evaluate barely visible impact damage on composite laminates. Interferometric techniques allow the investigation of large and complex structures. Electronic Speckle Pattern Interferometry (ESPI) works through real-time surface illumination by a visible laser (e.g., 532 nm), and its range and accuracy are related to the wavelength. While ESPI works with the “classic” holographic configuration, that is, a reference beam and an object beam, Shearography uses the object image itself as reference: two object images are overlapped, creating a shear image. This makes the method much less sensitive to external vibrations and noise, but with one difference: it measures the first derivative of the displacement. In this work, different specimens impacted at different energies have been investigated by means of both methods. The delaminated areas have been estimated and compared.

  9. Interferometric superlocalization of two incoherent optical point sources.

    PubMed

    Nair, Ranjith; Tsang, Mankei

    2016-02-22

    A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
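
    The following numerical sketch illustrates the inversion-interferometry idea behind SLIVER under simple assumptions (a 1-D Gaussian amplitude PSF with the centroid on the inversion axis): the mean fraction of light exiting the antisymmetric ("inverted") port remains sensitive to the separation d even well below the Rayleigh scale. Illustration only; the paper's quantitative claims rest on Cramér-Rao analysis and photon-counting statistics:

    # Energy fraction in the antisymmetric (image-inverted) port for two
    # incoherent sources at +/- d/2 imaged with a Gaussian amplitude PSF.
    import numpy as np

    x = np.linspace(-10, 10, 4001)               # symmetric grid about the axis
    dx = x[1] - x[0]
    sigma = 1.0                                  # PSF width (Rayleigh-scale unit)

    def inverted_port_fraction(d):
        frac = 0.0
        for s in (+d / 2, -d / 2):               # incoherent sum over both sources
            psi = np.exp(-(x - s) ** 2 / (4 * sigma**2))
            psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize field energy
            psi_anti = 0.5 * (psi - psi[::-1])   # antisymmetric part (inversion)
            frac += 0.5 * np.sum(psi_anti**2) * dx
        return frac

    for d in (0.05, 0.1, 0.2, 0.5, 1.0):
        print(f"d = {d:4.2f} -> inverted-port fraction = {inverted_port_fraction(d):.6f}")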

  10. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp has many advantages compared with cameras of the same generation: its multiple lenses are exposed almost at the same time, and it has no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  11. Radiation characteristics and implosion dynamics of tungsten wire array Z-pinches on the YANG accelerator

    NASA Astrophysics Data System (ADS)

    Huang, Xian-Bin; Yang, Li-Bing; Li, Jing; Zhou, Shao-Tong; Ren, Xiao-Dong; Zhang, Si-Qun; Dan, Jia-Kun; Cai, Hong-Chun; Duan, Shu-Chao; Chen, Guang-Hua; Zhang, Zheng-Wei; Ouyang, Kai; Li, Jun; Zhang, Zhao-Hui; Zhou, Rong-Guo; Wang, Gui-Lin

    2012-05-01

    We investigated the radiation characteristics and implosion dynamics of low-wire-number cylindrical tungsten wire array Z-pinches on the YANG accelerator, with a peak current of 0.8-1.1 MA and a rise time of ~90 ns. The arrays are made up of 8-32 wires of 5 μm diameter, with array diameters of 6 or 10 mm and a height of 15 mm. The highest X-ray power obtained in the experiments was about 0.37 TW, with a total radiation energy of ~13 kJ and an energy conversion efficiency of ~9% (24 wires of 5 μm diameter, 6 mm array diameter). Most of the X-ray emission from the tungsten Z-pinch plasmas was distributed in the spectral band of 100-600 eV, peaked at 250 and 375 eV. The dominant wavelengths of the wire ablation and the magneto-Rayleigh-Taylor instability were found and analyzed by measuring time-gated self-emission and laser interferometric images. By analyzing the implosion trajectories obtained with an optical streak camera, the run-in velocities of the Z-pinch plasmas at the end of the implosion phase were determined to be about (1.3-2.1) × 10⁷ cm/s.

  12. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    PubMed

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in rapid on-site detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  13. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing, and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color-shading evaluation of displays and show that it achieves a color accuracy of ΔE < 0.01.

  14. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
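
    A simplified sketch of this feature construction is given below. It builds a thresholded horizontal difference array from a single image channel and estimates its first-order Markov transition probability matrix as a feature vector; the paper itself works on the difference JPEG 2-D arrays of the Y and Cb components with four directional processes, and feeds the features to multi-class SVMs (the threshold value here is an illustrative assumption):

    # Thresholded difference array -> Markov transition probability matrix.
    import numpy as np

    T = 4                                          # illustrative threshold

    def tpm_features(img):
        d = img[:, :-1].astype(int) - img[:, 1:].astype(int)    # horizontal diffs
        d = np.clip(d, -T, T)                                    # thresholding step
        pairs = np.stack([d[:, :-1].ravel(), d[:, 1:].ravel()])  # (current, next)
        tpm = np.zeros((2 * T + 1, 2 * T + 1))
        for i, j in pairs.T:                        # count state transitions
            tpm[i + T, j + T] += 1
        rows = tpm.sum(axis=1, keepdims=True)
        tpm = np.divide(tpm, rows, out=np.zeros_like(tpm), where=rows > 0)
        return tpm.ravel()                          # (2T+1)^2 = 81 features

    img = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
    print(tpm_features(img).shape)                  # (81,) -> input to an SVM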

  15. Interferometric synthetic aperture radar (InSAR)—its past, present and future

    USGS Publications Warehouse

    Lu, Zhong; Kwoun, Oh-Ig; Rykhus, R.P.

    2007-01-01

    Very simply, interferometric synthetic aperture radar (InSAR) involves the use of two or more synthetic aperture radar (SAR) images of the same area to extract landscape topography and its deformation patterns. A SAR system transmits electromagnetic waves at a wavelength that can range from a few millimeters to tens of centimeters and therefore can operate day and night under all-weather conditions. Using the SAR processing technique (Curlander and McDonough, 1991), both the intensity and phase of the reflected (or backscattered) radar signal of each ground resolution element (a few meters to tens of meters) can be calculated in the form of a complex-valued SAR image that represents the reflectivity of the ground surface. The amplitude or intensity of the SAR image is determined primarily by terrain slope, surface roughness, and dielectric constants, whereas the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets. InSAR imaging utilizes the interaction of electromagnetic waves, referred to as interference, to measure precise distances between the satellite antenna and ground resolution elements and thereby derive landscape topography and its subtle changes in elevation.
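
    As a minimal illustration of interferogram formation, the sketch below multiplies one complex SAR image by the conjugate of a second, coregistered image; the resulting wrapped phase carries the path-length (topography or deformation) signal. Real InSAR additionally requires coregistration, phase flattening, filtering, and unwrapping, all omitted here; the data are synthetic:

    # Interferogram from two coregistered complex SAR images.
    import numpy as np

    rng = np.random.default_rng(2)
    shape = (256, 256)
    amp = rng.rayleigh(1.0, shape)                      # common scene amplitude
    topo_phase = np.linspace(0, 6 * np.pi, shape[1])    # ramp standing in for topography

    s1 = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, shape))   # pass 1
    s2 = s1 * np.exp(-1j * topo_phase[None, :])                # pass 2: extra path delay

    interferogram = s1 * np.conj(s2)
    phase = np.angle(interferogram)                     # wrapped phase in (-pi, pi]
    print("fringes across scene:", topo_phase[-1] / (2 * np.pi))   # 3.0
    print("wrapped-phase range:", float(phase.min()), float(phase.max()))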

  16. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
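
    The sketch below shows one forward-backward (proximal-splitting) building block of the kind these algorithms parallelize: an ISTA-style iteration with an ℓ1 prox, using a toy masked-Fourier operator standing in for visibility sampling. It assumes sparsity directly in the image domain for brevity and is not the paper's distributed algorithm:

    # Forward-backward iterations for a toy compressive radio-imaging problem.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 64
    x_true = np.zeros((n, n))
    x_true[rng.integers(0, n, 25), rng.integers(0, n, 25)] = 1.0  # point sources
    mask = rng.random((n, n)) < 0.3                      # sampled uv cells
    y = mask * np.fft.fft2(x_true) / n                   # noiseless "visibilities"

    def A(x):  return mask * np.fft.fft2(x) / n          # measurement operator
    def At(v): return np.real(np.fft.ifft2(mask * v)) * n  # its adjoint

    lam, step = 0.01, 1.0                                # ||A|| <= 1 with this scaling
    x = np.zeros((n, n))
    for _ in range(200):
        grad = At(A(x) - y)                              # gradient of data fidelity
        z = x - step * grad                              # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # backward: l1 prox
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))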

  17. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit above it. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie-point data is required.

  18. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  19. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier for the nonexpert using ISAM imaging is also significantly lowered.

  20. Interferometric rotation sensor

    NASA Technical Reports Server (NTRS)

    Walsh, T. M. (Inventor)

    1973-01-01

    An interferometric rotation sensor and control system is provided which includes a compound prism interferometer and an associated direction control system. Light entering the interferometer is split into two paths with the light in the respective paths being reflected an unequal number of times, and then being recombined at an exit aperture in phase differing relationships. Incoming light is deviated from the optical axis of the device by an angle, alpha. The angle causes a similar displacement of the two component images at the exit aperture which results in a fringe pattern. Fringe numbers are directly related to angle alpha. Various control systems of the interferometer are given.

  1. Space Interferometry Mission: Measuring the Universe

    NASA Technical Reports Server (NTRS)

    Marr, James; Dallas, Saterios; Laskin, Robert; Unwin, Stephen; Yu, Jeffrey

    1991-01-01

    The Space Interferometry Mission (SIM) will be the NASA Origins Program's first space based long baseline interferometric observatory. SIM will use a 10 m Michelson stellar interferometer to provide 4 microarcsecond precision absolute position measurements of stars down to 20th magnitude over its 5 yr. mission lifetime. SIM will also provide technology demonstrations of synthesis imaging and interferometric nulling. This paper describes the what, why and how of the SIM mission, including an overall mission and system description, science objectives, general description of how SIM makes its measurements, description of the design concepts now under consideration, operations concept, and supporting technology program.

  2. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  3. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.

  4. New Imaging Operation Scheme at VLTI

    NASA Astrophysics Data System (ADS)

    Haubois, Xavier

    2018-04-01

    After PIONIER and GRAVITY, MATISSE will soon complete the set of 4 telescope beam combiners at VLTI. Together with recent developments in the image reconstruction algorithms, the VLTI aims to develop its operation scheme to allow optimized and adaptive UV plane coverage. The combination of spectro-imaging instruments, optimized operation framework and image reconstruction algorithms should lead to an increase of the reliability and quantity of the interferometric images. In this contribution, I will present the status of this new scheme as well as possible synergies with other instruments.

  5. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced tabletop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had a sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images and thereby afford the best possible opportunity for reading by a remotely located specialist.

  6. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD-based camera. The AIS1 camera system was developed at Photometrics Ltd. in Tucson, Arizona as part of Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed, and complete schematics and source code listings are provided.

  7. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the location technique of beacon photogrammetry, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters in landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter, and the images were output to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems, and camera coordinate systems were established. According to the ideal pinhole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
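
    As a rough stand-in for the linear pose solution described above, the sketch below recovers the rotation and translation between the target (beacon) and camera frames from four beacon-to-pixel correspondences with OpenCV's solvePnP under an ideal pinhole model. The beacon layout, camera intrinsics, and ground-truth pose are illustrative assumptions:

    # Pose from four beacons under a pinhole model, via cv2.solvePnP.
    import numpy as np
    import cv2

    beacons = np.array([[-2, 0, -1], [2, 0, -1],
                        [-2, 0,  1], [2, 0,  1]], dtype=np.float64)   # m, target frame
    K = np.array([[1500.0, 0, 640], [0, 1500.0, 512], [0, 0, 1]])     # assumed intrinsics

    # Synthesize pixel observations from a known ground-truth pose.
    rvec_true = np.array([[0.05], [-0.10], [0.02]])
    tvec_true = np.array([[0.30], [-0.20], [20.0]])                   # ~20 m stand-off
    pixels, _ = cv2.projectPoints(beacons, rvec_true, tvec_true, K, None)

    ok, rvec, tvec = cv2.solvePnP(beacons, pixels, K, None)
    R, _ = cv2.Rodrigues(rvec)                                        # rotation matrix
    print(ok, np.round(tvec.ravel(), 3))                              # ~ (0.3, -0.2, 20.0)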

  8. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
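
    A minimal sketch of the described pipeline follows: subtract the pre-laser frame from the laser-on frame in each camera to isolate the spot, take the spot centroids, and convert the horizontal disparity to range with the standard stereo relation z = f·B/d. The images, focal length, and baseline below are synthetic assumptions:

    # Laser-spot isolation by frame differencing, then stereo ranging.
    import numpy as np

    f_px, baseline_m = 800.0, 0.12               # focal length (px), baseline (m)

    def spot_centroid(before, after, thresh=50):
        diff = after.astype(int) - before.astype(int)
        ys, xs = np.nonzero(diff > thresh)       # pixels unique to the laser spot
        return xs.mean(), ys.mean()

    left_bg = np.full((480, 640), 20, np.uint8); right_bg = left_bg.copy()
    left_on, right_on = left_bg.copy(), right_bg.copy()
    left_on[240, 320] = 255                      # laser spot in left image
    right_on[240, 300] = 255                     # same spot, shifted in right image

    xl, _ = spot_centroid(left_bg, left_on)
    xr, _ = spot_centroid(right_bg, right_on)
    disparity = xl - xr                          # 20 px
    print("range:", f_px * baseline_m / disparity, "m")   # 800*0.12/20 = 4.8 m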

  9. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscopes use one camera to capture images of the intestinal surface. They can observe an abnormal point but cannot provide exact information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: a 3D photographic capsule endoscope system. The system uses three cameras to capture images in real time. Its advantage is that the viewing range increases by up to 2.99 times with respect to the two-camera system. Together with a 3D monitor, the system provides exact information on symptomatic points, helping doctors diagnose disease.

  10. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the differences between these two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field to tomographic camera pixel ratio (LTPR), the particle seeding density, and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  11. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High-dynamic-range (HDR) imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Currently, methods that synthesize HDR images from sequences of differently exposed images cannot adapt to dynamic scenes: they fail to handle moving targets, resulting in ghosting artifacts. Therefore, a new HDR image acquisition method based on a multiplex camera system is proposed. First, differently exposed image sequences were captured with the camera array; the deviation between images was obtained using a derivative optical flow method based on color gradients, and the images were aligned. Then, the HDR image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and was applied to generate an HDR image. The experiments show that the proposed method can effectively obtain HDR images of dynamic scenes and achieves good results.
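
    The sketch below illustrates the fusion step with an assumed gamma-type inverse response and a hat-shaped weighting that favors well-exposed pixels. The paper's weighting additionally incorporates the optical-flow deviation between images to suppress ghosting, which is omitted here:

    # Weighted HDR merge in the (relative) radiance domain.
    import numpy as np

    def inverse_response(img8, gamma=2.2):
        return (img8 / 255.0) ** gamma            # assumed camera response model

    def merge_hdr(images8, exposure_times):
        num = np.zeros(images8[0].shape, float)
        den = np.zeros_like(num)
        for img, t in zip(images8, exposure_times):
            w = 1.0 - np.abs(img / 255.0 - 0.5) * 2          # hat weight in [0, 1]
            num += w * inverse_response(img) / t             # radiance estimate
            den += w
        return num / np.maximum(den, 1e-6)

    rng = np.random.default_rng(4)
    scene = rng.random((120, 160))
    shots = [np.clip(255 * (scene * t) ** (1 / 2.2), 0, 255) for t in (0.25, 1.0, 4.0)]
    hdr = merge_hdr([s.astype(np.uint8) for s in shots], [0.25, 1.0, 4.0])
    print(hdr.shape, float(hdr.min()), float(hdr.max()))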

  12. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.

    PubMed

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-03-23

    Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination changes and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  13. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    PubMed Central

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-01-01

    Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination changes and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690
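
    For concreteness, the sketch below defines a tiny fully convolutional network that maps an RGB frame to a per-pixel shadow probability map. The architecture is an illustrative assumption, not the network from the paper, and it is shown untrained:

    # Minimal fully convolutional shadow-probability network (PyTorch).
    import torch
    import torch.nn as nn

    class ShadowNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),            # 1x1 conv -> shadow logit per pixel
            )

        def forward(self, x):
            return torch.sigmoid(self.features(x))   # shadow probability map

    net = ShadowNet()
    frame = torch.rand(1, 3, 240, 320)                # dummy surveillance frame
    prob = net(frame)
    print(prob.shape)                                 # torch.Size([1, 1, 240, 320])
    # Training would minimize nn.BCELoss() against ground-truth shadow masks.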

  14. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darne, C; Robertson, D; Alsanea, F

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. The selection of fixed-focal-length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  15. Doppler synthetic aperture radar interferometry: a novel SAR interferometry for height mapping using ultra-narrowband waveforms

    NASA Astrophysics Data System (ADS)

    Yazıcı, Birsen; Son, Il-Young; Cagri Yanik, H.

    2018-05-01

    This paper introduces a novel radar interferometry based on the Doppler synthetic aperture radar (Doppler-SAR) paradigm. Conventional SAR interferometry relies on wideband transmitted waveforms to obtain high range resolution, and the topography of a surface is directly related to the range difference between two antennas configured at different positions. Doppler-SAR is an imaging modality that uses ultra-narrowband continuous waves (UNCW) and takes advantage of the high-resolution Doppler information provided by UNCWs to form high-resolution SAR images. We introduce the theory of Doppler-SAR interferometry: we derive an interferometric phase model and develop the equations of height mapping. Unlike conventional SAR interferometry, we show that the topography of a scene is related to the difference in Doppler frequency between two antennas configured at different velocities. Whereas conventional SAR interferometry uses range, Doppler, and Doppler due to interferometric phase in height mapping, Doppler-SAR interferometry uses Doppler, Doppler-rate, and Doppler-rate due to interferometric phase. We demonstrate our theory in numerical simulations. Doppler-SAR interferometry offers the advantages of long-range, robust, environmentally friendly operation; low-power, low-cost, lightweight systems suitable for low-payload platforms, such as micro-satellites; and passive applications using sources of opportunity transmitting UNCW.

  16. THE AzTEC/SMA INTERFEROMETRIC IMAGING SURVEY OF SUBMILLIMETER-SELECTED HIGH-REDSHIFT GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younger, Joshua D.; Fazio, Giovanni G.; Huang Jiasheng

    We present results from a continuing interferometric survey of high-redshift submillimeter galaxies (SMGs) with the Submillimeter Array, including high-resolution (beam size ≈2 arcsec) imaging of eight additional AzTEC 1.1 mm selected sources in the COSMOS field, for which we obtain six reliable (peak signal-to-noise ratio (S/N) >5, or peak S/N >4 with multiwavelength counterparts within the beam) and two moderate-significance (peak S/N >4) detections. When combined with previous detections, this yields an unbiased sample of millimeter-selected SMGs with complete interferometric follow-up. With this sample in hand, we (1) empirically confirm the radio-submillimeter association, (2) examine the submillimeter morphology of the sample - including the nature of SMGs with multiple radio counterparts and constraints on the physical scale of the far-infrared emission - and (3) find additional evidence for a population of extremely luminous, radio-dim SMGs that peaks at higher redshift than previous, radio-selected samples. In particular, the presence of such a population of high-redshift sources has important consequences for models of galaxy formation - which struggle to account for such objects even under liberal assumptions - and for dust production models, given the limited time since the big bang.

  17. Spotlight SAR interferometry for terrain elevation mapping and interferometric change detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, P.H.; Ghiglia, D.C.; Jakowatz, C.V. Jr.

    1996-02-01

    In this report, we employ an approach quite different from any previous work; we show that a new methodology leads to a simpler and clearer understanding of the fundamental principles of SAR interferometry. This methodology also allows implementation of an important collection mode that has not been demonstrated to date. Specifically, we introduce the following six new concepts for the processing of interferometric SAR (INSAR) data: (1) processing using spotlight-mode SAR imaging (allowing ultra-high resolution), as opposed to conventional strip-mapping techniques; (2) derivation of the collection geometry constraints required to avoid decorrelation effects in two-pass INSAR; (3) derivation of maximum likelihood estimators for phase difference and the change parameter employed in interferometric change detection (ICD); (4) processing for the two-pass case wherein the platform ground tracks make a large crossing angle; (5) a robust least-squares method for two-dimensional phase unwrapping, formulated as a solution to Poisson's equation, instead of traditional path-following techniques; and (6) the existence of a simple linear scale factor that relates phase differences between two SAR images to terrain height. We show both theoretical analysis and numerous examples that employ real SAR collections to demonstrate the innovations listed above.
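
    Concept (5) above, least-squares phase unwrapping posed as a Poisson problem, can be sketched compactly with cosine transforms (which encode the reflective boundary conditions), in the spirit of Ghiglia's formulation. The toy input is a wrapped linear ramp:

    # DCT-based least-squares 2-D phase unwrapping (Poisson solve).
    import numpy as np
    from scipy.fft import dctn, idctn

    def wrap(p):
        return (p + np.pi) % (2 * np.pi) - np.pi

    def unwrap_ls(psi):
        M, N = psi.shape
        gx = wrap(np.diff(psi, axis=1))            # wrapped phase gradients
        gy = wrap(np.diff(psi, axis=0))
        rho = np.zeros_like(psi)                   # discrete divergence of gradients
        rho[:, :-1] += gx;  rho[:, 1:] -= gx
        rho[:-1, :] += gy;  rho[1:, :] -= gy
        d = dctn(rho, norm="ortho")
        i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
        denom = 2 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2)
        d[0, 0] = 0.0; denom[0, 0] = 1.0           # DC term is undetermined
        phi = idctn(d / denom, norm="ortho")
        return phi - phi.mean()

    true_phase = np.linspace(0, 18 * np.pi, 128)[None, :] * np.ones((128, 1))
    est = unwrap_ls(wrap(true_phase))
    print("max abs error:", np.abs(est - (true_phase - true_phase.mean())).max())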

  18. Mitigating Atmospheric Effects in Interferograms Using Integrated MERIS/MODIS Data: A Case Study over Southern California

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zhang, P.; Sun, Z.

    2018-04-01

    Interferometric synthetic aperture radar (InSAR), as a space geodetic technology, has been demonstrated to be a high-potential means of Earth observation, providing high-precision digital elevation models (DEMs) and surface deformation monitoring. However, the accuracy of InSAR is mainly limited by the effects of atmospheric water vapor. In order to effectively measure topography or surface deformations with InSAR, it is necessary to mitigate the effects of atmospheric water vapor on the interferometric signals. This paper analyzes the atmospheric effects on the interferogram quantitatively, describes results of estimating Precipitable Water Vapor (PWV) from the Medium Resolution Imaging Spectrometer (MERIS), the Moderate Resolution Imaging Spectroradiometer (MODIS), and ground-based GPS, and compares the MERIS/MODIS PWV with the GPS PWV. Finally, a case study on mitigating atmospheric effects in an interferogram using integrated MERIS and MODIS PWV over Southern California is given. The results show that this integration approach helps remove or reduce the atmospheric phase contribution from the corresponding interferogram: the integrated Zenith Path Delay Difference Maps (ZPDDM) from MERIS and MODIS reduce the water vapor effects efficiently, and the standard deviation (STD) of the interferogram is improved by 23% after the water vapor correction compared with the original interferogram.

  19. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  20. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. The study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
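
    The sensor-noise idea can be sketched as a simplified PRNU-style pipeline: average noise residuals (image minus a denoised version) over several images to estimate a camera fingerprint, then attribute a query image by normalized correlation of its residual with that fingerprint. The denoiser, noise levels, and data below are illustrative assumptions, not a forensic-grade implementation:

    # Simplified sensor-fingerprint estimation and matching.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(5)
    fingerprint_true = rng.normal(0, 2.0, (128, 128))    # fixed sensor pattern

    def shoot(scene):
        return scene + fingerprint_true + rng.normal(0, 5.0, scene.shape)

    def residual(img):
        return img - gaussian_filter(img, sigma=2)       # high-pass noise residual

    train = [shoot(rng.normal(128, 30, (128, 128))) for _ in range(20)]
    fingerprint = np.mean([residual(im) for im in train], axis=0)

    query_same = residual(shoot(rng.normal(128, 30, (128, 128))))
    query_other = residual(rng.normal(128, 30, (128, 128)) + rng.normal(0, 5.0, (128, 128)))

    def ncc(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    print("same camera:     ", ncc(fingerprint, query_same))    # clearly positive
    print("different camera:", ncc(fingerprint, query_other))   # near zero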

  1. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

    Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM, and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used together with a super-resolution technique, and the spatial resolution was improved further.
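
    The center-of-gravity step can be sketched as follows: segment individual scintillation spots in a camera frame and replace each with its sub-pixel intensity centroid, improving resolution beyond the pixel pitch. The frame and thresholds are synthetic assumptions:

    # Sub-pixel spot centroiding on a synthetic scintillation frame.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(6)
    frame = rng.poisson(2.0, (256, 256)).astype(float)      # dark background
    yx_true = rng.uniform(20, 236, (30, 2))                 # true event positions
    yy, xx = np.mgrid[0:256, 0:256]
    for cy, cx in yx_true:                                  # add blurred spots
        frame += 80 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 1.5**2))

    mask = frame > 20                                       # threshold out background
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(frame * mask, labels, range(1, n + 1))
    print(f"{n} spots found; first centroid (y, x) = {np.round(centroids[0], 2)}")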

  2. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
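
    A condensed sketch of the standard OpenCV chess-board stereo-calibration pipeline is shown below as a stand-in for the toolbox described above (which additionally resolves the node correspondences automatically via SIFT matches and epipolar geometry). The file names, board geometry, and square spacing are assumptions:

    # Chess-board stereo calibration with OpenCV.
    import glob
    import numpy as np
    import cv2

    rows, cols, square = 6, 9, 0.025                     # inner corners, 25 mm spacing
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for fl, fr in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
        gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
        okl, cl = cv2.findChessboardCorners(gl, (cols, rows))
        okr, cr = cv2.findChessboardCorners(gr, (cols, rows))
        if okl and okr:
            obj_pts.append(objp); left_pts.append(cl); right_pts.append(cr)

    size = gl.shape[::-1]
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # Fix intrinsics and recover the true-to-scale relative orientation (R, T).
    rms, *_, R, T, E, F = cv2.stereoCalibrate(obj_pts, left_pts, right_pts,
                                              K1, d1, K2, d2, size,
                                              flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo RMS error:", rms, "baseline (m):", np.linalg.norm(T))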

  3. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  4. Image quality assessment for selfies with and without super resolution

    NASA Astrophysics Data System (ADS)

    Kubota, Aya; Gohshi, Seiichi

    2018-04-01

    With the advent of cellphone cameras, particularly on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: front-facing and rear. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras; however, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super-resolution (SR) is one of the recent technological advancements for increasing image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effectiveness of the new SR technology has not yet been verified. Comparing image quality with and without SR on the smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessment are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the peak signal-to-noise ratio (PSNR), do not agree with how we feel when we assess images or video. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at a high cost because of personnel expenses for observers, the results are highly reproducible when the assessments are conducted under the right conditions and with statistical analysis. In this study, the subjective assessment results for selfie images are reported.

  5. Interferometric synthetic aperture radar: Building tomorrow's tools today

    USGS Publications Warehouse

    Lu, Zhong

    2006-01-01

    A synthetic aperture radar (SAR) system transmits electromagnetic (EM) waves at a wavelength that can range from a few millimeters to tens of centimeters. The radar wave propagates through the atmosphere and interacts with the Earth’s surface. Part of the energy is reflected back to the SAR system and recorded. Using a sophisticated image processing technique, called SAR processing (Curlander and McDonough, 1991), both the intensity and phase of the reflected (or backscattered) signal of each ground resolution element (a few meters to tens of meters) can be calculated in the form of a complex-valued SAR image representing the reflectivity of the ground surface. The amplitude or intensity of the SAR image is determined primarily by terrain slope, surface roughness, and dielectric constants, whereas the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets, slowing of the signal by the atmosphere, and the interaction of EM waves with ground surface. Interferometric SAR (InSAR) imaging, a recently developed remote sensing technique, utilizes the interaction of EM waves, referred to as interference, to measure precise distances. Very simply, InSAR involves the use of two or more SAR images of the same area to extract landscape topography and its deformation patterns.
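    The core step of InSAR, combining two SAR images into an interferogram, reduces to a phase difference of complex pixels. A schematic Python sketch, assuming the two single-look complex (SLC) images are already co-registered, and omitting flattening, filtering, and phase unwrapping:

      import numpy as np

      def interferogram(slc1, slc2):
          """Interferometric phase from two co-registered complex SAR images.

          The phase of slc1 * conj(slc2) encodes the path-length difference
          between the two acquisitions, wrapped to (-pi, pi]."""
          return np.angle(slc1 * np.conj(slc2))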

  6. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds its application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
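    A rough pixel-domain illustration of the sigmoidal-boosting-plus-fusion idea; the paper's actual algorithm operates on JPEG macroblocks in the frequency domain, and the parameter values here are arbitrary assumptions:

      import numpy as np

      def sigmoid_boost(pixels, midpoint=0.25, steepness=10.0):
          """Boost a short-exposure image (values in [0, 1]) with a sigmoid curve."""
          return 1.0 / (1.0 + np.exp(-steepness * (pixels - midpoint)))

      def fuse_exposures(short_exp, long_exp, saturation=0.95):
          """Keep long-exposure pixels for SNR, but fall back to the boosted
          short exposure wherever the long exposure is saturated."""
          boosted = sigmoid_boost(short_exp)
          mask = long_exp >= saturation
          return np.where(mask, boosted, long_exp)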

  7. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundreds of images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter, and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analysed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, and the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
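    Fitting the linear thermal model f(T) = A0 + A1 T is a one-line least-squares problem. A sketch with hypothetical numbers (the temperatures and focal lengths below are invented for illustration, not MESSENGER data):

      import numpy as np

      # Hypothetical per-image estimates: focal-plane temperature (deg C)
      # and the focal length (mm) recovered from each star-field image.
      temperatures = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])
      focal_lengths = np.array([549.6, 549.7, 549.8, 550.0, 550.1])

      # Least-squares fit of f(T) = A0 + A1 * T (polyfit returns [A1, A0]).
      A1, A0 = np.polyfit(temperatures, focal_lengths, 1)
      print(f"f(T) = {A0:.4f} + {A1:.5f} * T  [mm]")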

  8. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.

  9. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by the look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
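    The regression step for the color transformation matrix can be illustrated as follows; this sketch assumes a 3-by-4 matrix (3-by-3 plus offset) and NumPy-style arrays, and is not the authors' implementation:

      import numpy as np

      def fit_color_matrix(rgb_gamma_corrected, xyz_measured):
          """Fit a 3x4 matrix mapping gamma-corrected camera RGB to XYZ.

          rgb_gamma_corrected : (N, 3) array of camera samples
          xyz_measured        : (N, 3) array of measured XYZ for the patches
          """
          n = rgb_gamma_corrected.shape[0]
          design = np.hstack([rgb_gamma_corrected, np.ones((n, 1))])  # (N, 4)
          M, *_ = np.linalg.lstsq(design, xyz_measured, rcond=None)   # (4, 3)
          return M.T  # 3x4 matrix: XYZ ~= M @ [R, G, B, 1]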

  10. Blur spot limitations in distal endoscope sensors

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Shechterman, Mark; Horesh, Nadav

    2006-02-01

    In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.

  11. Automatic Orientation of Large Blocks of Oblique Images

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2013-05-01

    Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages, such as ease of interpretation, completeness through mitigation of occluded areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements become inevitable. For large image blocks this is certainly an impediment. In the theoretical part of the work, we review three common least-squares adjustment methods and recap possible ways to orient a multi-camera system. In the practical part, we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. The oblique cameras are rotated in the four looking directions (forward, backward, left, and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a tool developed to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: it reduces computational time and improves the success of the bundle block adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only, and it guides the concatenation of images during the relative orientation. Ultimately, an absolute transformation is performed, resulting in mean re-projection residuals of 0.6 pixels.

  12. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study lays the groundwork for determining the capabilities of a smartphone camera acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those of higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.

  13. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  14. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    Polarization control for 3D Imaging with the Sony SRX-R105 Digital Cinema Projectors 3.4 HDMAX Camera and Sony SRX-R105 Projector Configuration for 3D...HDMAX Camera Pair Figure 3.2 Sony SRX-R105 Digital Cinema Projector Figure 3.3 Effect of camera rotation on projected overlay image. Figure 3.4...system that combines a pair of FAU’s HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection

  15. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  16. The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; Wakshlag, Jacob J.

    The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…

  17. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or to reduce the computation time, because the trade-off between computation time and registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion caused by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on the shapes and colors of captured objects.

  18. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR(TM) algorithm, which is designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.

  19. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.

  20. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  1. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  2. Handheld hyperspectral imager system for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-08-01

    A small, hand held, battery operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection, however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications as well as standard research and development. This paper describes the Sherlock camera, theory of operations, shows current applications and touches on potential future applications for the camera. The Sherlock has an embedded Power PC and performs real-time-image processing function in an embedded FPGA. The camera has a built in LCD display as well as output to a standard monitor, or NTSC display. It has several I/O ports, ethernet, firewire, RS232 and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the ethernet eliminating the need to send the camera back to the factory for a retrofit. Using the USB port a mouse and key board can be connected and the camera can be used in a laboratory environment as a stand alone imaging spectrometer.

  3. Hand-held hyperspectral imager for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-03-01

    A small, hand held, battery operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection, however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications as well as standard research and development. This paper describes the Sherlock camera, theory of operations, shows current applications and touches on potential future applications for the camera. The Sherlock has an embedded Power PC and performs real-time-image processing function in an embedded FPGA. The camera has a built in LCD display as well as output to a standard monitor, or NTSC display. It has several I/O ports, ethernet, firewire, RS232 and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the ethernet eliminating the need to send the camera back to the factory for a retrofit. Using the USB port a mouse and key board can be connected and the camera can be used in a laboratory environment as a stand alone imaging spectrometer.

  4. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications, and the development of the next satellites, in this article we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average, and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras in the Beijing area. Then the maximum, minimum, average, and standard deviation of each longitudinal overlap of PMS1 and PMS2 were calculated for the evaluation of each camera's dynamic range consistency, and these four statistics of each latitudinal overlap of PMS1 and PMS2 were calculated for the evaluation of the dynamic range consistency between PMS1 and PMS2. The results suggest that there is a wide dynamic range of DN values in the images obtained by PMS1 and PMS2, which contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, and the same holds for the dual cameras; the consistency of dynamic range between single-camera images is better than that between the dual cameras.
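    The per-image statistics used in the evaluation are straightforward; a minimal Python sketch for one band:

      import numpy as np

      def band_statistics(image):
          """The four statistics used in the dynamic range evaluation."""
          return {
              "min": float(image.min()),
              "max": float(image.max()),
              "mean": float(image.mean()),
              "std": float(image.std()),
          }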

  5. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light in our DMD camera can be flexibly modulated, enabling the camera pixels to always receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensities to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement HDRI on different objects.
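    The basic radiance recovery behind per-pixel coded exposure can be sketched as follows, assuming a linear sensor response and known per-pixel effective exposures; the adaptive DMD control loop of the paper is not shown:

      import numpy as np

      def recover_radiance(pixel_values, exposure_times, saturation=0.98):
          """Estimate scene radiance from per-pixel coded exposures.

          pixel_values   : (H, W) normalized sensor readings in [0, 1]
          exposure_times : (H, W) effective per-pixel exposure set by the DMD

          With a linear response, radiance is reading divided by exposure;
          saturated pixels are masked out as unreliable."""
          radiance = pixel_values / exposure_times
          radiance[pixel_values >= saturation] = np.nan
          return radiance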

  6. The Limited Duty/Chief Warrant Officer Professional Guidebook

    DTIC Science & Technology

    1985-01-01

    subsurface imaging . They plan and manage the operation of imaging commands and activities, combat camera groups and aerial reconnaissance imaging...picture and video systems used in aerial, surface and subsurface imaging . They supervise the operation of imaging commands and activities, combat camera

  7. Test Image of Earth Rocks by Mars Camera Stereo

    NASA Image and Video Library

    2010-11-16

    This stereo view of terrestrial rocks combines two images taken by a testing twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.

  8. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.

  9. The advantages of using a Lucky Imaging camera for observations of microlensing events

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus

    2016-05-01

    In this work, we study the advantages of using a Lucky Imaging camera for the observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and the corresponding parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes where one of them is equipped with a Lucky Imaging camera. This camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.

  10. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In the on-board photographing processes of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid the degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows that the M2052 manganese-copper alloy is good enough to suppress image motion below 125 Hz, which is the vibration frequency of satellite platforms. The camera's optical system has a higher MTF with the M2052 vibration suppression than without it.

  11. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    PubMed

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both gradings were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded as slightly better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.
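    The kappa agreement statistics reported above can be reproduced with standard tools. A sketch using scikit-learn's cohen_kappa_score on hypothetical gradings (the label values below are invented for illustration):

      from sklearn.metrics import cohen_kappa_score

      # Hypothetical retinopathy level gradings of the same eyes
      # from the two camera systems (categorical ETDRS-style labels).
      grades_ff450 = [10, 20, 35, 43, 20, 10, 53, 35]
      grades_visucam = [10, 20, 35, 47, 20, 10, 53, 35]

      kappa = cohen_kappa_score(grades_ff450, grades_visucam)
      print(f"inter-camera agreement kappa = {kappa:.2f}")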

  12. Poro-elastic Rebound Along the Landers 1992 Earthquake Surface Rupture

    NASA Technical Reports Server (NTRS)

    Peltzer, G.; Rosen, P.; Rogez, F.; Hudnut, K.

    1998-01-01

    Maps of post-seismic surface displacement after the 1992, Landers, California earthquake, generated by interferometric processing of ERS-1 Synthetic Aperture Radar (SAR) images, reveal effects of various deformation processes near the 1992 surface rupture.

  13. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. Then the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
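    Once the board-to-projector correspondences are established by digital image correlation, the final step is a standard camera calibration. A sketch using OpenCV's calibrateCamera, with illustrative argument names; the DIC matching itself is not shown:

      import cv2

      def calibrate_projector(object_points, projector_points, projector_size):
          """Calibrate a projector treated as an inverse camera.

          object_points    : list of (N, 3) float32 arrays, board points in 3D
          projector_points : list of (N, 2) float32 arrays, the corresponding
                             projector-image coordinates obtained from DIC
                             matching of the projected speckle pattern
          projector_size   : (width, height) of the projector image
          """
          rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
              object_points, projector_points, projector_size, None, None)
          return rms, K, dist  # reprojection error, intrinsics, distortion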

  14. Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles

    NASA Astrophysics Data System (ADS)

    Kazemzadeh, Farnoud; Wong, Alexander

    2016-12-01

    Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm2 without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.

  15. Quantitative mass imaging of single biological macromolecules.

    PubMed

    Young, Gavin; Hundt, Nikolas; Cole, Daniel; Fineberg, Adam; Andrecka, Joanna; Tyler, Andrew; Olerinyova, Anna; Ansari, Ayla; Marklund, Erik G; Collier, Miranda P; Chandler, Shane A; Tkachenko, Olga; Allen, Joel; Crispin, Max; Billington, Neil; Takagi, Yasuharu; Sellers, James R; Eichmann, Cédric; Selenko, Philipp; Frey, Lukas; Riek, Roland; Galpin, Martin R; Struwe, Weston B; Benesch, Justin L P; Kukura, Philipp

    2018-04-27

    The cellular processes underpinning life are orchestrated by proteins and their interactions. The associated structural and dynamic heterogeneity, despite being key to function, poses a fundamental challenge to existing analytical and structural methodologies. We used interferometric scattering microscopy to quantify the mass of single biomolecules in solution with 2% sequence mass accuracy, up to 19-kilodalton resolution, and 1-kilodalton precision. We resolved oligomeric distributions at high dynamic range, detected small-molecule binding, and mass-imaged proteins with associated lipids and sugars. These capabilities enabled us to characterize the molecular dynamics of processes as diverse as glycoprotein cross-linking, amyloidogenic protein aggregation, and actin polymerization. Interferometric scattering mass spectrometry allows spatiotemporally resolved measurement of a broad range of biomolecular interactions, one molecule at a time. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  16. Interferometric Reflectance Imaging Sensor (IRIS)—A Platform Technology for Multiplexed Diagnostics and Digital Detection

    PubMed Central

    Avci, Oguzhan; Lortlar Ünlü, Nese; Yalçın Özkumur, Ayça; Ünlü, M. Selim

    2015-01-01

    Over the last decade, the growing need in disease diagnostics has stimulated rapid development of new technologies with unprecedented capabilities. Recent emerging infectious diseases and epidemics have revealed the shortcomings of existing diagnostics tools, and the necessity for further improvements. Optical biosensors can lay the foundations for future generation diagnostics by providing means to detect biomarkers in a highly sensitive, specific, quantitative and multiplexed fashion. Here, we review an optical sensing technology, Interferometric Reflectance Imaging Sensor (IRIS), and the relevant features of this multifunctional platform for quantitative, label-free and dynamic detection. We discuss two distinct modalities for IRIS: (i) low-magnification (ensemble biomolecular mass measurements) and (ii) high-magnification (digital detection of individual nanoparticles) along with their applications, including label-free detection of multiplexed protein chips, measurement of single nucleotide polymorphism, quantification of transcription factor DNA binding, and high sensitivity digital sensing and characterization of nanoparticles and viruses. PMID:26205273

  17. Digital elevation model generation from satellite interferometric synthetic aperture radar: Chapter 5

    USGS Publications Warehouse

    Lu, Zhong; Dzurisin, Daniel; Jung, Hyung-Sup; Zhang, Lei; Lee, Wonjin; Lee, Chang-Wook

    2012-01-01

    An accurate digital elevation model (DEM) is a critical data set for characterizing the natural landscape, monitoring natural hazards, and georeferencing satellite imagery. The ideal interferometric synthetic aperture radar (InSAR) configuration for DEM production is a single-pass two-antenna system. Repeat-pass single-antenna satellite InSAR imagery, however, also can be used to produce useful DEMs. DEM generation from InSAR is advantageous in remote areas where the photogrammetric approach to DEM generation is hindered by inclement weather conditions. There are many sources of errors in DEM generation from repeat-pass InSAR imagery, for example, inaccurate determination of the InSAR baseline, atmospheric delay anomalies, and possible surface deformation because of tectonic, volcanic, or other sources during the time interval spanned by the images. This chapter presents practical solutions to identify and remove various artifacts in repeat-pass satellite InSAR images to generate a high-quality DEM.

  18. Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles.

    PubMed

    Kazemzadeh, Farnoud; Wong, Alexander

    2016-12-13

    Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm2 without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.

  19. Brute-force mapmaking with compact interferometers: a MITEoR northern sky map from 128 to 175 MHz

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Tegmark, M.; Dillon, J. S.; Liu, A.; Neben, A. R.; Tribiano, S. M.; Bradley, R. F.; Buza, V.; Ewall-Wice, A.; Gharibyan, H.; Hickish, J.; Kunz, E.; Losh, J.; Lutomirski, A.; Morgan, E.; Narayanan, S.; Perko, A.; Rosner, D.; Sanchez, N.; Schutz, K.; Valdez, M.; Villasenor, J.; Yang, H.; Zarb Adami, K.; Zelko, I.; Zheng, K.

    2017-03-01

    We present a new method for interferometric imaging that is ideal for the large fields of view and compact arrays common in 21 cm cosmology. We first demonstrate the method with simulations for two very different low-frequency interferometers, the Murchison Widefield Array and the MIT Epoch of Reionization (MITEoR) experiment. We then apply the method to the MITEoR data set collected in 2013 July to obtain the first northern sky map from 128 to 175 MHz at ∼2° resolution and find an overall spectral index of -2.73 ± 0.11. The success of this imaging method bodes well for upcoming compact redundant low-frequency arrays such as the Hydrogen Epoch of Reionization Array. Both the MITEoR interferometric data and the 150 MHz sky map are available at http://space.mit.edu/home/tegmark/omniscope.html.
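    Brute-force mapmaking of this kind amounts to a least-squares inversion of the instrument response. A schematic NumPy sketch of the maximum-likelihood estimator (not the MITEoR pipeline):

      import numpy as np

      def map_estimate(A, N_inv, visibilities):
          """Maximum-likelihood sky map from interferometer data.

          Solves x_hat = (A^H N^-1 A)^-1 A^H N^-1 y for the sky vector,
          where A maps sky pixels to measured visibilities and N is the
          noise covariance. Feasible only for compact arrays / modest maps,
          hence "brute force"."""
          AH = A.conj().T
          lhs = AH @ N_inv @ A
          rhs = AH @ N_inv @ visibilities
          return np.linalg.solve(lhs, rhs)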

  20. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving the accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for the generation of quantitative speckle correlation interferometry data in fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features that other algorithms tend to fade.
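    Steps (a)-(e) above map directly onto a few lines of Python with the PyWavelets package. A sketch, with the noise estimate and universal threshold as common default choices rather than the authors' exact procedure:

      import numpy as np
      import pywt

      def wavelet_denoise(interferogram, wavelet="db4", level=3):
          """Soft-threshold wavelet denoising of a 2D interferogram."""
          coeffs = pywt.wavedec2(interferogram, wavelet, level=level)
          # (a) Robust noise estimate from the finest diagonal detail band.
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          # (d) Universal threshold applied to every detail band.
          threshold = sigma * np.sqrt(2 * np.log(interferogram.size))
          denoised = [coeffs[0]] + [
              tuple(pywt.threshold(band, threshold, mode="soft") for band in detail)
              for detail in coeffs[1:]
          ]
          # (e) Inverse transform reconstructs the denoised interferogram.
          return pywt.waverec2(denoised, wavelet)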

  1. Single-exposure super-resolved interferometric microscopy by RGB multiplexing in lensless configuration

    NASA Astrophysics Data System (ADS)

    Granero, Luis; Ferreira, Carlos; Zalevsky, Zeev; García, Javier; Micó, Vicente

    2016-07-01

    Single-Exposure Super-Resolved Interferometric Microscopy (SESRIM) provides a way to achieve one-dimensional (1-D) super-resolved imaging in digital holographic microscopy (DHM) with a single illumination shot and digital recording. SESRIM provides color-coded angular multiplexing of the accessible range of the sample's spatial frequencies and allows their recording in a single CCD (color or monochrome) snapshot by adding three RGB coherent reference beams at the output plane. In this manuscript, we extend the applicability of SESRIM to the field of digital in-line holographic microscopy (DIHM), that is, working without lenses. As a consequence of the in-line configuration, an additional restriction concerning the object field of view (FOV) must be imposed on the technique. Experimental results are reported for both a synthetic object (USAF resolution test target) and a biological sample (swine sperm), validating this new kind of super-resolution imaging method, named lensless SESRIM (L-SESRIM).

  2. Volunteers Help Decide Where to Point Mars Camera

    NASA Image and Video Library

    2015-07-22

    This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image is this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823

  3. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect on uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the scene luminance more objectively. This compensates for the limitation of stitching approaches that make images look realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857

  4. Multispectral image dissector camera flight test

    NASA Technical Reports Server (NTRS)

    Johnson, B. L.

    1973-01-01

    It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.

  5. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology, and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond the popular options of a posteriori image focusing and total-focus image generation, their basic ability to generate 3D information from single-camera imagery is a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  6. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    cameras were installed around the test pan and an underwater GoPro ® video camera recorded the fire from below the layer of fuel. 3.2.2. Camera Images...Distribution A: Approved for public release; distribution unlimited. 3.2.3. Video Images A GoPro video camera with a wide angle lens recorded the tests...camera and the GoPro ® video camera were not used for fire suppression experiments. 3.3.2. Test Pans Two ¼-in thick stainless steel test pans were

  7. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  8. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.

  9. Instrument Pointing Control System for the Stellar Interferometry Mission - Planet Quest

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul B.; Kang, Bryan

    2006-01-01

    This paper describes the high-precision Instrument Pointing Control System (PCS) for the Stellar Interferometry Mission (SIM) - Planet Quest. The PCS provides front-end pointing, compensation for spacecraft motion, and feedforward stabilization, which are needed for proper interference. Optical interferometric measurements require very precise pointing (0.03 arcsec, 1-sigma radial) for maximizing the interference pattern visibility. This requirement is achieved by fine pointing control of articulating pointing mirrors with feedback from angle-tracking cameras. The overall pointing system design concept is presented. Functional requirements and an acquisition concept are given. Guide and science pointing control loops are discussed. Simulation analyses demonstrate the feasibility of the design.

  10. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

    Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings. On the other hand, it is very time-consuming to label people manually in images from surveillance videos; for example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras. Therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting when using insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. We address the fact that these Mahalanobis distance metrics are different but related, and learn them by adding joint regularization to alleviate over-fitting. Furthermore, by extending this approach, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
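    At the heart of the method is the Mahalanobis distance under a learned metric M. A minimal sketch (the multi-task learning of the per-camera-pair metrics is not shown):

      import numpy as np

      def mahalanobis_distance(x, y, M):
          """Distance between feature vectors x and y under metric M.

          M must be symmetric positive semi-definite; in the multi-task
          setting, each camera pair would have its own learned M."""
          d = x - y
          return float(np.sqrt(d @ M @ d))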

  11. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  12. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for the horizontal and vertical features of the projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing tailored to the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details regarding the construction and calibration of this system and images produced by it. The link between projected pattern design and the image processing algorithms is also discussed.
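
    The differential-focus cue can be sketched with directional detail energies standing in for the paper's wavelet coefficients. This is illustrative only, under the assumption that horizontal and vertical pattern features blur at different depths:

        # Ratio of horizontal to vertical detail energy in a patch (illustrative).
        import numpy as np

        def directional_energy_ratio(patch):
            gy, gx = np.gradient(patch.astype(float))  # vertical and horizontal derivatives
            return (np.abs(gx).sum() + 1e-12) / (np.abs(gy).sum() + 1e-12)
        # A per-position calibration would then map this ratio to distance.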

  13. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera at limited pixel resolution, fine details are missing from them. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details. PMID:27064500
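
    The patch-regression idea can be sketched as follows. Note this is a deliberately simplified, vectorized linear stand-in for the paper's matrix-value regression (which specifically avoids vectorizing patches), and the training data are synthetic.

        # Learn a linear LR->HR patch operator by ridge-regularized least squares.
        import numpy as np

        rng = np.random.default_rng(1)
        n_pairs, lr_dim, hr_dim = 500, 9, 36          # 3x3 LR patches -> 6x6 HR patches
        L = rng.normal(size=(lr_dim, n_pairs))        # columns: flattened LR training patches
        H = rng.normal(size=(hr_dim, n_pairs))        # columns: matching HR patches
        lam = 1e-3                                    # small ridge term for a stable solve
        W = H @ L.T @ np.linalg.inv(L @ L.T + lam * np.eye(lr_dim))

        hr_patch = (W @ rng.normal(size=lr_dim)).reshape(6, 6)  # super-resolved estimate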

  14. Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System

    NASA Astrophysics Data System (ADS)

    Stebner, K.; Wieden, A.

    2014-03-01

    Dynamic camera systems with moving parts are difficult to handle in the photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Even minimal changes in the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical-viewing camera and a tumbling oblique camera, was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.

  15. The 2016 interferometric imaging beauty contest

    NASA Astrophysics Data System (ADS)

    Sanchez-Bermudez, J.; Thiébaut, E.; Hofmann, K.-H.; Heininger, M.; Schertl, D.; Weigelt, G.; Millour, F.; Schutz, A.; Ferrari, A.; Vannier, M.; Mary, D.; Young, J.

    2016-08-01

    Image reconstruction in optical interferometry has gained considerable importance for astrophysical studies during the last decade. This has been mainly due to improvements in the imaging capabilities of existing interferometers and the expectation of new facilities in the coming years. However, despite the advances made so far, image synthesis in optical interferometry is still an open field of research. Since 2004, the community has organized a biennial contest to formally test the different methods and algorithms for image reconstruction. In 2016, we celebrated the 7th edition of the "Interferometric Imaging Beauty Contest". This initiative represented an open call to participate in the reconstruction of a selected set of simulated targets with a wavelength-dependent morphology, as they could be observed by the 2nd generation of VLTI instruments. This contest represents a unique opportunity to benchmark, in a systematic way, the current advances and limitations in the field, as well as to discuss possible future approaches. In this contribution, we summarize: (a) the rules of the 2016 contest; (b) the different data sets used and the selection procedure; (c) the methods and results obtained by each of the participants; and (d) the metric used to select the best reconstructed images. Finally, we named Karl-Heinz Hofmann and the group of the Max-Planck-Institut für Radioastronomie as winners of this edition of the contest.

  16. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter on which to build a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, assuming that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information. PMID:22778608

  17. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    PubMed

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter on which to build a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, assuming that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information.
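
    The calibration step described above can be illustrated with an assumed functional form. The power law and the sample data below are assumptions for illustration, not the paper's calibrated model:

        # Fit distance as a function of ROI grey-level standard deviation (toy data).
        import numpy as np
        from scipy.optimize import curve_fit

        def model(std, a, b):
            return a * std ** b  # assumed monotonic distance-vs-std relation

        std_roi = np.array([42.0, 30.5, 22.1, 16.8, 12.9])  # measured ROI std (grey levels)
        dist_m = np.array([1.0, 1.5, 2.0, 2.5, 3.0])        # known calibration distances
        (a, b), _ = curve_fit(model, std_roi, dist_m, p0=[10.0, -1.0])
        print("distance estimate at std=20:", model(20.0, a, b))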

  18. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
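
    The paper's noniterative algorithm is not reproduced here, but the primitive it builds on, estimating a homography from point correspondences, can be shown with OpenCV on synthetic points:

        # Estimate a 3x3 homography from point correspondences (synthetic data).
        import numpy as np
        import cv2

        screen_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=np.float32)
        image_pts = np.array([[102, 87], [410, 95], [398, 380], [95, 372], [251, 233]], dtype=np.float32)
        H, inliers = cv2.findHomography(screen_pts, image_pts, cv2.RANSAC)
        print(H)  # screen-to-image homography, defined up to scale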

  19. Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.

    2017-01-01

    Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in this report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated at 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that the cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm throughout the whole field of view.

  20. A new path to first light for the Magdalena Ridge Observatory interferometer

    NASA Astrophysics Data System (ADS)

    Creech-Eakman, M. J.; Romero, V.; Payne, I.; Haniff, C. A.; Buscher, D. F.; Young, J. S.; Cervantes, R.; Dahl, C.; Farris, A.; Fisher, M.; Johnston, P.; Klinglesmith, D.; Love, H.; Ochoa, D.; Olivares, A.; Pino, J.; Salcido, C.; Santoro, F.; Schmidt, L.; Seneta, E. B.; Sun, X.; Jenka, L.; Kelly, R.; Price, J.; Rea, A.; Riker, J.; Rochelle, S.

    2016-08-01

    The Magdalena Ridge Observatory Interferometer (MROI) was the most ambitious infrared interferometric facility conceived of in 2003 when funding began. Today, despite having suffered some financial shortfalls, it is still one of the most ambitious interferometric imaging facilities ever designed. With an innovative approach to attaining the original goal of fringe tracking to H = 14th magnitude via completely redesigned mobile telescopes, and a unique approach to the beam train and delay lines, the MROI will be able to image faint and complex objects with milliarcsecond resolution for a fraction of the cost of giant telescopes or space-based facilities. The design goals of MROI have been optimized for studying stellar astrophysical processes such as mass loss and mass transfer, the formation and evolution of YSOs and their disks, and the environs of nearby AGN. The global need for Space Situational Awareness (SSA) has moved to the forefront in many communities as space becomes a more integral part of national security portfolios. These needs ultimately drive imaging capabilities to a few tens of centimeters resolution at geosynchronous orbits. Any array capable of producing images of faint and complex geosynchronous objects in just a few hours will be outstanding not only as an astrophysical tool, but also for these types of SSA missions. With the recent infusion of new funding from the Air Force Research Lab (AFRL) in Albuquerque, NM, MROI will be able to attain first light, achieve first fringes, and demonstrate bootstrapping with three telescopes by 2020. MROI's current status, along with a sketch of our activities over the coming 5 years, will be presented, as well as clear opportunities to collaborate on various aspects of the facility as it comes online. Further funding is actively being sought to accelerate the capability of the array for interferometric imaging on a short time-scale so as to achieve the original goals of this ambitious facility.

  1. SAR Interferometry: On the Coherence Estimation in non Stationary Scenes

    NASA Astrophysics Data System (ADS)

    Ballatore, P.

    2005-05-01

    The possibility of producing good-quality satellite SAR interferometry allows observation of terrain mass movements at scales as small as millimetres, with applicability to research on landslides, volcanoes, seismology and other topics. SAR interferometric images are characterized by the presence of random speckle, whose pattern does not correspond to the underlying image structure. However, the local brightness of the speckle reflects the local echogenicity of the underlying scatterers. Specifically, the coherence between the images of an interferometric pair is generally considered an indicator of interferogram quality. Moreover, it leads to useful image segmentations and can be employed in data mining and database browsing algorithms. SAR coherence is generally computed by substituting spatial averages for the ensemble averages, assuming ergodicity within the estimation window sub-areas. Nevertheless, the actual results may depend on the spatial size of the sampling window used for the computation. This is especially true for fast coherence estimator algorithms, which make use of the square root of the correlation coefficient (Rignot and van Zyl, IEEE Trans. Geosci. Remote Sensing, vol. 31, no. 4, pp. 896-906, 1993; Guarnieri and Prati, IEEE Trans. Geosci. Remote Sensing, vol. 35, no. 3, pp. 660-669, 1997). In fact, the correlation coefficient is increased by image texture, due to non-stationary absolute values within single sample estimation windows. For example, this can happen in mountainous terrain, and specifically in the southern Italian Apennines around the city of Benevento, an area of particular geophysical attention for its numerous seismic and landslide terrain movements. In these cases, dedicated techniques are applied to compensate for texture effects. This presentation shows an example of an interferometric coherence image depending on the spatial size of the sampling window. Moreover, the different methodologies in the literature for texture effect control are briefly summarized and applied to our specific exemplary case. A quantitative comparison among the resulting coherences is illustrated and discussed in terms of their different experimental applicability.
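
    The windowed estimator under discussion is compact enough to state directly. The following sketch implements the standard spatial-average coherence magnitude, with the window size N as the free parameter whose influence the abstract examines (data are simulated):

        # Windowed interferometric coherence estimate over an N x N boxcar.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def coherence(s1, s2, N=5):
            def smooth(c):  # uniform_filter is real-valued; filter parts separately
                return uniform_filter(c.real, N) + 1j * uniform_filter(c.imag, N)
            num = smooth(s1 * np.conj(s2))
            den = np.sqrt(uniform_filter(np.abs(s1) ** 2, N) * uniform_filter(np.abs(s2) ** 2, N))
            return np.abs(num) / np.maximum(den, 1e-12)

        rng = np.random.default_rng(2)
        shape = (128, 128)
        master = rng.normal(size=shape) + 1j * rng.normal(size=shape)
        slave = master + 0.3 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
        print(coherence(master, slave, 5).mean(), coherence(master, slave, 11).mean())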

  2. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    NASA Astrophysics Data System (ADS)

    Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin

    2014-12-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered by a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, and there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficients. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with the uncorrected images, the average color difference of TCAM is 4.30, a reduction of 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, reductions of 68.3% and 67.6% respectively.
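
    A minimal sketch of the kind of linear colour-correction step this record describes: fit a 3x3 matrix mapping camera RGB to reference RGB over calibration patches. The patch values below are invented, and the CIE-based model in the paper is more elaborate:

        # Least-squares 3x3 colour-correction matrix from calibration patches.
        import numpy as np

        cam_rgb = np.array([[0.41, 0.32, 0.20], [0.70, 0.61, 0.48],
                            [0.15, 0.22, 0.35], [0.55, 0.30, 0.18]])  # measured patches
        ref_rgb = np.array([[0.45, 0.31, 0.19], [0.75, 0.60, 0.45],
                            [0.13, 0.20, 0.38], [0.60, 0.28, 0.15]])  # reference patches
        M, *_ = np.linalg.lstsq(cam_rgb, ref_rgb, rcond=None)
        corrected = cam_rgb @ M  # apply to every pixel's RGB triple in practice
        print(np.abs(corrected - ref_rgb).max())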

  3. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. Images are captured from both the left and right cameras and processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects in the scene relative to the camera. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map, allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras serves the purposes of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment demonstrated the effectiveness of this assistive technology.
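
    The SAD correspondence step lends itself to a short sketch. This toy version scans one image row with a fixed block size and disparity search range; a practical implementation would add validity checks and subpixel refinement.

        # Block-matching disparity along one row using Sum of Absolute Differences.
        # Assumes rectified grayscale images and h <= row < height - h.
        import numpy as np

        def sad_disparity_row(left, right, row, block=7, max_disp=32):
            h = block // 2
            disp = np.zeros(left.shape[1], dtype=int)
            for x in range(max_disp + h, left.shape[1] - h):
                patch = left[row - h:row + h + 1, x - h:x + h + 1].astype(float)
                costs = [np.abs(patch - right[row - h:row + h + 1,
                                              x - d - h:x - d + h + 1]).sum()
                         for d in range(max_disp)]
                disp[x] = int(np.argmin(costs))  # larger disparity = closer object
            return disp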

  4. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm based on a plane template is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is a persistent problem, so there is an urgent need for experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy which is easy to implement, has high precision, and is suitable for a variety of commonly used lenses. First, we use the digital image correlation method to calculate the in-plane rigid-body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera and obtain calibration parameters, which are used to correct the calculation points of the image before and after deformation. The displacement fields before and after correction are compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses with four commonly used distortion models.

  5. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near-infrared imaging techniques from a pair of mast-mounted, high-resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and the specific optics tailored to each camera's requirements.

  6. Mitigating Effects of Missing Data for SAR Coherent Images

    DOE PAGES

    Musgrove, Cameron H.; West, James C.

    2017-01-01

    Missing samples within synthetic aperture radar data result in image distortions. For coherent data products, such as coherent change detection and interferometric processing, the image distortion can be devastating to these second-order products, resulting in missed detections and inaccurate height maps. Earlier approaches to repairing the coherent data products focus upon reconstructing the missing data samples. This study demonstrates that reconstruction is not necessary to restore the quality of the coherent data products.

  7. Interferometric Imaging of Geostationary Satellites: Signal-to-Noise Considerations

    DTIC Science & Technology

    2011-09-01

    instrument a minute time-scale snapshot imager. Snapshot imaging is important because it allows for resolving short time-scale changes of the satellite ... curves of fringe amplitude standard deviation as a function of satellite V-magnitude, giving the corresponding integration time. From this figure we can ... combiner (in R-band). We conclude that it is possible to track fringes on typical highly resolved satellites to a magnitude of V = 14.5. This range

  8. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.

  9. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise per pixel, as a function of exposure. All systems have a linear response with respect to exposure and, with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio higher than that observed with either CCD camera or with the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron-beam-tube-based video camera permits detection only of a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems in order to remove direct hits. These are camera responses to scattered x-ray photons that interact directly with the CCD of the camera and generate salt-and-pepper type noise, which interferes severely with attempts to determine accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).
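
    The rank-order filtering step named above is, in its simplest form, a small median filter. A sketch with simulated direct hits follows; the window size is an assumption:

        # Suppress isolated "direct hit" pixels with a 3x3 median filter.
        import numpy as np
        from scipy.ndimage import median_filter

        rng = np.random.default_rng(3)
        img = rng.normal(1000.0, 30.0, size=(256, 256))      # simulated portal image
        hits = rng.random(img.shape) < 0.005                 # sparse direct hits
        img[hits] += rng.uniform(2000.0, 8000.0, hits.sum())
        cleaned = median_filter(img, size=3)                 # rank-order (median) filtering
        print(img.std(), cleaned.std())                      # noise estimate before/after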

  10. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high-resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.

  11. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and in the context of other quantitative imaging technologies. We report the methods used for camera calibration and the precision and accuracy results in ideal and simulated surgical settings. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  12. Micro-Imagers for Spaceborne Cell-Growth Experiments

    NASA Technical Reports Server (NTRS)

    Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen

    2006-01-01

    A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.

  13. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
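
    One of the processing stages listed above, flat-field correction, has a standard per-pixel form. A floating-point sketch is below; the camera's FPGA would implement the equivalent in fixed point:

        # Classic flat-field correction from dark and flat reference frames.
        import numpy as np

        def flat_field_correct(raw, dark, flat):
            resp = np.maximum(flat - dark, 1e-6)     # per-pixel responsivity
            gain = resp.mean() / resp                # normalize to the mean response
            return (raw - dark) * gain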

  14. [Analysis of antibiotic diffusion from agarose gel by spectrophotometry and laser interferometry methods].

    PubMed

    Arabski, Michał; Wasik, Sławomir; Piskulak, Patrycja; Góźdź, Natalia; Slezak, Andrzej; Kaca, Wiesław

    2011-01-01

    The aim of this study was to analyse the release of antibiotics (ampicillin, streptomycin, ciprofloxacin or colistin) from agarose gel by spectrophotometry and laser interferometry. The interferometric system consisted of a Mach-Zehnder interferometer with a He-Ne laser, a TV-CCD camera, a computerised data acquisition system and a gel system. The gel system under study consists of two cuvettes. We filled the lower cuvette with an aqueous 1% agarose solution containing the antibiotics, at initial concentrations in the range of 0.12-2 mg/ml for the spectrophotometric analysis or 0.05-0.5 mg/ml for the laser interferometry method, while the upper cuvette contained pure water. The diffusion was analysed from 120 to 2400 s with a time interval of Δt = 120 s by both methods. We observed that 0.25-1 mg/ml and 0.05 mg/ml are the minimal initial concentrations detectable by the spectrophotometric and laser interferometry methods, respectively. Additionally, we observed differences in the kinetics of antibiotic diffusion from the gel as measured by the two methods. In conclusion, the laser interferometric method is a useful tool for studies of antibiotic release from agarose gel, especially for substances that are not fully soluble in water, for example colistin.

  15. Cryogenic Optical Performance of a Light-weight Mirror Assembly for Future Space Astronomical Telescopes: Optical Test Results and Thermal Optical Model

    NASA Technical Reports Server (NTRS)

    Eng, Ron; Arnold, William; Baker, Markus A.; Bevan, Ryan M.; Carpenter, James R.; Effinger, Michael R.; Gaddy, Darrell E.; Goode, Brian K.; Kegley, Jeffrey R.; Hogue, William D.; hide

    2013-01-01

    A 40 cm diameter mirror assembly was interferometrically tested for thermal deformation from room temperature down to 250 K. The 2.5 m radius-of-curvature spherical mirror assembly was constructed by low-temperature fusing of three abrasive-waterjet-cut core sections between two face sheets. The 93% lightweighted Corning ULE mirror assembly represents the current state of the art for future UV, optical and near-IR space telescopes. During the multiple thermal test cycles, interferometric test results and thermal IR images of the front face were recorded in order to validate the thermal optical model.

  16. High-resolution interferometric microscope for traceable dimensional nanometrology in Brazil

    NASA Astrophysics Data System (ADS)

    Malinovski, I.; França, R. S.; Lima, M. S.; Bessa, M. S.; Silva, C. R.; Couceiro, I. B.

    2016-07-01

    A dual-color interferometric microscope has been developed for the nanometrology of step-height standards, traceable to the metre definition via primary wavelength laser standards. The setup is based on two stabilized lasers to provide traceable measurements of the highest possible resolution, down to the physical limits of optical instruments, for heights in the sub-nanometer to micrometer range. The wavelength reference is a stabilized He-Ne 633 nm laser; the secondary source is a blue-green 488 nm grating laser diode. The fractional fringe is measured accurately by a modulated phase-shift technique combined with imaging interferometry and Fourier processing. Self-calibrating methods are developed to correct systematic interferometric errors.

  17. Interferometric study of Betelgeuse in H band

    NASA Astrophysics Data System (ADS)

    Haubois, X.; Perrin, G.; Lacour, S.; Schuller, P. A.; Monnier, J. D.; Berger, J.-P.; Ridgway, S. T.; Millan-Gabet, R.; Pedretti, E.; Traub, W. A.

    2006-06-01

    We present three-telescope interferometric observations of the supergiant star Betelgeuse (Alpha Ori, M2Iab) using the IOTA/IONIC interferometer (Whipple Observatory, Arizona) from early October 2005. Since IOTA is a three-telescope interferometer, we were able to make closure phase measurements, which allow us to image the star with several pixels across the disk. We discuss the fundamental parameters of Betelgeuse, such as diameter, limb darkening and effective temperature. For the first time at this spatial resolution in the H band, the closure phases provide interesting insights into the features of the object, as we detect a spot corresponding to 0.5% of the total received flux.

  18. REVIEWS OF TOPICAL PROBLEMS: Global phase-stable radiointerferometric systems

    NASA Astrophysics Data System (ADS)

    Dravskikh, A. F.; Korol'kov, Dimitrii V.; Pariĭskiĭ, Yu N.; Stotskiĭ, A. A.; Finkel'steĭn, A. M.; Fridman, P. A.

    1981-12-01

    We discuss from a unified standpoint the possibility of building a phase-stable interferometric system with very long baselines that operates around the clock with real-time data processing. The various problems involved in realizing this idea are discussed: methods for suppressing instrumental and tropospheric phase fluctuations, methods for constructing two-dimensional images and determining the coordinates of radio sources with high angular resolution, and the problem of the optimal structure of the interferometric system. We review in detail the scientific problems from various branches of natural science (astrophysics, cosmology, geophysics, geodynamics, astrometry, etc.) whose solution requires super-high angular resolution.

  19. Interferometric imaging of nonlocal electromechanical power transduction in ferroelectric domains.

    PubMed

    Zheng, Lu; Dong, Hui; Wu, Xiaoyu; Huang, Yen-Lin; Wang, Wenbo; Wu, Weida; Wang, Zheng; Lai, Keji

    2018-05-22

    The electrical generation and detection of elastic waves are the foundation of acoustoelectronic and acoustooptic systems. For surface acoustic wave devices, microelectromechanical/nanoelectromechanical systems, and phononic crystals, tailoring the spatial variation of material properties such as the piezoelectric and elastic tensors may bring significant improvements to system performance. Because the speed of sound in solids is much slower than the speed of light, it is desirable to study various electroacoustic behaviors at the mesoscopic length scale. In this work, we demonstrate the interferometric imaging of electromechanical power transduction in ferroelectric lithium niobate domain structures by microwave impedance microscopy. In sharp contrast to the traditional standing-wave patterns caused by the superposition of counterpropagating waves, the constructive and destructive fringes in the microwave dissipation images exhibit an intriguing one-wavelength periodicity. We show that such unusual interference patterns, which are fundamentally different from the acoustic displacement fields, stem from the nonlocal interaction between electric fields and elastic waves. The results are corroborated by numerical simulations taking into account the sign reversal of the piezoelectric tensor in oppositely polarized domains. Our work paves the way to probing nanoscale electroacoustic phenomena in complex structures by near-field electromagnetic imaging.

  20. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that satisfy the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon-counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of computation speed. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
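
    The Richardson-Lucy special case mentioned above is short enough to sketch, including the two constraints the text emphasizes: multiplicative updates keep the estimate non-negative, and an explicit renormalization enforces constant flux. This is a toy implementation, with no special treatment of borders:

        # Richardson-Lucy deconvolution with a constant-flux projection.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50):
            image = np.asarray(image, dtype=float)       # non-negative data assumed
            est = np.full_like(image, image.mean())      # flat non-negative start
            psf_m = psf[::-1, ::-1]                      # mirrored PSF for the adjoint
            for _ in range(n_iter):
                conv = fftconvolve(est, psf, mode="same")
                ratio = image / np.maximum(conv, 1e-12)
                est = est * fftconvolve(ratio, psf_m, mode="same")
                est *= image.sum() / est.sum()           # constant-flux constraint
            return est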

  1. Wetland Mapping with Quad-Pol Data Acquired during Tandem-X Science Phase

    NASA Astrophysics Data System (ADS)

    Mleczko, M.; Mroz, M.; Fitrzyk, M.

    2016-06-01

    The aim of this study was to exploit fully polarimetric SAR data acquired during the TanDEM-X Science Phase (2014/2015) over the herbaceous wetlands of the Biebrza National Park (BbNP) in north-eastern Poland for mapping seasonally flooded grasslands and permanent natural vegetation associations. The main goal of this work was to assess the advantage of fully polarimetric radar images (QuadPol) over alternative polarization (AltPol) modes. The methodology consisted of processing several data subsets through polarimetric decompositions of the complex quad-pol datasets, classification of multitemporal backscattering images, complementing the backscattering images with Shannon entropy, and exploitation of the interferometric coherence from tandem operations. In each case the multidimensional stack of images was classified using the ISODATA unsupervised clustering algorithm. With 6 quad-pol TSX/TDX acquisitions it was possible to correctly distinguish 5 thematic classes related to their water regime: permanent water bodies, temporarily flooded areas, wet grasslands, dry grasslands and common reed. The last category could be distinguished from deciduous forest only with the Yamaguchi four-component decomposition. The interferometric coherence calculated for the tandem pairs turned out to be less effective than expected for this wetland mapping.

  2. A single-sided homogeneous Green's function representation for holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; Thorbecke, Jan; van der Neut, Joost

    2016-04-01

    Green's theorem plays a fundamental role in a diverse range of wavefield imaging applications, such as holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval. In many of those applications, the homogeneous Green's function (i.e. the Green's function of the wave equation without a singularity on the right-hand side) is represented by a closed boundary integral. In practical applications, sources and/or receivers are usually present only on an open surface, which implies that a significant part of the closed boundary integral is by necessity ignored. Here we derive a homogeneous Green's function representation for the common situation that sources and/or receivers are present on an open surface only. We modify the integrand in such a way that it vanishes on the part of the boundary where no sources and receivers are present. As a consequence, the remaining integral along the open surface is an accurate single-sided representation of the homogeneous Green's function. This single-sided representation accounts for all orders of multiple scattering. The new representation significantly improves the aforementioned wavefield imaging applications, particularly in situations where the first-order scattering approximation breaks down.

  3. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which involves evaluation of the camera warm-up time period, evaluation of the distance measurement error, and a study of the influence on distance measurements of the camera orientation with respect to the observed object. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.

  4. Real-time vehicle matching for multi-camera tunnel surveillance

    NASA Astrophysics Data System (ADS)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
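
    The signature construction described above reduces each vehicle image to a short vector. A sketch follows, assuming the images are first resampled to a common size; the paper's exact profile set and matching rule may differ:

        # Projection-profile signature and normalized-correlation matching.
        import numpy as np

        def signature(img):
            prof = np.concatenate([img.sum(axis=0), img.sum(axis=1)]).astype(float)
            prof -= prof.mean()
            return prof / (np.linalg.norm(prof) + 1e-12)

        def match_score(img_a, img_b):
            return float(signature(img_a) @ signature(img_b))  # 1.0 = identical profiles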

  5. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system for imaging people with obesity. The concurrent validity of the depth-camera-based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system were assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r^2 = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth-camera-based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting.

  6. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections with the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted to a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect and normalized between acquisition dates. Our results suggest that: 1) the use of unprocessed image data did not improve the results of the image analyses; 2) vignetting had a significant effect, especially for the modified camera; and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol, and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
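
    The correction chain in this record, vignetting removal followed by a normalized index, can be sketched on synthetic data. The parabolic falloff model below is an assumption for illustration only:

        # Vignetting correction followed by a normalized vegetation index (synthetic).
        import numpy as np

        rng = np.random.default_rng(4)
        red, nir = rng.uniform(0.1, 0.9, size=(2, 128, 128))     # true scene reflectances
        yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
        vignette = 1.0 - 0.3 * (xx ** 2 + yy ** 2)               # assumed brightness falloff

        red_c = (red * vignette) / vignette                      # correction recovers scene values
        nir_c = (nir * vignette) / vignette
        ndvi = (nir_c - red_c) / (nir_c + red_c + 1e-12)         # illumination-robust index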

  7. iPhone 4s and iPhone 5s Imaging of the Eye.

    PubMed

    Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L

    2017-01-01

    To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device, compared to existing ophthalmic imaging equipment, for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography showed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.

  8. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  9. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprising a main aperture lens, a mechanical structure that holds an array of microlenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the microlenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  10. Multiplane and Spectrally-Resolved Single Molecule Localization Microscopy with Industrial Grade CMOS cameras.

    PubMed

    Babcock, Hazen P

    2018-01-29

    This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular, we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for reducing the acquired movies to super-resolution images.

  11. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the atmosphere; it then simulates the infrared imaging of exo-atmospheric ballistic targets from two aspects, the camera parameters of the space-based sensor and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the imaging of the target.

  12. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. In alternative version, system controls lateral motion of camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
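
    The frame-shifting step at the heart of such a jitter-miser can be sketched in a few lines. This is a minimal illustration rather than the patented design: it assumes whole-pixel jitter offsets have already been derived from the motion-sensor signals, and the function name stabilize is hypothetical.

```python
import numpy as np

def stabilize(frames, offsets):
    """Shift each frame opposite to its measured jitter offset.

    frames:  list of 2-D numpy arrays (grayscale frames)
    offsets: list of (dy, dx) pixel displacements measured by the
             camera's motion sensors, one per frame
    """
    corrected = []
    for frame, (dy, dx) in zip(frames, offsets):
        # np.roll performs a cyclic shift; real hardware would crop
        # or pad the border instead of wrapping it around.
        corrected.append(np.roll(frame, (-dy, -dx), axis=(0, 1)))
    return corrected

# Example: a frame jittered 2 pixels down is rolled 2 pixels back up.
frames = [np.eye(8), np.roll(np.eye(8), (2, 0), axis=(0, 1))]
stable = stabilize(frames, [(0, 0), (2, 0)])
assert np.allclose(stable[0], stable[1])
```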

  13. INTERFEROMETRIC IMAGING WITH TERAHERTZ PULSES. (R827122)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  14. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  15. Fluorescent image tracking velocimeter

    DOEpatents

    Shaffer, Franklin D.

    1994-01-01

    A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera, allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light to produce a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity, and the exposures may be coded to derive directional information.
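
    Given the centroids of one particle's multiply exposed images and the known laser pulse interval, trajectory direction and velocity follow directly. Below is a minimal sketch, assuming the centroids have already been extracted by the image analyzer and converted to physical units; particle_velocity is a hypothetical name, not part of the patented system.

```python
import numpy as np

def particle_velocity(positions, pulse_interval):
    """Estimate speed and direction along a particle trajectory recorded
    by successive laser pulses on a single camera image.

    positions:      (N, 2) array of particle-image centroids (x, y),
                    in physical units, ordered by exposure
    pulse_interval: time between successive laser pulses (s)
    """
    positions = np.asarray(positions, dtype=float)
    displacements = np.diff(positions, axis=0)          # per-pulse motion
    speeds = np.linalg.norm(displacements, axis=1) / pulse_interval
    directions = displacements / np.linalg.norm(
        displacements, axis=1, keepdims=True)           # unit vectors
    return speeds, directions

# Example: 0.1 mm of travel per 1 ms pulse interval -> 0.1 m/s.
speeds, dirs = particle_velocity([(0.0, 0.0), (1e-4, 0.0), (2e-4, 0.0)], 1e-3)
print(speeds)   # [0.1 0.1]
```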

  16. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

    A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transportation System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.

  17. Investigation into the use of photoanthropometry in facial image comparison.

    PubMed

    Moreton, Reuben; Morley, Johanna

    2011-10-10

    Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from camera and image resolution in poor quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
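
    The conversion from raw inter-landmark measurements to proportionality indices is a simple normalization, sketched below under the assumption that each PI is a landmark-to-landmark distance divided by one fixed reference distance; the landmark names and the choice of reference are illustrative, not taken from the study.

```python
import numpy as np
from itertools import combinations

def proportionality_indices(landmarks, ref=("ectocanthion_l", "ectocanthion_r")):
    """Convert inter-landmark distances into proportionality indices (PIs)
    by normalising every pairwise distance with one reference distance.

    landmarks: dict mapping landmark name -> (x, y) image coordinates
    ref:       pair of landmark names whose distance is the normaliser
    """
    def dist(a, b):
        return float(np.hypot(*np.subtract(landmarks[a], landmarks[b])))

    ref_dist = dist(*ref)
    return {(a, b): dist(a, b) / ref_dist
            for a, b in combinations(sorted(landmarks), 2)}

# Hypothetical landmark coordinates from one facial photograph.
marks = {"nasion": (100, 80), "subnasale": (100, 140),
         "ectocanthion_l": (70, 90), "ectocanthion_r": (130, 90)}
pis = proportionality_indices(marks)
```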

  18. Fourier Plane Image Combination by Feathering

    NASA Astrophysics Data System (ADS)

    Cotton, W. D.

    2017-09-01

    Astronomical objects frequently exhibit structure over a wide range of scales whereas many telescopes, especially interferometer arrays, only sample a limited range of spatial scales. To properly image these objects, images from a set of instruments covering the range of scales may be needed. These images then must be combined in a manner to recover all spatial scales. This paper describes the feathering technique for image combination in the Fourier transform plane. Implementations in several packages are discussed and example combinations of single dish and interferometric observations of both simulated and celestial radio emission are given.
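
    In essence, feathering takes a weighted average of the two images' Fourier transforms, with weights that sum to one and cross over smoothly at a scale where both instruments are reliable. The sketch below is a minimal illustration of that idea, assuming the two images are already aligned, on the same pixel grid and in the same flux scale (real packages also handle beam and flux matching); the function name and the Gaussian taper are illustrative choices.

```python
import numpy as np

def feather(low_res, high_res, cutoff=0.05):
    """Combine two images of the same field in the Fourier plane.

    low_res:  image reliable at large scales (e.g. single dish)
    high_res: image reliable at small scales (e.g. interferometer)
    cutoff:   crossover spatial frequency, in cycles/pixel
    """
    fy = np.fft.fftfreq(low_res.shape[0])[:, None]
    fx = np.fft.fftfreq(low_res.shape[1])[None, :]
    r = np.hypot(fy, fx)
    # Smooth taper: weight -> 1 at low frequency, -> 0 at high frequency,
    # so the weights on the two transforms always sum to one.
    w = np.exp(-0.5 * (r / cutoff) ** 2)
    combined = w * np.fft.fft2(low_res) + (1 - w) * np.fft.fft2(high_res)
    return np.fft.ifft2(combined).real
```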

  19. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenselet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field on a single 2D image sensor at the sacrifice of the resolution; the angular resolution and the position resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.

  20. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, we place a driving device, such as a piezoelectric ceramic, in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, reflecting the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and particular prior information, making it possible to restore a super-resolution image faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super-resolution and the possible improvement of resolution in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can double the image resolution, obtaining images with higher resolution at currently available hardware levels.
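
    The principle of recovering resolution from randomly displaced LR frames can be illustrated with classical shift-and-add, which is far simpler than the variational Bayesian reconstruction described in the abstract: registered LR pixels are accumulated onto a finer grid. A rough sketch under idealized assumptions (known sub-pixel shifts from registration, no blur or noise model):

```python
import numpy as np

def shift_and_add(lr_images, shifts, factor=2):
    """Place registered low-resolution pixels onto a finer grid.

    lr_images: list of (H, W) low-resolution frames
    shifts:    list of (dy, dx) sub-pixel displacements of each frame,
               in LR pixel units, e.g. from sub-pixel registration
    factor:    resolution increase (2 doubles H and W)
    """
    H, W = lr_images[0].shape
    acc = np.zeros((H * factor, W * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(lr_images, shifts):
        # Map each LR pixel centre to the nearest high-resolution cell.
        ys = np.round((np.arange(H) + dy) * factor).astype(int) % (H * factor)
        xs = np.round((np.arange(W) + dx) * factor).astype(int) % (W * factor)
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1          # leave unobserved HR cells at zero
    return acc / cnt
```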

  1. Demonstration of the Wide-Field Imaging Interferometer Testbed Using a Calibrated Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew R.; Leisawitz, David; Maher, Steve; Rinehart, Stephen

    2012-01-01

    The Wide-field Imaging Interferometer testbed (WIIT) at NASA's Goddard Space Flight Center uses a dual-Michelson interferometric technique. The WIIT combines stellar interferometry with Fourier-transform interferometry to produce high-resolution spatial-spectral data over a large field-of-view. This combined technique could be employed on future NASA missions such as the Space Infrared Interferometric Telescope (SPIRIT) and the Sub-millimeter Probe of the Evolution of Cosmic Structure (SPECS). While both SPIRIT and SPECS would operate at far-infrared wavelengths, the WIIT demonstrates the dual-interferometry technique at visible wavelengths. The WIIT will produce hyperspectral image data, so a true hyperspectral object is necessary. A calibrated hyperspectral image projector (CHIP) has been constructed to provide such an object. The CHIP uses Digital Light Processing (DLP) technology to produce customized, spectrally-diverse scenes. CHIP scenes will have approximately 1.6-micron spatial resolution and the capability of producing arbitrary spectra in the band between 380 nm and 1.6 microns, with approximately 5-nm spectral resolution. Each pixel in the scene can take on a unique spectrum. Spectral calibration is achieved with an onboard fiber-coupled spectrometer. In this paper we describe the operation of the CHIP. Results from the WIIT observations of CHIP scenes will also be presented.

  2. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, pixel pitch 38 µm, focal length of 1.8 m, FOV of 5.4 x 5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  3. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon point-and-shoot auto-focusing camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  4. Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.

    ERIC Educational Resources Information Center

    Mills, David A.; Kelley, Kevin; Jones, Michael

    2001-01-01

    Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)

  5. Lincoln Penny on Mars in Camera Calibration Target

    NASA Image and Video Library

    2012-09-10

    The penny in this image is part of a camera calibration target on NASA Mars rover Curiosity. The MAHLI camera on the rover took this image of the MAHLI calibration target during the 34th Martian day of Curiosity work on Mars, Sept. 9, 2012.

  6. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512 pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has a maximum of 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  7. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of an X-ray sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  8. An image-tube camera for cometary spectrography

    NASA Astrophysics Data System (ADS)

    Mamadov, O.

    The paper discusses the mounting of an image tube camera. The cathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. Also discussed is the way that the camera is joined to the spectrograph.

  9. Preliminary GAOFEN-3 Insar dem Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Q.; Li, T.; Tang, X.; Gao, X.; Zhang, X.

    2018-04-01

    The GF-3 satellite, the first C-band, full-polarization SAR satellite of China with a spatial resolution of 1 m, was successfully launched in August 2016. In this paper we analyze the error sources of the GF-3 satellite and provide an interferometric calibration model based on the range function, the Doppler shift equation and the interferometric phase function, with the interferometric parameters calibrated using the three-dimensional coordinates of ground control points. We then conduct experiments on two pairs of images in fine stripmap I mode covering Songshan in Henan Province and Tangshan in Hebei Province, respectively. The DEM data are assessed against the SRTM DEM, ICESat-GLAS points, and a ground control point database obtained from the ZY-3 satellite to validate the accuracy of the DEM elevation. The experimental results show that the accuracy of the DEM extracted from GF-3 SAR data can meet the requirements of topographic mapping in mountain and alpine regions at the 1:50000 scale in China. Besides, it proves that the GF-3 satellite has potential for interferometry.

  10. Rapid interferometric imaging of printed drug laden multilayer structures

    NASA Astrophysics Data System (ADS)

    Sandler, Niklas; Kassamakov, Ivan; Ehlers, Henrik; Genina, Natalja; Ylitalo, Tuomo; Haeggstrom, Edward

    2014-02-01

    The developments in printing technologies allow fabrication of micron-size, nano-layered delivery systems to personal specifications. In this study we fabricated layered polymer structures for drug delivery into a microfluidic channel and aimed to interferometrically assure their topography and adherence to each other. We present a scanning white light interferometer (SWLI) method for quantitative assurance of the topography of the embedded structure. We rapidly determined, in a non-destructive manner, the thickness and roughness of the structures and whether the printed layers containing polymers and/or active pharmaceutical ingredients (API) adhere to each other. This is crucial in order to have predetermined drug release profiles. We also demonstrate non-invasive measurement of a polymer structure in a microfluidic channel. It is shown that traceable interferometric 3D microscopy is a viable technique for detailed structural quality assurance of layered drug-delivery systems. The approach can have impact and find use in a much broader setting within and outside the life sciences.

  11. Camera Trajectory fromWide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible) and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions including MSER, Harris Affine, and Hessian Affine in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used because the viewpoint can change a lot between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes the feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in the literature, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling as in RANSAC but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In earlier work we have introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, such as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.

  12. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  13. Bayes classification of interferometric TOPSAR data

    NASA Technical Reports Server (NTRS)

    Michel, T. R.; Rodriguez, E.; Houshmand, B.; Carande, R.

    1995-01-01

    We report the Bayes classification of terrain types at different sites using airborne interferometric synthetic aperture radar (INSAR) data. A Gaussian maximum likelihood classifier was applied on multidimensional observations derived from the SAR intensity, the terrain elevation model, and the magnitude of the interferometric correlation. Training sets for forested, urban, agricultural, or bare areas were obtained either by selecting samples with known ground truth, or by k-means clustering of random sets of samples uniformly distributed across all sites, and subsequent assignment of these clusters using ground truth. The accuracy of the classifier was used to optimize the discriminating efficiency of the chosen feature set. The most important features include the SAR intensity, a canopy penetration depth model, and the terrain slope. We demonstrate the classifier's performance across sites using a unique set of training classes for the four main terrain categories. The scenes examined include San Francisco (CA) (predominantly urban and water), Mount Adams (WA) (forested with clear cuts), Pasadena (CA) (urban with mountains), and Antioch Hills (CA) (water, swamps, fields). Issues related to the effects of image calibration and the robustness of the classification to calibration errors are explored. The relative performance of single polarization interferometric data classification is contrasted against classification schemes based on polarimetric SAR data.
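
    The core of a Gaussian maximum likelihood classifier of this kind fits one mean vector and covariance matrix per terrain class and assigns each pixel's feature vector to the class with the highest log-likelihood (equal priors assumed). The following is a schematic sketch, not the authors' implementation; the class and feature names are placeholders.

```python
import numpy as np

def train(features_by_class):
    """Fit a Gaussian per class from (N, D) training samples, where the
    D features might be SAR intensity, elevation, slope and coherence."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False))
            for c, x in features_by_class.items()}

def classify(x, model):
    """Maximum-likelihood class assignment with equal class priors."""
    def loglike(mu, cov):
        d = x - mu
        # log N(x; mu, cov) up to a class-independent constant
        return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
    return max(model, key=lambda c: loglike(*model[c]))

# Toy example with synthetic 3-D feature vectors for two classes.
rng = np.random.default_rng(0)
model = train({"water": rng.normal(0, 1, (200, 3)),
               "forest": rng.normal(3, 1, (200, 3))})
print(classify(np.array([2.9, 3.2, 3.1]), model))   # forest
```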

  14. Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

    PubMed Central

    Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments have demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution as under structured illumination. However, there has not been a physical model to rigorously describe the speckle imaging process, in particular explaining the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on the diffraction tomography theory and the speckle statistics. Using this model, we calculate the system response function, which is used to further obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800

  15. Probing mass-transport and binding inhomogeneity in macromolecular interactions by molecular interferometric imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Wang, Xuefeng; Nolte, David

    2009-02-01

    In solid-support immunoassays, the transport of target analyte in sample solution to capture molecules on the sensor surface controls the detected binding signal. Depletion of the target analyte in the sample solution adjacent to the sensor surface leads to deviations from ideal association, and causes inhomogeneity of surface binding as analyte concentration varies spatially across the sensor surface. In the field of label-free optical biosensing, studies of mass-transport-limited reaction kinetics have focused on the average response on the sensor surface, but have not addressed binding inhomogeneities caused by mass-transport limitations. In this paper, we employ Molecular Interferometric Imaging (MI2) to study mass-transport-induced inhomogeneity of analyte binding within a single protein spot. Rabbit IgG binding to immobilized protein A/G was imaged at various concentrations and under different flow rates. In the mass-transport-limited regime, enhanced binding at the edges of the protein spots was caused by depletion of analyte towards the center of the protein spots. The magnitude of the inhomogeneous response was a function of analyte reaction rate and sample flow rate.

  16. Exploring three faint source detections methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have been classically based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.
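
    For reference, the classical local-thresholding baseline that these techniques aim to improve on can be sketched in a few lines: estimate the background and rms noise in a sliding window, then keep pixels above a fixed significance. This is a rough sketch with assumed parameters, not an implementation of SEXTRACTOR, SAD or DUCHAMP.

```python
import numpy as np
from scipy import ndimage

def detect_sources(image, box=64, nsigma=5.0):
    """Thresholding after local noise estimation: flag pixels exceeding
    nsigma times the locally estimated rms noise above the background.

    image:  2-D radio mosaic
    box:    side of the sliding window used for local statistics
    nsigma: detection threshold in units of the local rms
    """
    local_med = ndimage.median_filter(image, size=box)
    # Robust local noise estimate from the median absolute deviation.
    local_mad = ndimage.median_filter(np.abs(image - local_med), size=box)
    local_rms = 1.4826 * local_mad
    mask = image > local_med + nsigma * local_rms
    labels, n = ndimage.label(mask)
    # Return one centroid per connected island of significant pixels.
    return ndimage.center_of_mass(image, labels, list(range(1, n + 1)))
```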

  17. Bio-Inspired Sensing and Imaging of Polarization Information in Nature

    DTIC Science & Technology

    2008-05-04

    polarization imaging,” Appl. Opt. 36, 150–155 (1997). 5. L. B. Wolff, “Polarization camera for computer vision with a beam splitter,” J. Opt. Soc. Am. A...vision with a beam splitter,” J. Opt. Soc. Am. A 11, 2935–2945 (1994). 2. L. B. Wolff and A. G. Andreou, “Polarization camera sensors,” Image Vis. Comput...group we have been developing various man-made, non-invasive imaging methodologies, sensing schemes, camera systems, and visualization and display

  18. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include the replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations: a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  19. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.

  20. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. For HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area array CMOS (complementary metal oxide semiconductor) sensor with the digital domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image can then be achieved by fusing the two original images with a simple algorithm. In experiments, the dynamic range (DR) of the image increases by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the cameras and scenes.
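
    The fusion step itself can be as simple as keeping the longer (higher-gain) exposure wherever it is unsaturated and substituting the radiometrically rescaled short exposure elsewhere. This is a toy sketch under assumed normalisation; the paper's actual fusion algorithm is not specified beyond being "simple".

```python
import numpy as np

def fuse_hdr(short_exp, long_exp, exposure_ratio, sat_level=0.95):
    """Fuse two push-broom images of the same scene line taken with
    different effective exposures into one higher-dynamic-range image.

    short_exp, long_exp: co-registered images normalised to [0, 1]
    exposure_ratio:      long exposure time / short exposure time
    sat_level:           level above which long_exp is considered clipped
    """
    # Where the long exposure is unsaturated it has the better SNR;
    # elsewhere fall back to the rescaled short exposure.
    return np.where(long_exp < sat_level,
                    long_exp,
                    short_exp * exposure_ratio)
```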

  1. Super-resolved refocusing with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu

    2011-03-01

    This paper presents an approach to enhance the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics like peak signal-to-noise ratio (PSNR).

  2. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.

  3. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the Adaptive Sensitivity™ patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details `in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data which is included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field which is displayed on the monitor such that the complete field of view of the endoscope can be displayed on the entire area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  4. Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation: Application to Imaging the Black Hole Shadow

    NASA Astrophysics Data System (ADS)

    Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki

    2018-05-01

    We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
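
    In this framework the image is the minimizer of a data-fidelity term plus the two regularizers; TSV differs from conventional total variation in squaring the gradient magnitudes, which favors smooth edges. In the paraphrased notation below (the weighting factors λ are illustrative symbols), V denotes the observed visibilities, F the Fourier sampling operator, and x the brightness distribution:

```latex
\hat{x} = \arg\min_{x}\;
  \lVert V - F x \rVert_2^2
  + \lambda_{\ell}\, \lVert x \rVert_1
  + \lambda_{t}\, \mathrm{TSV}(x),
\qquad
\mathrm{TSV}(x) = \sum_{i,j}\left[(x_{i+1,j}-x_{i,j})^2 + (x_{i,j+1}-x_{i,j})^2\right]
```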

  5. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program which processes the images of particles into the velocity field in the illuminated plane.

  6. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

    Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably nowadays. The digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer pattern CFA image to avoid the non-linearity characteristics and noise amplification which usually deteriorate after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposed images which are noisy, but less blurred. Our approach is designed to avoid the ghost artifact caused by hand-shaking and object motion. In order to achieve the desired ISO image quality, both the low frequency chromatic noise and the fine-grain noise that usually appear in high ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach is performed on an input Bayer pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.

  7. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  8. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for new NIR sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems that are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom to measure sensitivity and resolution. Selected narrow band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate to create a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue of controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR sensitive consumer cameras for potential use in image guided surgery. The image of the spot obtained with the cameras was captured and analyzed using ImageJ software. Enhanced CCD night vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled as to gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved to be useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.

  9. Label-free tissue scanner for colorectal cancer screening

    NASA Astrophysics Data System (ADS)

    Kandel, Mikhail E.; Sridharan, Shamira; Liang, Jon; Luo, Zelun; Han, Kevin; Macias, Virgilia; Shah, Anish; Patel, Roshan; Tangella, Krishnarao; Kajdacsy-Balla, Andre; Guzman, Grace; Popescu, Gabriel

    2017-06-01

    The current practice of surgical pathology relies on external contrast agents to reveal tissue architecture, which is then qualitatively examined by a trained pathologist. The diagnosis is based on the comparison with standardized empirical, qualitative assessments of limited objectivity. We propose an approach to pathology based on interferometric imaging of "unstained" biopsies, which provides unique capabilities for quantitative diagnosis and automation. We developed a label-free tissue scanner based on "quantitative phase imaging," which maps out optical path length at each point in the field of view and, thus, yields images that are sensitive to the "nanoscale" tissue architecture. Unlike analysis of stained tissue, which is qualitative in nature and affected by color balance, staining strength and imaging conditions, optical path length measurements are intrinsically quantitative, i.e., images can be compared across different instruments and clinical sites. These critical features allow us to automate the diagnosis process. We paired our interferometric optical system with highly parallelized, dedicated software algorithms for data acquisition, allowing us to image at a throughput comparable to that of commercial tissue scanners while maintaining the nanoscale sensitivity to morphology. Based on the measured phase information, we implemented software tools for autofocusing during imaging, as well as image archiving and data access. To illustrate the potential of our technology for large volume pathology screening, we established an "intrinsic marker" for colorectal disease that detects tissue with dysplasia or colorectal cancer and flags specific areas for further examination, potentially improving the efficiency of existing pathology workflows.

  10. Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics

    NASA Astrophysics Data System (ADS)

    Furxhi, Orges; Frascati, Joe; Driggers, Ronald

    2018-04-01

    Panoramic imaging is inherently wide field of view. High-sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back working distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated commercially available cameras and lenses.

  11. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays that does not require an expensive imaging colorimeter. Uniform R, G and B color patterns were shown on the display under study, and the images were taken using a high-resolution monochromatic camera; a colorimeter was used to calibrate the camera images. Synthetic intensity images were then formed as the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation and suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
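
    A minimal sketch of the described pipeline, assuming illustrative luminance weights (the paper's colorimeter-calibrated weights are not given in the abstract): form the synthetic intensity image, then estimate the NPS as a normalized 2D periodogram.

```python
import numpy as np

def synthetic_intensity(r, g, b, dark, weights=(0.2126, 0.7152, 0.0722)):
    # Weighted sum of the channel captures minus the dark-screen image;
    # the weights here are generic luminance coefficients (an assumption).
    return weights[0] * r + weights[1] * g + weights[2] * b - dark

def noise_power_spectrum(img, pixel_pitch_mm):
    # Remove the mean so the zero-frequency pedestal does not dominate,
    # then take the scaled 2D periodogram.
    fluct = img - img.mean()
    ny, nx = img.shape
    nps = np.abs(np.fft.fftshift(np.fft.fft2(fluct))) ** 2
    return nps * (pixel_pitch_mm ** 2) / (nx * ny)
```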

  12. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusion, where an object is in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Detection accuracy depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that the scenes from at least two nearby cameras overlap. An object can then be tracked continuously over long distances or across multiple cameras, applicable, for example, in wireless sensor networks for surveillance or navigation.

  13. Clinical evaluation of pixellated NaI:Tl and continuous LaBr3:Ce, compact scintillation cameras for breast tumors imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.

    2007-02-01

    The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers smaller than 1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which has demonstrated superior imaging performance with respect to the dedicated scintillation γ-camera developed previously. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal was chosen to increase detection efficiency. In this paper, we propose a comparison and evaluation of the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them into clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast and more detailed imaging of the uptake area than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera the signal-to-noise ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.

  14. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image...Based Rendering 1.1.1 Camera Calibration Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal...can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the

  15. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and a telemeter are used to augment a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame buffer images of the virtual scene, whose boundary is based on the depth of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship and image object masking. The experimental results show that the proposed scheme for constructing a virtual studio is feasible, and more applicable and effective than the existing technology of building a virtual studio based on color-key and image synthesis with the background using non-linear video editing.

  16. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Abstract Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, a barrier for institutes with limited funding, which hampers progress. An assessment is made of whether a low-cost compact camera with image-stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with those from the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus-stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera's image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, within its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
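
    Focus stacking itself can be sketched as a per-pixel sharpness competition across the stack of slices; the fragment below illustrates the principle and is not the camera's internal (proprietary) algorithm.

```python
import numpy as np
from scipy import ndimage

def focus_stack(slices):
    """Merge a list of grayscale z-slices into one all-in-focus image.

    Each output pixel is taken from the slice with the strongest local
    sharpness, measured here by the absolute Laplacian response.
    """
    stack = np.stack([s.astype(float) for s in slices])          # (n, h, w)
    sharpness = np.stack([np.abs(ndimage.laplace(s)) for s in stack])
    best = np.argmax(sharpness, axis=0)                          # (h, w)
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```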

  17. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the Earth Polychromatic Imaging Camera (EPIC) and telescope aboard the DSCOVR spacecraft and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na... NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  18. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2017-12-08

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the Earth Polychromatic Imaging Camera (EPIC) and telescope aboard the DSCOVR spacecraft and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na... NASA image use policy. NASA Goddard Space Flight Center enables NASA’s mission through four scientific endeavors: Earth Science, Heliophysics, Solar System Exploration, and Astrophysics. Goddard plays a leading role in NASA’s accomplishments by contributing compelling scientific knowledge to advance the Agency’s mission.

  19. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image with horizontally separated cameras. Objects at different depths are projected with different horizontal displacements onto the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
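
    The geometry behind the disparity cue is the standard parallel-rig pinhole relation; the tiny sketch below uses generic symbols (not the paper's notation) under the assumption of rectified cameras.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # z = f * B / d: depth is inversely proportional to horizontal disparity,
    # so constant disparity steps correspond to non-uniform depth steps.
    return focal_px * baseline_m / disparity_px

# Example: f = 1000 px, B = 65 mm (eye-like), 10 px disparity -> 6.5 m depth.
print(depth_from_disparity(1000.0, 0.065, 10.0))
```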

  20. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise across frames. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
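
    The abstract does not spell out the "optimal ray projection method"; a common least-squares formulation for intersecting the rays that view one landmark is sketched below as a hypothetical stand-in.

```python
import numpy as np

def triangulate_rays(centers, directions):
    """Least-squares 3D point closest to a bundle of camera rays.

    Each ray is a camera centre c_i and a direction d_i toward the matched
    feature; we minimize the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += P
        b += P @ np.asarray(c, float)
    return np.linalg.solve(A, b)
```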

  1. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Large-format scientific cameras are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical derivation of the replication problem based on Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.
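
    The replication effect can be reproduced numerically: keeping only one Bayer channel's pixel sites is a 2x2 subsampling, which duplicates the hologram's spectral orders at half the sampling frequency. The sketch below is illustrative, not the authors' simulation code.

```python
import numpy as np

# Stand-in off-axis hologram: a tilted fringe pattern.
y, x = np.mgrid[0:512, 0:512]
fringes = np.cos(2 * np.pi * (0.06 * x + 0.02 * y))

# Keep only the 'red' sites of a Bayer mosaic (every second row and column).
bayer_red = np.zeros_like(fringes)
bayer_red[0::2, 0::2] = fringes[0::2, 0::2]

# The spectrum of the subsampled hologram shows the original +/-1 orders plus
# copies shifted by half the sampling frequency along each axis -- the
# replicated terms that contaminate the reconstruction.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(bayer_red)))
```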

  2. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm was evaluated for its hand motion tracking capability. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and the time difference between image acquisition and image display. Three parameters were analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
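
    A minimal version of the mean-shift step evaluated here, assuming a per-pixel weight map (e.g., a hand-likelihood image derived from the range data); the window format and names are hypothetical.

```python
import numpy as np

def mean_shift(weights, window, max_iter=20, eps=0.5):
    """Move a window to the centroid of the weights under it until convergence.

    `window` is (row, col, height, width); `weights` is a 2D non-negative map.
    """
    r, c, h, w = window
    for _ in range(max_iter):
        patch = weights[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break
        rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        dr = (rows * patch).sum() / total - (patch.shape[0] - 1) / 2.0
        dc = (cols * patch).sum() / total - (patch.shape[1] - 1) / 2.0
        r = int(np.clip(round(r + dr), 0, weights.shape[0] - h))
        c = int(np.clip(round(c + dc), 0, weights.shape[1] - w))
        if abs(dr) < eps and abs(dc) < eps:
            break
    return r, c, h, w
```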

  3. Bam, Iran, Radar Interferometry -- Earthquake

    NASA Image and Video Library

    2004-06-25

    A magnitude 6.5 earthquake devastated the small city of Bam in southeast Iran on December 26, 2003. The two images from ESA Envisat show similar measures of the radar interferometric correlation in grayscale on the left and in false colors on the right.

  4. Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Holbling, Daniel

    2015-05-01

    This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.

  5. Focal-Plane Imaging of Crossed Beams in Nonlinear Optics Experiments

    NASA Technical Reports Server (NTRS)

    Bivolaru, Daniel; Herring, G. C.

    2007-01-01

    An application of focal-plane imaging that can be used as a real time diagnostic of beam crossing in various optical techniques is reported. We discuss two specific versions and demonstrate the capability of maximizing system performance with an example in a combined dual-pump coherent anti-Stokes Raman scattering interferometric Rayleigh scattering experiment (CARS-IRS). We find that this imaging diagnostic significantly reduces beam alignment time and loss of CARS-IRS signals due to inadvertent misalignments.

  6. White light phase shifting interferometry and color fringe analysis for the detection of contaminants in water

    NASA Astrophysics Data System (ADS)

    Dubey, Vishesh; Singh, Veena; Ahmad, Azeem; Singh, Gyanendra; Mehta, Dalip Singh

    2016-03-01

    We report white light phase shifting interferometry in conjunction with color fringe analysis for the detection of contaminants in water, such as Escherichia coli (E. coli), Campylobacter coli and Bacillus cereus. The experimental setup is based on a common-path interferometer using a Mirau interferometric objective lens. White light interferograms are recorded using a 3-chip color CCD camera based on prism technology; the 3-chip color camera has less color crosstalk and better spatial resolution than a single-chip CCD camera. A piezo-electric transducer (PZT) phase shifter is fixed to the Mirau objective, and both are attached to a conventional microscope. Five phase-shifted white light interferograms are recorded by the 3-chip color CCD camera, and each phase-shifted interferogram is decomposed into its red, green and blue constituents, thus producing three sets of five phase-shifted interferograms for three different colors from a single set of white light interferograms. This makes the system less time consuming and less affected by the surrounding environment. First, 3D phase maps of the bacteria are reconstructed for the red, green and blue wavelengths from these interferograms using MATLAB; from these phase maps we determine the refractive index (RI) of the bacteria. Experimental results of 3D shape measurement and RI at multiple wavelengths will be presented. These results might find applications in the detection of contaminants in water without any chemical processing or fluorescent dyes.
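
    The abstract does not state which five-frame algorithm is used; a common choice for pi/2 phase steps is the Hariharan formula, sketched here per color channel as an assumption.

```python
import numpy as np

def five_frame_phase(i1, i2, i3, i4, i5):
    """Hariharan five-frame phase-shifting algorithm (pi/2 step between frames).

    Applied separately to the R, G and B constituents of the interferograms,
    this yields one wrapped phase map per wavelength band.
    """
    return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)
```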

  7. Wideband Interferometric Sensing and Imaging Polarimetry

    NASA Technical Reports Server (NTRS)

    Verdi, James Salvatore; Kessler, Otto; Boerner, Wolfgang-Martin

    1996-01-01

    Wideband Interferometric Sensing and Imaging Polarimetry (WISIP) has become an important, indispensable tool in wide-area military surveillance and global environmental monitoring of terrestrial and planetary covers. It enables dynamic, real-time optimal feature extraction of significant characteristics of desirable targets and/or target sections, with simultaneous suppression of undesirable background clutter and propagation-path speckle, at hitherto unknown clarity and never-before-achieved quality. WISIP may be adapted to the detection, recognition, and identification (DRI) of any stationary, moving or vibrating targets or distributed scatterer segments against arbitrary stationary, dynamically changing and/or moving geophysical/ecological environments, provided the instantaneous 2x2 phasor and 4x4 power density matrices for forward propagation/backward scattering, respectively, can be measured with sufficient accuracy. For example, the DRI of stealthy, dynamically moving inhomogeneous volumetric scatter environments such as precipitation scatter, ocean/sea/lake surface boundary layers, littoral coastal surf zones, pack ice and snow or vegetative canopies, dry sands and soils, etc. can now be successfully realized. A comprehensive overview is presented of how these modern high-resolution/precision, fully polarimetric, co-registered signature sensing and imaging techniques, complemented by the full integration of novel navigational electronic tools such as DGPS, will advance electromagnetic vector wave sensing and imaging towards the limits of physical realization. Various examples utilizing the most recent image data-take sets from airborne, space shuttle, and satellite imaging systems demonstrate the utility of WISIP.

  8. The influence of underwater turbulence on optical phase measurements

    NASA Astrophysics Data System (ADS)

    Redding, Brandon; Davis, Allen; Kirkendall, Clay; Dandridge, Anthony

    2016-05-01

    Emerging underwater optical imaging and sensing applications rely on phase-sensitive detection to provide added functionality and improved sensitivity. However, underwater turbulence introduces spatio-temporal variations in the refractive index of water which can degrade the performance of these systems. Although the influence of turbulence on traditional, non-interferometric imaging has been investigated, its influence on the optical phase remains poorly understood. Nonetheless, a thorough understanding of the spatio-temporal dynamics of the optical phase of light passing through underwater turbulence is crucial to the design of phase-sensitive imaging and sensing systems. To address this concern, we combined underwater imaging with high-speed holography to provide a calibrated characterization of the effects of turbulence on the optical phase. By measuring the modulation transfer function of an underwater imaging system, we were able to calibrate varying levels of optical turbulence intensity using the Simple Underwater Imaging Model (SUIM). We then used high-speed holography to measure the temporal dynamics of the optical phase of light passing through varying levels of turbulence. Using this method, we measured the variance in the amplitude and phase of the beam and the temporal correlation of the optical phase, and recorded the turbulence-induced phase noise as a function of frequency. By benchmarking the effects of varying levels of turbulence on the optical phase, this work provides a basis to evaluate the real-world potential of emerging underwater interferometric sensing modalities.

  9. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  10. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to producing super-resolved all-in-focus images with a plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, focused at different depths, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the array, so the limited number of micro-lenses results in low-resolution refocused images lacking fine detail. Such lost details, mostly high-frequency information, are important for the in-focus parts of the refocused image, so we super-resolve those in-focus parts. An image segmentation method based on random walks, applied to the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image, and a focusing evaluation function is employed to determine which refocused image has the sharpest foreground and which has the sharpest background. Subsequently, we employ a single-image super-resolution method based on sparse signal representation to process the in-focus parts of these selected refocused images. Eventually, we obtain the super-resolved all-in-focus image by digitally merging the in-focus background and foreground parts, so that more spatial detail is retained in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the sharpest foreground and background need to be super-resolved.
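
    A simplified sketch of the selection-and-merge logic (the sparse-representation super-resolution stage is omitted): score each refocused image with a focus measure, then composite using the segmentation mask. Names are illustrative.

```python
import numpy as np
from scipy import ndimage

def sharpest_index(images, mask):
    # Focus evaluation: variance of the Laplacian inside the masked region;
    # higher variance = sharper.
    scores = [ndimage.laplace(im.astype(float))[mask].var() for im in images]
    return int(np.argmax(scores))

def merge_all_in_focus(refocused, fg_mask):
    # Pick the refocused image with the sharpest foreground and the one with
    # the sharpest background, then composite them via the mask.
    fg = refocused[sharpest_index(refocused, fg_mask)]
    bg = refocused[sharpest_index(refocused, ~fg_mask)]
    return np.where(fg_mask, fg, bg)
```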

  11. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums. PMID:23112656

  12. Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging

    NASA Astrophysics Data System (ADS)

    Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.

    2006-12-01

    In an effort to improve the monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultraviolet (UV) region, where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentration. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances between 4 and 28 km, with crisp detection up to approximately 16 km. Camera set-up time in the field ranges from 5-10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed, and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume through sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to its rapid deployment and data processing capabilities, relatively low cost, and the improved interpretation afforded by synoptic plume coverage from a range of distances.
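
    The underlying retrieval is a Beer-Lambert absorbance scaled through the calibration cells; the sketch below is a schematic version only (the scattered-light correction mentioned above is not included).

```python
import numpy as np

def so2_column_image(plume_img, clear_sky_img, cell_absorbances, cell_columns):
    """Apparent SO2 column from a UV camera image pair (illustrative).

    A = -ln(I_plume / I_background); cells of known column density give the
    linear mapping from absorbance to column amount via a least-squares fit.
    """
    A = -np.log(plume_img.astype(float) / clear_sky_img.astype(float))
    slope, intercept = np.polyfit(cell_absorbances, cell_columns, 1)
    return slope * A + intercept
```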

  13. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  14. Adaptive Unscented Kalman Filter Phase Unwrapping Method and Its Application on Gaofen-3 Interferometric SAR Data.

    PubMed

    Gao, Yandong; Zhang, Shubi; Li, Tao; Chen, Qianfu; Li, Shijin; Meng, Pengfei

    2018-06-02

    Phase unwrapping (PU) is a key step in the reconstruction of digital elevation models (DEMs) and the monitoring of surface deformation from interferometric synthetic aperture radar (InSAR) data. In this paper, an improved PU method is proposed that combines an amended matrix pencil model, an adaptive unscented Kalman filter (AUKF), an efficient quality-guided strategy based on heapsort, and a circular median filter. PU theory and the existing UKFPU method are reviewed, and the improved method is then presented with emphasis on the AUKF and the circular median filter. The AUKF has been used successfully in other fields, but to the best of our knowledge this is the first time it has been applied to PU of interferometric images. First, the amended matrix pencil model is used to estimate the phase gradient. Then, an AUKF model unwraps the interferometric phase following the efficient heapsort-based quality-guided strategy. Finally, the results are filtered with a circular median filter. The proposed method is compared with the minimum cost network flow (MCF), statistical cost network flow (SNAPHU), regularized phase tracking technique (RPTPU), and UKFPU methods using two sets of simulated data and two sets of experimental GF-3 SAR data. The improved method yields the most accurate interferometric phase maps of the methods considered in this paper. Furthermore, it is the most robust to noise and is thus the best suited for PU of GF-3 SAR data in high-noise and low-coherence regions.
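
    The AUKF update itself is beyond an abstract-level sketch, but the heapsort-based quality-guided traversal it relies on can be illustrated as follows; simple nearest-neighbour unwrapping stands in for the filter update.

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Unwrap pixels in decreasing quality order using a max-heap."""
    h, w = wrapped.shape
    unwrapped = np.zeros((h, w), dtype=float)
    visited = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), (h, w))
    unwrapped[seed], visited[seed] = wrapped[seed], True
    heap = [(-quality[seed], seed)]
    while heap:
        _, (r, c) = heapq.heappop(heap)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                # Integrate the wrapped difference relative to the neighbour.
                d = wrapped[nr, nc] - wrapped[r, c]
                d -= 2 * np.pi * np.round(d / (2 * np.pi))
                unwrapped[nr, nc] = unwrapped[r, c] + d
                visited[nr, nc] = True
                heapq.heappush(heap, (-quality[nr, nc], (nr, nc)))
    return unwrapped
```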

  15. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera imaging system, together with the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables, that is, the dark image, the base correction image, and the reference level, on the quality of the correction, as well as the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved with a nonzero dark image, using as the base correction image an image whose mean digital level lies in the linear response range of the camera, and taking the mean digital level of the image as the reference digital level. After the optimized algorithm has been applied, the response of the CCD color camera imaging system to the uniform radiance field shows a high level of spatial uniformity, which also allows a high-quality spatial nonuniformity correction of images captured under different exposure conditions.
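
    A linear correction of this family (not necessarily the paper's exact formulation) can be sketched as a gain/offset normalization built from the dark image, the base correction image and the reference level.

```python
import numpy as np

def correct_nonuniformity(raw, dark, base, ref_level=None):
    """Per-pixel linear flat-field correction.

    `dark` is the dark image, `base` the base correction image captured under
    the uniform radiance field; the reference level defaults to the mean of
    the dark-subtracted base image, mirroring the paper's preferred choice.
    """
    gain = base.astype(float) - dark
    gain = np.where(gain == 0, np.finfo(float).eps, gain)  # avoid divide-by-zero
    if ref_level is None:
        ref_level = gain.mean()
    return (raw.astype(float) - dark) * (ref_level / gain)
```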

  16. Performance evaluation of low-cost airglow cameras for mesospheric gravity wave measurements

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Shiokawa, K.

    2016-12-01

    Atmospheric gravity waves significantly contribute to the wind/thermal balances in the mesosphere and lower thermosphere (MLT) through their vertical transport of horizontal momentum. It has been reported that the gravity wave momentum flux depends on the scale of the waves; the momentum fluxes of waves with horizontal scales of 10-100 km are particularly significant. Airglow imaging is a useful technique for observing the two-dimensional structure of small-scale (<100 km) gravity waves in the MLT region and has been used to investigate the global behaviour of the waves. Recent studies with simultaneous/multiple airglow cameras have derived the spatial extent of the MLT waves. Such network imaging observations are advantageous for an ever better understanding of coupling between the lower and upper atmosphere via gravity waves. In this study, we newly developed low-cost airglow cameras to enlarge the airglow imaging network. Each camera has a fish-eye lens with a 185-deg field of view and is equipped with a CCD video camera (WATEC WAT-910HX); the camera is small (W35.5 x H36.0 x D63.5 mm) and much less expensive than the airglow cameras used in the existing ground-based network (the Optical Mesosphere Thermosphere Imagers (OMTI) operated by the Solar-Terrestrial Environment Laboratory, Nagoya University), and its 768 x 494 pixel CCD sensor is sensitive enough to detect perturbations of the mesospheric OH airglow emission. In this presentation, we report results of a performance evaluation of this camera made at Shigaraki (35-deg N, 136-deg E), Japan, which is one of the OMTI stations. By summing 15 images (i.e., a 1-min composite), we recognised clear gravity wave patterns with quality comparable to the OMTI images. Outreach and educational activities based on this research will also be reported.
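
    The 1-min compositing mentioned above is plain frame summation; a trivial sketch (dark subtraction and flat-fielding omitted).

```python
import numpy as np

def one_minute_composite(frames):
    # Sum 15 consecutive frames to raise the signal-to-noise ratio of the
    # faint OH airglow structure, as done for the evaluation images.
    return np.sum(np.stack([f.astype(float) for f in frames[:15]]), axis=0)
```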

  17. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  18. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds, acquired by state-of-the-art terrestrial laser scanning techniques (TLS), provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, by mounting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D images with 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
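
    Generating the synthetic image hinges on projecting the point cloud through an ideal, distortion-free pinhole model at the assumed projection centre; a hypothetical sketch of that projection step follows.

```python
import numpy as np

def project_points(points_xyz, R, t, f_px, cx, cy):
    """Project Nx3 world points into pixel coordinates of a pinhole camera.

    R (3x3) and t (3,) are the exterior orientation of the synthetic
    projection centre; f_px, cx, cy define the ideal interior orientation.
    """
    cam = (R @ points_xyz.T).T + t            # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]                  # keep points in front of camera
    u = f_px * cam[:, 0] / cam[:, 2] + cx
    v = f_px * cam[:, 1] / cam[:, 2] + cy
    return np.column_stack([u, v])
```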

  19. Dense Region of Impact Craters

    NASA Image and Video Library

    2011-09-23

    NASA Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14 2011. This image was taken through the camera clear filter. The image has a resolution of about 260 meters per pixel.

  20. Low-cost printing of computerised tomography (CT) images where there is no dedicated CT camera.

    PubMed

    Tabari, Abdulkadir M

    2007-01-01

    Many developing countries still rely on conventional hard copy images to transfer information among physicians. We have developed a low-cost alternative method of printing computerised tomography (CT) scan images where there is no dedicated camera. A digital camera is used to photograph images from the CT scan screen monitor. The images are then transferred to a PC via a USB port, before being printed on glossy paper using an inkjet printer. The method can be applied to other imaging modalities like ultrasound and MRI and appears worthy of emulation elsewhere in the developing world where resources and technical expertise are scarce.

  1. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.

  2. Sensor noise camera identification: countering counter-forensics

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica; Chen, Mo

    2010-01-01

    In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case where none of the images that the attacker used to create the fake fingerprint are available to the victim, but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be drawn from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
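
    Fingerprint presence is typically established with a normalized correlation between the image's noise residual and the content-modulated fingerprint estimate; the schematic detector below is background for the paper's triangle test, not the test itself.

```python
import numpy as np

def fingerprint_statistic(residual, fingerprint, image):
    """Normalized correlation detector for a PRNU sensor fingerprint.

    `residual` = image minus its denoised version; the multiplicative PRNU
    term appears as fingerprint * image, hence the modulation below.
    """
    expected = fingerprint * image
    a = residual - residual.mean()
    b = expected - expected.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```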

  3. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of the calibration parameters of a camera or imaging system. This is essential to validate the repeatability of the parameter estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.

  4. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  5. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  6. Flame Imaging System

    NASA Technical Reports Server (NTRS)

    Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)

    1998-01-01

    A system for imaging a flame against the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long-pass filter, which under overcast conditions blocks enough background light that the hydrogen flame appears brighter than the background; the second CCD camera uses an 1100 nm long-pass filter, which blocks the solar background in full sunshine so that the hydrogen flame appears brighter than the solar background. Two electronic viewfinders convert the signals from the cameras into visible images. The operator can select the appropriately filtered camera depending on the current light conditions. In addition, a narrow band-pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if it detects a flame, providing additional flame detection so the operator does not overlook a small flame.

  7. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view the specimen from different angles and record images of the test object's surface onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylindrical surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it shows great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  8. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.

  9. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that were previously cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by enabling process monitoring. Example applications of thermography[2] with a thermal camera include: monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc.[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
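
    Once the camera chain is characterized, temperature estimation reduces to inverting a calibrated count-to-temperature response; the 1-D look-up below is a deliberately simplified stand-in for that calibration (a real system also compensates for the camera's own operating temperature).

```python
import numpy as np

def counts_to_temperature(counts, cal_counts, cal_temps_c):
    # cal_counts/cal_temps_c: monotonic calibration pairs measured against
    # blackbody references (hypothetical values shown in the example below).
    return np.interp(counts, cal_counts, cal_temps_c)

# Example calibration: three blackbody points.
print(counts_to_temperature(7200.0, [5000, 7000, 9000], [0.0, 40.0, 80.0]))
```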

  10. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetric purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure - a relatively difficult task because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image-capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images of up to 12 Mp and video of up to 8 Mp resolution. PMID:25237898
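
    The OpenCV workflow the authors describe follows the library's standard chessboard calibration; a minimal sketch is given below. The folder name, board geometry and image files are placeholders, and the record does not say whether the authors used the pinhole model shown here or OpenCV's fisheye model, which often fits very wide lenses better.

        import glob
        import cv2
        import numpy as np

        board = (9, 6)   # inner-corner count of the chessboard target
        objp = np.zeros((board[0] * board[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

        obj_pts, img_pts = [], []
        for path in glob.glob("gopro_calib/*.jpg"):      # hypothetical folder
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, board)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        # Self-calibration: recover the camera matrix K and distortion terms.
        ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)

        # Produce an undistorted scene from any image taken with the camera.
        undistorted = cv2.undistort(cv2.imread("scene.jpg"), K, dist)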

  11. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetric purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure - a relatively difficult task because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image-capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images of up to 12 Mp and video of up to 8 Mp resolution.

  12. Four Eyes Are Better

    NASA Astrophysics Data System (ADS)

    2002-09-01

    VLT Interferometer Passes Another Technical Hurdle. Summary: During the nights of September 15/16 and 16/17, 2002, preliminary tests were successfully carried out during which the light beams from all four VLT 8.2-m Unit Telescopes (UTs) at the ESO Paranal Observatory were successively combined, two by two, to produce interferometric fringes. This marks an important next step towards the full implementation of the VLT Interferometer (VLTI), which will ultimately provide European astronomers with unequalled opportunities for exciting front-line research projects. It is no simple matter to ensure that the quartet of ANTU, KUEYEN, MELIPAL and YEPUN, each a massive giant with a suite of computer-controlled active mirrors, can work together by sending beams of light towards a common focal point via a complex system of compensating optics. Yet, in the span of only two nights, the four VLT telescopes were successfully "paired" to do exactly this, yielding a first tantalizing glimpse of the future possibilities of this new science machine. While there is still a long way to go before the routine production of extremely sharp interferometric images, the present test observations made it possible to demonstrate directly the 2D-resolution capacity of the VLTI by means of multiple measurements of a distant star. Much valuable experience was gained during those two nights, and the ESO engineers and scientists are optimistic that the extensive test observations with the numerous components of the VLTI will continue to progress rapidly. Five intense technical test periods are scheduled during the next six months, some of them with the Mid-Infrared interferometric instrument for the VLTI (MIDI), which will soon be installed at Paranal. Later in 2003, the first of the four moveable VLTI 1.8-m Auxiliary Telescopes (ATs) will be put in place on the top of the mountain; together they will permit regular interferometric observations, even without the large UTs. PR Photo 22a/02: VLT Delay Lines in the Interferometric Tunnel. PR Photo 22b/02: Baselines and "interferometric PSF" from observations of the star Achernar. Combining the VLT telescopes: Less than one year after the first combination of two 8.2-m VLT telescopes - described in detail in ESO Press Release 23/01 - successful tests have now been carried out during which all four telescopes were combined pairwise in rapid succession. Of the six possible combinations (ANTU-KUEYEN, ANTU-MELIPAL, ANTU-YEPUN, KUEYEN-MELIPAL, KUEYEN-YEPUN and MELIPAL-YEPUN), only the last could not be used, because of the current geometrical configuration of the three delay lines installed so far. The combination of the light beams from two (or more) VLT Unit Telescopes is a daunting task.
It involves pointing the telescopes simultaneously towards the same celestial object, ensuring optimal optical adjustment of the computer-controlled telescope mirrors (including the shape of the 8.2-m primary mirror by "active optics"), performing extremely smooth and stable tracking of the object as the Earth turns, guiding the light beams via additional ("coudé") mirrors into the "delay lines" installed in the Interferometric Tunnel below the telescope platform, keeping the total path lengths equal to within a fraction of a micron for hours at a time and, finally, registering the interferometric fringes at the focal point of the VINCI instrument [1], where the light beams encounter each other. Next year, the first adaptive optics systems for the VLTI will be installed below the telescopes. By drastically reducing the smearing effects of the turbulent atmosphere through which the light passes before it enters the telescopes, this will further "stabilize" the imaging and increase the sensitivity of the VLTI by a factor of almost 100. First results with four Unit Telescopes: PR Photo 22b/02: The left panel shows the rather incomplete set of "baselines" used during the present, short interferometric test exposures (in interferometric terminology, the "UV-plane coverage"). Each baseline is represented by two opposite, short arcs, symmetric around the origin (centre) of the diagram. The colour-coded pattern reflects the telescope pairs (ANTU-KUEYEN = magenta, ANTU-MELIPAL = red, ANTU-YEPUN = green, KUEYEN-MELIPAL = cyan, KUEYEN-YEPUN = blue), as seen from the observed object. Due to the limited time available, this distribution is far from uniform and is quite elongated in one direction. To the right is shown the reconstructed, two-dimensional interferometric point-spread function (PSF) of the star Achernar (in "negative" - with most light in the darkest areas). It is the result of subsequent computer processing of the measurements with the different baselines. On the largest scale, the image consists of an inner, round distribution of light, 0.057 arcsec wide, surrounded by an outer, much weaker, broad "ring", with a "white" zone between these two areas. This is the "Airy disk" for a single 8.2-m telescope at this infrared wavelength (the K-band at 2.2 µm). It represents the maximum resolution (image sharpness) obtainable when observing with a single telescope. As explained in the text, the interferometric "addition" of more telescopes greatly improves that resolution. The width of the individual - slightly S-shaped - lines ("fringes") in the inclined pattern visible in the inner area, about 0.003 arcsec, represents the achieved interferometric resolution in one direction (with an angular diameter of about 0.002 arcsec, the disk of Achernar is not resolved, making it a suitable object for this resolution test). The resolution in the perpendicular direction (along the lines) is evidently less; this is due to the specific (elongated) baseline pattern during these test observations (left panel). The image provides a direct illustration of the 20-fold increase in resolution of the VLTI over a single 8.2-m telescope. At this moment, three delay lines have been installed, but for the present first test the VLTI engineers and astronomers used the telescopes in pairs, in order to set up the various equipment configurations properly.
In this way, they could also start "teaching" the computer control software to handle this very demanding process as efficiently and in as user-friendly a manner as possible. With the arrival of the science instrument AMBER in mid-2003, up to three beams can be combined simultaneously. It turned out that the various predictions of mirror positions and angles were quite accurate, and only a moderate amount of time was needed to "obtain fringes" in all the different configurations. Measurements were then made on a number of stars, among them the brightest star in the southern constellation Eridanus (The River), known as Alpha Eridani or Achernar, which was observed several times with the different telescope pairings. This star is a hot dwarf (spectral type B5 IV) located at a distance of about 145 light-years. It has also been extensively observed during earlier VLTI tests. It is a very suitable object for the present resolution tests, as its angular diameter is only about 0.002 arcsec and it therefore remains unresolved at the near-infrared wavelength of the K-band used (2.2 µm). In fact, the combination of these data (including some obtained in October 2001) now makes it possible to reconstruct the first interferometric "point-spread function" (PSF) of a star obtained with the VLTI, cf. PR Photo 22b/02. This is like an "interferometric image", except that the disk of this particular star remains unresolved. The angular resolution is inversely proportional to the aperture of the telescope for a single-telescope observation, and to the length of the "baseline" between two telescopes for an interferometric observation. However, observing interferometrically with two telescopes improves the resolution only in the direction parallel to that baseline, while the resolution in the perpendicular direction remains that of a single telescope. The use of other telescope pairs with different baseline orientations then "adds" resolution in other directions. The reconstructed PSF of Achernar shown in PR Photo 22b/02 is obviously still very incomplete, due to the technical nature of the present tests and the limited time spent observing the star in each configuration. However, it already provides a powerful illustration of the extreme imaging sharpness that will be achieved with the VLTI.
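
As a consistency check on the figures quoted above, both resolutions follow from the standard diffraction relations (using 1 rad ≈ 206265 arcsec; the 130 m value for the longest UT-UT baseline is an assumption, since the record does not state which pair produced the finest fringes):

    \[
      \theta_{\text{single}} \approx \frac{\lambda}{D}
        = \frac{2.2\times10^{-6}\,\mathrm{m}}{8.2\,\mathrm{m}}
        \approx 0.055'' ,
      \qquad
      \theta_{\text{interf}} \approx \frac{\lambda}{B}
        = \frac{2.2\times10^{-6}\,\mathrm{m}}{130\,\mathrm{m}}
        \approx 0.0035'' .
    \]

These values agree with the 0.057-arcsec Airy disk and the roughly 0.003-arcsec fringe width reported, i.e. the cited 20-fold gain in resolution.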

  13. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
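
    The correlated double sampling stage mentioned in the readout chain can be illustrated in a few lines: each pixel is sampled once just after reset and once after charge transfer, and the difference cancels the reset (kTC) noise common to both samples. The sketch below is a numerical illustration under that standard description, not the patent's circuit; all signal levels are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        signal = rng.uniform(100.0, 200.0, n)   # photo-generated level (DN)
        ktc = rng.normal(0.0, 5.0, n)           # reset noise, shared by both samples

        reset_sample = 50.0 + ktc               # sampled right after pixel reset
        signal_sample = 50.0 + ktc + signal     # sampled after charge transfer

        cds = signal_sample - reset_sample      # reset noise cancels exactly
        digital = np.clip(np.round(cds), 0, 255).astype(np.uint8)  # 8-bit ADC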

  14. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections with the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of the image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in light of the experimental protocol, and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
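
    The two corrections at the heart of the study, flat-field (vignetting) division and date-to-date normalization through a vegetation index, can be sketched as follows. The radial falloff model, band values and array names are synthetic stand-ins, not the authors' processing chain.

        import numpy as np

        def correct_vignetting(image, flat_field):
            """Divide out lens falloff measured on a uniform target."""
            return image / (flat_field / flat_field.max())

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index; the ratio suppresses
            overall illumination differences between acquisition dates."""
            return (nir - red) / (nir + red + 1e-9)

        # Synthetic example: a radial vignetting pattern imposed on two bands.
        h, w = 100, 150
        yy, xx = np.mgrid[0:h, 0:w]
        flat = 1.0 - 0.4 * (((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
                            / ((h / 2) ** 2 + (w / 2) ** 2))
        rng = np.random.default_rng(1)
        nir_raw = rng.uniform(0.4, 0.8, (h, w)) * flat   # modified-camera band
        red_raw = rng.uniform(0.1, 0.3, (h, w)) * flat   # original-camera band

        index_map = ndvi(correct_vignetting(nir_raw, flat),
                         correct_vignetting(red_raw, flat))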

  15. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  16. Earth on the Horizon

    NASA Image and Video Library

    2004-03-13

    This is the first image of Earth ever taken from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. Earth is the tiny white dot in the center. The image is a mosaic of navigation-camera images showing a broad view of the sky, combined with a panoramic-camera image of Earth. The contrast in the panoramic camera image was doubled to make Earth easier to see. http://photojournal.jpl.nasa.gov/catalog/PIA05560

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musgrove, Cameron H.; West, James C.

    Missing samples within synthetic aperture radar data result in image distortions. For coherent data products, such as coherent change detection and interferometric processing, this distortion can be devastating, resulting in missed detections and inaccurate height maps. Earlier approaches to repairing the coherent data products focused on reconstructing the missing data samples. This study demonstrates that reconstruction is not necessary to restore the quality of the coherent data products.
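
    For context, the coherence that both cited products depend on is estimated from two co-registered complex SAR images over a sliding window. A minimal sketch follows; the window size and synthetic inputs are assumptions, and missing pulses would show up as locally depressed coherence.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def coherence(s1, s2, win=5):
            """|E[s1 s2*]| / sqrt(E|s1|^2 E|s2|^2) over a win x win window.
            uniform_filter does not accept complex input, so the cross term
            is filtered as separate real and imaginary parts."""
            cross = s1 * np.conj(s2)
            num = (uniform_filter(cross.real, win)
                   + 1j * uniform_filter(cross.imag, win))
            den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                          * uniform_filter(np.abs(s2) ** 2, win))
            return np.abs(num) / (den + 1e-12)

        rng = np.random.default_rng(2)
        s1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
        s2 = s1 + 0.3 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
        gamma = coherence(s1, s2)   # values near 1 mark stable scatterers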

  18. Fundamental Limitations for Imaging GEO Satellites

    DTIC Science & Technology

    2015-10-18

    details of a geostationary satellite can be phase stabilized. We conclude that it is possible to phase such an interferometer with shorter baselines using...

  19. Using Optical Interferometry for GEO Satellites Imaging: An Update

    DTIC Science & Technology

    2016-05-27

    of a geostationary satellite using the Navy Precision Optical Interferometer (NPOI) during the glint season of March 2015. We succeeded in detecting...night. These baseline lengths correspond to a resolution of ∼4 m at geostationary altitude. This is the first multiple-baseline interferometric...detection of a satellite. Keywords: geostationary satellites, optical interferometry, imaging, telescope arrays

  20. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter and also measures modulation transfer functions (MTF) and noise power spectra (NPS); it is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of 12:1 and 17.6:1. When the camera is used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to determine chromaticity simultaneously at different locations on the LCD display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response were very close to those found with the CS-200; only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor; the captured image is color-matrix preprocessed, Fourier transformed and then post-processed. For the NPS, a uniform image is displayed on the monitor and the image is likewise pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer. The modulations at the Nyquist frequency, however, appear lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure; temporal noise appears to be significantly lower than spatial noise.
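
    The MTF procedure described, displaying a single line, capturing it and Fourier transforming, comes down to a transform of the measured line-spread function. The sketch below shows only that stage on synthetic data (a Gaussian stand-in for the captured line); the paper's color-matrix pre-processing and post-processing are omitted.

        import numpy as np

        # Synthetic line-spread function: a one-pixel line blurred by the
        # display/camera chain (Gaussian blur as a stand-in).
        x = np.arange(-32, 32)
        lsf = np.exp(-0.5 * (x / 2.0) ** 2)

        # MTF = magnitude of the Fourier transform of the LSF, unity at DC.
        mtf = np.abs(np.fft.rfft(lsf / lsf.sum()))
        freqs = np.fft.rfftfreq(lsf.size)   # cycles/pixel; Nyquist = 0.5

        nyquist_modulation = mtf[-1]        # modulation at the Nyquist frequency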
