Sample records for adaptive optics camera

  1. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  2. Adaptive Optics For Imaging Bright Objects Next To Dim Ones

    NASA Technical Reports Server (NTRS)

    Shao, Michael; Yu, Jeffrey W.; Malbet, Fabien

    1996-01-01

    Adaptive optics would be used in imaging optical systems, according to this proposal, to enhance high-dynamic-range images (images of bright objects next to dim objects). The optics are designed to alter wavefronts to correct for the effects of scattering of light from small bumps on the imaging optics. The original intended application of the concept was in an advanced camera installed on the Hubble Space Telescope for imaging such phenomena as large planets near stars other than the Sun. The concept is also applicable to other high-quality telescopes and cameras.

  3. Feasibility evaluation and study of adapting the attitude reference system to the Orbiter camera payload system's large format camera

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A design concept that will implement a mapping capability for the Orbiter Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera's optical axis, such pointing information is made available.

  4. Near infra-red astronomy with adaptive optics and laser guide stars at the Keck Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, C.E.; Gavel, D.T.; Olivier, S.S.

    1995-08-03

    A laser guide star adaptive optics system is being built for the W. M. Keck Observatory's 10-meter Keck II telescope. Two new near infra-red instruments will be used with this system: a high-resolution camera (NIRC 2) and an echelle spectrometer (NIRSPEC). The authors describe the expected capabilities of these instruments for high-resolution astronomy, using adaptive optics with either a natural star or a sodium-layer laser guide star as a reference. They compare the expected performance of these planned Keck adaptive optics instruments with that predicted for the NICMOS near infra-red camera, which is scheduled to be installed on the Hubble Space Telescope in 1997.

  5. Effect of camera angulation on adaptation of CAD/CAM restorations.

    PubMed

    Parsell, D E; Anderson, B C; Livingston, H M; Rudd, J I; Tankersley, J D

    2000-01-01

    A significant concern with computer-assisted design/computer-assisted manufacturing (CAD/CAM)-produced prostheses is the accuracy of adaptation of the restoration to the preparation. The objective of this study was to determine the effect of operator-controlled camera misalignment on restoration adaptation. A CEREC 2 CAD/CAM unit (Sirona Dental Systems, Bensheim, Germany) was used to capture the optical impressions and machine the restorations. A Class I preparation was used as the standard preparation for optical impressions. Camera angles along the mesiodistal and buccolingual alignments were varied from the ideal orientation. Occlusal marginal gaps and sample height, width, and length were measured and compared to the preparation dimensions. For clinical correlation, clinicians were asked to take optical impressions of mesio-occlusal preparations (Class II) on all four second molar sites, using a patient simulator. On the adjacent first molar occlusal surfaces, a preparation was machined such that camera angulation could be calculated from information taken from the optical impression. Degree of tilt and plane of tilt were compared to the optimum camera positions for those preparations. One-way analysis of variance and Dunnett C post hoc testing (alpha = 0.01) revealed little significant degradation in fit with camera angulation. Only the apical length fit was significantly degraded by excessive angulation. The CEREC 2 CAD/CAM system was found to be relatively insensitive to operator-induced errors attributable to camera misalignments of less than 5 degrees in either the buccolingual or the mesiodistal plane. The average camera tilt error generated by clinicians across all sites was 1.98 ± 1.17 degrees.

  6. Afocal viewport optics for underwater imaging

    NASA Astrophysics Data System (ADS)

    Slater, Dan

    2014-09-01

    A conventional camera can be adapted for underwater use by enclosing it in a sealed waterproof pressure housing with a viewport. The viewport, as an optical interface between water and air, must account for both the camera and the water's optical characteristics while also providing a high-pressure water seal. Limited hydrospace visibility drives a need for wide-angle viewports. Practical optical interfaces between seawater and air vary from simple flat-plate windows to complex water-contact lenses. This paper first provides a brief overview of the physical and optical properties of the ocean environment along with suitable optical materials. This is followed by a discussion of the characteristics of various afocal underwater viewport types, including flat windows, domes, and the Ivanoff corrector lens, a derivative of a Galilean wide-angle camera adapter. Several new and interesting optical designs derived from the Ivanoff corrector lens are presented, including a pair of very compact afocal viewport lenses that are compatible with both in-water and in-air environments and an afocal underwater hyper-hemispherical fisheye lens.

  7. Meaning of visualizing retinal cone mosaic on adaptive optics images.

    PubMed

    Jacob, Julie; Paques, Michel; Krivosic, Valérie; Dupas, Bénédicte; Couturier, Aude; Kulcsar, Caroline; Tadayoni, Ramin; Massin, Pascale; Gaudric, Alain

    2015-01-01

    To explore the anatomic correlation of the retinal cone mosaic on adaptive optics images. Retrospective nonconsecutive observational case series. A retrospective review of the multimodal imaging charts of 6 patients with focal alteration of the cone mosaic on adaptive optics was performed. Retinal diseases included acute posterior multifocal placoid pigment epitheliopathy (n = 1), hydroxychloroquine retinopathy (n = 1), and macular telangiectasia type 2 (n = 4). High-resolution retinal images were obtained using a flood-illumination adaptive optics camera. Images were recorded using standard imaging modalities: color and red-free fundus camera photography, infrared reflectance scanning laser ophthalmoscopy, fluorescein angiography, indocyanine green angiography, and spectral-domain optical coherence tomography (OCT) images. On OCT, in the marginal zone of the lesions, a disappearance of the interdigitation zone was observed, while the ellipsoid zone was preserved. Image recording demonstrated that such attenuation of the interdigitation zone co-localized with the disappearance of the cone mosaic on adaptive optics images. In 1 case, the restoration of the interdigitation zone paralleled that of the cone mosaic after a 2-month follow-up. Our results suggest that the interdigitation zone could contribute substantially to the reflectance of the cone photoreceptor mosaic. The absence of cones on adaptive optics images does not necessarily mean photoreceptor cell death. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce subject discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation, without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-fixation ('eye stared') system was used for stimulating accommodation and fixating the imaging area. The illumination sources and imaging camera moved in linkage for focusing on and imaging different layers. Four subjects with differing degrees of myopia were imaged. Based on the optical properties of the human eye, the fixation system reduced the defocus to less than the typical ocular depth of focus; in this way, the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the fixation system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio of a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.
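
    The Strehl ratio quoted in this record can be related to residual wavefront error through the Maréchal approximation. A minimal sketch follows; the 790 nm imaging wavelength used in the example is our assumption, not stated in the record:

```python
import math

def strehl_marechal(rms_nm, wavelength_nm):
    """Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2)."""
    return math.exp(-(2 * math.pi * rms_nm / wavelength_nm) ** 2)

def rms_for_strehl(strehl, wavelength_nm):
    """Invert the approximation: residual RMS implied by a given Strehl."""
    return wavelength_nm * math.sqrt(-math.log(strehl)) / (2 * math.pi)

# A Strehl ratio of 0.78 implies a residual RMS of about lambda/12.6,
# i.e. roughly 63 nm at an assumed 790 nm imaging wavelength.
print(rms_for_strehl(0.78, 790.0))
```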

  9. Plenoptic camera wavefront sensing with extended sources

    NASA Astrophysics Data System (ADS)

    Jiang, Pengzhi; Xu, Jieping; Liang, Yonghui; Mao, Hongjun

    2016-09-01

    The wavefront sensor is used in adaptive optics to detect atmospheric distortion, which is fed back to the deformable mirror to compensate for that distortion. Unlike the Shack-Hartmann sensor, which has been widely used with point sources, the plenoptic camera wavefront sensor has been proposed in recent years as an alternative wavefront sensor suitable for extended objects. In this paper, plenoptic camera wavefront sensing with extended sources is discussed systematically. Simulations are performed to investigate the wavefront measurement error and the closed-loop performance of the plenoptic sensor. The results show that there is an optimal lenslet size and an optimal number of pixels that give the best performance. The RMS of the resulting corrected wavefront in a closed-loop adaptive optics system is less than 108 nm (0.2λ) when D/r0 ≤ 10 and the magnitude M ≤ 5. Our investigation indicates that the plenoptic sensor operates effectively on extended sources in a closed-loop adaptive optics system.
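
    The 108 nm figure above is quoted as 0.2λ, which implies a 540 nm working wavelength. A minimal sketch of expressing a piston-removed residual wavefront RMS in waves (the sample values are hypothetical):

```python
import math

def residual_rms_nm(wavefront_nm):
    """RMS of a residual wavefront (piston, i.e. the mean, removed), in nm."""
    mean = sum(wavefront_nm) / len(wavefront_nm)
    return math.sqrt(sum((w - mean) ** 2 for w in wavefront_nm) / len(wavefront_nm))

# Hypothetical residual samples; dividing the RMS by 540 nm expresses
# the error in waves, matching the record's "108 nm (0.2 lambda)" usage.
samples = [60.0, -120.0, 150.0, -90.0]
print(residual_rms_nm(samples) / 540.0)
```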

  10. Clinical Validation of a Smartphone-Based Adapter for Optic Disc Imaging in Kenya.

    PubMed

    Bastawrous, Andrew; Giardini, Mario Ettore; Bolster, Nigel M; Peto, Tunde; Shah, Nisha; Livingstone, Iain A T; Weiss, Helen A; Hu, Sen; Rono, Hillary; Kuper, Hannah; Burton, Matthew

    2016-02-01

    Visualization and interpretation of the optic nerve and retina are essential parts of most physical examinations. To design and validate a smartphone-based retinal adapter enabling image capture and remote grading of the retina. This validation study compared the grading of optic nerves from smartphone images with the grading of images from a digital retinal camera. Both image sets were independently graded at Moorfields Eye Hospital Reading Centre. Nested within the 6-year follow-up (January 7, 2013, to March 12, 2014) of the Nakuru Eye Disease Cohort in Kenya, 1460 adults (2920 eyes) 55 years and older were recruited consecutively from the study. A subset of 100 optic disc images from both methods was further used to validate a grading app for the optic nerves. Data analysis was performed April 7 to April 12, 2015. The vertical cup-disc ratio for each test was compared in terms of agreement (Bland-Altman and weighted κ) and test-retest variability. A total of 2152 optic nerve images were available from both methods (in addition, 371 were available from the reference camera but not the smartphone, 170 from the smartphone but not the reference camera, and 227 from neither). Bland-Altman analysis revealed a mean difference of 0.02 (95% CI, -0.21 to 0.17) and a weighted κ coefficient of 0.69 (excellent agreement). The grades of an experienced retinal photographer were compared with those of a lay photographer (no health care experience before the study), and no observable difference in image acquisition quality was found. Nonclinical photographers using the low-cost smartphone adapter were able to acquire optic nerve images at a standard that enabled independent remote grading comparable to that of images acquired using a desktop retinal camera operated by an ophthalmic assistant. The potential for task shifting and the detection of avoidable causes of blindness in the most at-risk communities makes this an attractive public health intervention.
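
    The agreement statistic in this record follows the standard Bland-Altman construction: the mean of the pairwise differences, with 95% limits of agreement at ±1.96 sample standard deviations. A minimal sketch with hypothetical cup-disc ratios (not the study's data):

```python
import math

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two graders."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

# Hypothetical vertical cup-disc ratios from the two imaging methods.
smartphone = [0.42, 0.55, 0.31, 0.60, 0.48]
desktop = [0.40, 0.57, 0.30, 0.63, 0.47]
mean_diff, limits = bland_altman(smartphone, desktop)
```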

  11. Adaptive Optics for the Human Eye

    NASA Astrophysics Data System (ADS)

    Williams, D. R.

    2000-05-01

    Adaptive optics can extend not only the resolution of ground-based telescopes, but also that of the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher-order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras used to image the living retina, cameras that are critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.

  12. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement, but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extreme high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles, and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  13. Light field analysis and its applications in adaptive optics and surveillance systems

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed Ali

    An image can only be as good as the optics of the camera or other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane; this can be done through both linear and non-linear transfer functions. Depending on the application at hand, some models of imaging systems are easier to use than others. The most well-known models for optical systems are the 1) pinhole model, 2) thin lens model, and 3) thick lens model. Using light-field analysis, the connections among these different models are described. A novel figure of merit is presented for choosing one optical model over another for certain applications. After analyzing these optical systems, their applications in plenoptic cameras for adaptive optics are introduced. A new technique to use a plenoptic camera to extract information about a localized distorted planar wavefront is described. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system to track a target through space is presented. 22× optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image-processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets as they move through space with high accuracy.

  14. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    PubMed

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
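
    The keyframe-trajectory error reported above is a translation root-mean-square error (RMSE) over matched poses. A minimal sketch, assuming the estimated and ground-truth trajectories have already been aligned (the alignment step, e.g. an SE(3)/Sim(3) fit, is omitted; the sample poses are hypothetical):

```python
import math

def translation_rmse(estimated, ground_truth):
    """RMS translation error over aligned keyframe positions (x, y, z)."""
    assert len(estimated) == len(ground_truth)
    sq = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        sq += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq / len(estimated))

est = [(0.0, 0.0, 0.0), (1.02, 0.0, 0.0)]
gt = [(0.0, 0.0, 0.0), (1.00, 0.0, 0.0)]
print(translation_rmse(est, gt))  # about 0.014 m for this toy pair
```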

  15. KAPAO Prime: Design and Simulation

    NASA Astrophysics Data System (ADS)

    McGonigle, Lorcan; Choi, P. I.; Severson, S. A.; Spjut, E.

    2013-01-01

    KAPAO (KAPAO A Pomona Adaptive Optics instrument) is a dual-band natural guide star adaptive optics system designed to measure and remove atmospheric aberration over UV-NIR wavelengths from Pomona College's telescope atop Table Mountain. We present here the final optical system, KAPAO Prime, designed in the Zemax optical design software, which uses custom off-axis paraboloid mirrors (OAPs) to relay light appropriately to a Shack-Hartmann wavefront sensor, deformable mirror, and science cameras. KAPAO Prime is characterized by diffraction-limited imaging over the full 81" field of view of our optical camera at f/33 as well as over the smaller field of view of our NIR camera at f/50. In Zemax, tolerances of 1% on OAP focal length and off-axis distance were shown to contribute an additional 4 nm of wavefront error (98% confidence) over the field of view of our optical camera; the contribution from surface irregularity was determined analytically to be 40 nm for OAPs specified to λ/10 surface irregularity (632.8 nm). Modeling of the temperature deformation of the breadboard in SolidWorks revealed 70 micron contractions along the edges of the board for a decrease of 75°F; when applied to OAP positions, such displacements from the optimal layout are predicted to contribute an additional 20 nanometers of wavefront error. Flexure modeling of the breadboard due to gravity is ongoing. We hope to begin alignment and testing of KAPAO Prime in Q1 2013.
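
    The error terms quoted in this record (4 nm tolerancing, 40 nm surface irregularity, 20 nm thermal displacement) combine in quadrature if they are independent; a minimal sketch (the independence assumption is ours, not the record's):

```python
import math

def rss_wavefront_error(*terms_nm):
    """Independent wavefront error terms add in quadrature (root-sum-square)."""
    return math.sqrt(sum(t ** 2 for t in terms_nm))

# Combining the budget terms quoted in the record above:
print(rss_wavefront_error(4.0, 40.0, 20.0))  # ~44.9 nm total
```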

  16. Scientific Design of a High Contrast Integral Field Spectrograph for the Subaru Telescope

    NASA Technical Reports Server (NTRS)

    McElwain, Michael W.

    2012-01-01

    Ground based telescopes equipped with adaptive optics systems and specialized science cameras are now capable of directly detecting extrasolar planets. We present the scientific design for a high contrast integral field spectrograph for the Subaru Telescope. This lenslet based integral field spectrograph will be implemented into the new extreme adaptive optics system at Subaru, called SCExAO.

  17. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  18. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  19. Terahertz adaptive optics with a deformable mirror.

    PubMed

    Brossard, Mathilde; Sauvage, Jean-François; Perrin, Mathias; Abraham, Emmanuel

    2018-04-01

    We report on the wavefront correction of a terahertz (THz) beam using adaptive optics, which requires both a wavefront sensor able to sense the optical aberrations and a wavefront corrector. The wavefront sensor relies on a direct 2D electro-optic imaging system composed of a ZnTe crystal and a CMOS camera. By measuring the phase variation of the THz electric field in the crystal, we were able to minimize the geometrical aberrations of the beam thanks to the action of a deformable mirror. This phase control will open the route to THz adaptive optics in order to optimize THz beam quality for both practical and fundamental applications.

  20. Adapting smartphones for low-cost optical medical imaging

    NASA Astrophysics Data System (ADS)

    Pratavieira, Sebastião.; Vollet-Filho, José D.; Carbinatto, Fernanda M.; Blanco, Kate; Inada, Natalia M.; Bagnato, Vanderlei S.; Kurachi, Cristina

    2015-06-01

    Optical images have been used in several medical situations to improve the diagnosis of lesions or to monitor treatments. However, most systems employ expensive scientific (CCD or CMOS) cameras and need computers to display and save the images, usually resulting in a high final cost for the system. Additionally, operating this sort of apparatus usually becomes more complex, requiring increasingly specialized technical knowledge from the operator. Currently, the number of people using smartphone-like devices with built-in high-quality cameras is increasing, which might allow using such devices as efficient, lower-cost, portable imaging systems for medical applications. Thus, we aim to develop methods for adapting those devices to optical medical imaging techniques, such as fluorescence imaging. In particular, smartphone covers were adapted to connect a smartphone-like device to widefield fluorescence imaging systems. These systems were used to detect lesions in different tissues, such as cervix and mouth/throat mucosa, and to monitor ALA-induced protoporphyrin-IX formation for photodynamic treatment of cervical intraepithelial neoplasia. This approach may contribute significantly to low-cost, portable, and simple clinical optical image collection.

  1. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

    Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven effective for astronomical purposes, a new approach must be developed to correct for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn² < 10⁻¹³ m⁻²/³). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free-space optical communication systems, in directed-energy applications, as well as for image correction purposes.

  2. KAPAO Prime: Design and Simulation

    NASA Astrophysics Data System (ADS)

    McGonigle, Lorcan

    2012-11-01

    KAPAO (KAPAO A Pomona Adaptive Optics instrument) is a dual-band natural guide star adaptive optics system designed to measure and remove atmospheric aberration from Pomona College's telescope atop Table Mountain. We present here the final optical system, referred to as Prime, designed in the Zemax optical design software. Prime is characterized by diffraction-limited imaging over the full 73'' field of view of our Andor camera at f/33 as well as for our NIR Xenics camera at f/50. In Zemax, tolerances of 1% on OAP focal length and off-axis distance were shown to contribute an additional 4 nm of wavefront error (98% confidence) over the field of view of the Andor camera; the contribution from surface irregularity was determined analytically to be 40 nm for OAPs specified to λ/10 surface irregularity. Modeling of the temperature deformation of the breadboard in SolidWorks revealed 70 micron contractions along the edges of the board for a decrease of 75°F; when applied to OAP positions, such displacements from the optimal layout are predicted to contribute an additional 20 nanometers of wavefront error. Flexure modeling of the breadboard due to gravity is ongoing. We hope to begin alignment and testing of "Prime" in Q1 2013.

  3. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    PubMed Central

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  4. Retinal arteriolar remodeling evaluated with adaptive optics camera: Relationship with blood pressure levels.

    PubMed

    Gallo, A; Mattina, A; Rosenbaum, D; Koch, E; Paques, M; Girerd, X

    2016-06-01

    To investigate whether a retinal arteriolar wall-to-lumen ratio or lumen diameter cut-off could discriminate hypertensive from normotensive subjects using an adaptive optics camera. One thousand five hundred subjects were consecutively recruited, and the rtx1™ adaptive optics camera (Imagine Eyes, Orsay, France) was used to measure wall thickness and internal diameter and to calculate the wall-to-lumen ratio (WLR) and wall cross-sectional area of retinal arterioles. Sitting office blood pressure was measured once, just before the retinal measurements, and office hypertension was defined as systolic blood pressure >= 140 mmHg and/or diastolic blood pressure >= 90 mmHg. ROC curves were constructed to determine cut-off values for retinal parameters to diagnose office hypertension. In another population of 276 subjects, office BP, retinal arteriole evaluation and home blood pressure monitoring were obtained. The applicability of the retinal WLR and diameter cut-off values was compared in patients with controlled, masked, white-coat and sustained hypertension. In the 1500 patients, a WLR > 0.31 discriminated office hypertensive subjects with 0.57 sensitivity and 0.71 specificity. A lumen diameter < 78.2 μm discriminated office hypertension with 0.73 sensitivity and 0.52 specificity. In the other 276 patients, the WLR was higher in sustained hypertension than in normotensive patients (0.330±0.06 vs 0.292±0.05; P<0.001) and the diameter was narrower in masked hypertensive than in normotensive subjects (73.0±11.2 vs 78.5±11.6 μm; P<0.005). A WLR higher than 0.31 is in favour of office arterial hypertension; a diameter under 78 μm may indicate masked hypertension. Retinal arteriole analysis with an adaptive optics camera may aid the diagnosis of arterial hypertension, in particular in case of masked hypertension. Copyright © 2016. Published by Elsevier SAS.
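
    Applying the reported ROC cut-offs as a screening rule can be sketched as follows. This only illustrates the thresholds quoted in the abstract; it is not a clinical decision tool, and combining the two cut-offs with a logical OR is an assumption of the sketch:

```python
def office_hypertension_suspected(wlr, lumen_diameter_um,
                                  wlr_cutoff=0.31, diameter_cutoff_um=78.2):
    """Screening rule built from the abstract's cut-offs: a wall-to-lumen
    ratio above 0.31 or a lumen diameter below 78.2 um flags possible
    office hypertension. Illustrative only."""
    return wlr > wlr_cutoff or lumen_diameter_um < diameter_cutoff_um
```

    Given the modest sensitivities and specificities reported (0.57/0.71 and 0.73/0.52), such a rule could at best support, not replace, blood pressure measurement.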

  5. The numerical simulation tool for the MAORY multiconjugate adaptive optics system

    NASA Astrophysics Data System (ADS)

    Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.

    2016-07-01

    The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid Natural and Laser Guide Star system that will correct the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations Near Infrared spectro-imager (MICADO). We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition, and operation strategies. MAORY will implement Multiconjugate Adaptive Optics combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency. Here we recall the code architecture, describe the modeled instrument components, and outline the control strategies implemented in the code.

  6. Strategy for the development of a smart NDVI camera system for outdoor plant detection and agricultural embedded systems.

    PubMed

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-24

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.
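
    The per-pixel NDVI computation described above is straightforward; a minimal sketch in plain Python, where on the one-chip design the NIR and red values would come from modified pixels sharing a single sensor (the small epsilon guarding division by zero is an implementation detail of this sketch):

```python
def ndvi(nir, red, eps=1e-9):
    """Per-pixel normalized difference vegetation index:
    NDVI = (NIR - red) / (NIR + red), bounded to [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def ndvi_image(nir_pixels, red_pixels):
    """NDVI for two equal-length pixel sequences. On a one-chip camera
    both channels come from the same sensor, removing the two-chip
    optical alignment problem described in the abstract."""
    return [ndvi(n, r) for n, r in zip(nir_pixels, red_pixels)]
```

    Vegetation reflects strongly in NIR and weakly in red, so plant pixels yield NDVI well above zero while soil pixels stay near or below it, which is what enables the plant/soil discrimination.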

  7. Strategy for the Development of a Smart NDVI Camera System for Outdoor Plant Detection and Agricultural Embedded Systems

    PubMed Central

    Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe

    2013-01-01

    The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed. PMID:23348037

  8. Shuttle sortie electro-optical instruments study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A study to determine the feasibility of adapting existing electro-optical instruments (designed and successfully used for ground operations) for use on a shuttle sortie flight, and to perform satisfactorily in the space environment, is considered. The suitability of these two instruments (a custom-made image intensifier camera system and an off-the-shelf secondary electron conduction television camera) to support a barium ion cloud experiment was studied for two different modes of Spacelab operation: within the pressurized module and on the pallet.

  9. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  10. Performance prediction of optical image stabilizer using SVM for shaker-free production line

    NASA Astrophysics Data System (ADS)

    Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo

    2016-04-01

    Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved an 88% recall rate.

  11. The optical design of a visible adaptive optics system for the Magellan Telescope

    NASA Astrophysics Data System (ADS)

    Kopon, Derek

    The Magellan Adaptive Optics system will achieve first light in November of 2012. This AO system contains several subsystems, including the 585-actuator concave adaptive secondary mirror, the Calibration Return Optic (CRO) alignment and calibration system, the CLIO 1-5 μm IR science camera, the movable guider camera and active optics assembly, and the W-Unit, which contains both the Pyramid Wavefront Sensor (PWFS) and the VisAO visible science camera. In this dissertation, we present details of the design, fabrication, assembly, alignment, and laboratory performance of the VisAO camera and its optical components. Many of these components required a custom design, such as the Spectral Differential Imaging Wollaston prisms and filters and the coronagraphic spots. One component, the Atmospheric Dispersion Corrector (ADC), required a unique triplet design that had until now never been fabricated and tested on sky. We present the design, laboratory, and on-sky results for our triplet ADC. We also present details of the CRO test setup and alignment. Because Magellan is a Gregorian telescope, the ASM is a concave ellipsoidal mirror. By simulating a star with a white-light point source at the far conjugate, we can create a double-pass test of the whole system without the need for a real on-sky star. This allows us to test the AO system closed loop in the Arcetri test tower at its nominal design focal length and optical conjugates. The CRO test will also allow us to calibrate and verify the system off-sky at the Magellan telescope during commissioning and periodically thereafter. We present a design for a possible future upgrade path: a new visible Integral Field Spectrograph. By integrating a fiber array bundle at the VisAO focal plane, we can send light to a pre-existing facility spectrograph, such as LDSS3, which will allow 20 mas spatial sampling and R~1,800 spectra over the band 0.6-1.05 μm. This would be the highest spatial resolution IFU to date, either from the ground or in space.

  12. AO corrected satellite imaging from Mount Stromlo

    NASA Astrophysics Data System (ADS)

    Bennet, F.; Rigaut, F.; Price, I.; Herrald, N.; Ritchie, I.; Smith, C.

    2016-07-01

    The Research School of Astronomy and Astrophysics has been developing adaptive optics systems for space situational awareness. As part of this program we have developed satellite imaging using compact adaptive optics systems for small (1-2 m) telescopes such as those operated by Electro Optic Systems (EOS) from the Mount Stromlo Observatory. We have focused on making compact, simple, high-performance AO systems using modern high-stroke, high-speed deformable mirrors and EMCCD cameras. We are able to track satellites down to magnitude 10 with a Strehl ratio in excess of 20% in median seeing.
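
    A Strehl ratio such as the one quoted above is commonly estimated from residual wavefront error via the Maréchal approximation, S ≈ exp(-(2πσ/λ)²); a sketch, where the wavelength and error values used below are illustrative, not figures from this system:

```python
import math

def strehl_marechal(rms_wfe_nm, wavelength_nm):
    """Marechal approximation: Strehl ~ exp(-(2*pi*sigma/lambda)^2),
    where sigma is the residual RMS wavefront error."""
    phase_rms = 2.0 * math.pi * rms_wfe_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)
```

    For example, at an assumed 850 nm imaging wavelength, a residual error around 170 nm RMS corresponds to roughly the 20% Strehl regime.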

  13. Adaptive optics at the Subaru telescope: current capabilities and development

    NASA Astrophysics Data System (ADS)

    Guyon, Olivier; Hayano, Yutaka; Tamura, Motohide; Kudo, Tomoyuki; Oya, Shin; Minowa, Yosuke; Lai, Olivier; Jovanovic, Nemanja; Takato, Naruhisa; Kasdin, Jeremy; Groff, Tyler; Hayashi, Masahiko; Arimoto, Nobuo; Takami, Hideki; Bradley, Colin; Sugai, Hajime; Perrin, Guy; Tuthill, Peter; Mazin, Ben

    2014-08-01

    Current AO observations rely heavily on the AO188 instrument, a 188-elements system that can operate in natural or laser guide star (LGS) mode, and delivers diffraction-limited images in near-IR. In its LGS mode, laser light is transported from the solid state laser to the launch telescope by a single mode fiber. AO188 can feed several instruments: the infrared camera and spectrograph (IRCS), a high contrast imaging instrument (HiCIAO) or an optical integral field spectrograph (Kyoto-3DII). Adaptive optics development in support of exoplanet observations has been and continues to be very active. The Subaru Coronagraphic Extreme-AO (SCExAO) system, which combines extreme-AO correction with advanced coronagraphy, is in the commissioning phase, and will greatly increase Subaru Telescope's ability to image and study exoplanets. SCExAO currently feeds light to HiCIAO, and will soon be combined with the CHARIS integral field spectrograph and the fast frame MKIDs exoplanet camera, which have both been specifically designed for high contrast imaging. SCExAO also feeds two visible-light single pupil interferometers: VAMPIRES and FIRST. In parallel to these direct imaging activities, a near-IR high precision spectrograph (IRD) is under development for observing exoplanets with the radial velocity technique. Wide-field adaptive optics techniques are also being pursued. The RAVEN multi-object adaptive optics instrument was installed on Subaru telescope in early 2014. Subaru Telescope is also planning wide field imaging with ground-layer AO with the ULTIMATE-Subaru project.

  14. World's fastest and most sensitive astronomical camera

    NASA Astrophysics Data System (ADS)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). ESO PR Photo 22a/09 The CCD220 detector ESO PR Photo 22b/09 The OCam camera ESO PR Video 22a/09 OCam images "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. 
The new generation instruments require these corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma, by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of imperfect operation of any physical electronic devices, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence; Observatoire Astronomique de Marseille Provence), the Laboratoire d'Astrophysique de Grenoble (LAOG/INSU/CNRS, Université Joseph Fourier; Observatoire des Sciences de l'Univers de Grenoble), and the Observatoire de Haute Provence (OHP/INSU/CNRS; Observatoire Astronomique de Marseille Provence). OCam and the CCD220 are the result of five years work, financed by the European commission, ESO and CNRS-INSU, within the OPTICON project of the 6th Research and Development Framework Programme of the European Union. The development of the CCD220, supervised by ESO, was undertaken by the British company e2v technologies, one of the world leaders in the manufacture of scientific detectors. The corresponding OPTICON activity was led by the Laboratoire d'Astrophysique de Grenoble, France. The OCam camera was built by a team of French engineers from the Laboratoire d'Astrophysique de Marseille, the Laboratoire d'Astrophysique de Grenoble and the Observatoire de Haute Provence. In order to secure the continuation of this successful project a new OPTICON project started in June 2009 as part of the 7th Research and Development Framework Programme of the European Union with the same partners, with the aim of developing a detector and camera with even more powerful functionality for use with an artificial laser star. This development is necessary to ensure the image quality of the future 42-metre European Extremely Large Telescope. ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world's most productive astronomical observatory. It is supported by 14 countries: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. 
ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world's most advanced visible-light astronomical observatory. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 42-metre European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become "the world's biggest eye on the sky".

  15. CCD imaging system for the EUV solar telescope

    NASA Astrophysics Data System (ADS)

    Gong, Yan; Song, Qian; Ye, Bing-Xun

    2006-01-01

    In order to develop a detector adapted to the space solar telescope, we have built a CCD camera system capable of working in the extreme ultraviolet (EUV) band, composed of a phosphor screen, an intensifier system using a photocathode/micro-channel plate (MCP)/phosphor stack, an optical taper, and a front-illuminated (FI) CCD chip without screen windows. All components were bonded together with optical glue. The working principle of the camera system is presented; moreover, we employed a mesh experiment to calibrate and test the CCD camera system over 15-24 nm, and a position resolution of about 19 μm was obtained at wavelengths of 17.1 nm and 19.5 nm.

  16. Smartphone based point-of-care detector of urine albumin

    NASA Astrophysics Data System (ADS)

    Cmiel, Vratislav; Svoboda, Ondrej; Koscova, Pavlina; Provaznik, Ivo

    2016-03-01

    Albumin plays an important role in the human body. A changed level in urine may indicate serious kidney disorders. We present a new point-of-care solution for sensitive detection of urine albumin: a miniature optical adapter for the iPhone with built-in optical filters and a sample slot. The adapter exploits the smartphone flash to generate excitation light and the camera to measure the level of emitted light. Albumin Blue 580 is used as the albumin reagent. The proposed lightweight adapter can be produced at low cost using a 3D printer; thus, the miniaturized detector is easy to use outside the lab.

  17. Adaptive optics compensation over a 3 km near horizontal path

    NASA Astrophysics Data System (ADS)

    Mackey, Ruth; Dainty, Chris

    2008-10-01

    We present results of adaptive optics compensation at the receiver of a 3 km optical link using a beacon laser operating at 635 nm. The laser is transmitted from the roof of a seven-storey building over a near-horizontal path towards a 127 mm optical receiver located on the second floor of the Applied Optics Group at the National University of Ireland, Galway. The wavefront of the scintillated beam is measured using a Shack-Hartmann wavefront sensor (SHWFS) with a high-speed CMOS camera capable of frame rates greater than 1 kHz. The strength of turbulence is determined from the fluctuations in the differential angle of arrival in the wavefront sensor measurements and from the degree of scintillation in the pupil plane. Adaptive optics compensation is applied using a tip-tilt mirror and a 37-channel membrane mirror, controlled by a single desktop computer. The performance of the adaptive optics system in real turbulence is compared with its performance in a controlled laboratory environment, where turbulence is generated using a liquid crystal spatial light modulator.

  18. Adaptive Optics Technology for High-Resolution Retinal Imaging

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2013-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

  19. Design and realization of adaptive optical principle system without wavefront sensing

    NASA Astrophysics Data System (ADS)

    Wang, Xiaobin; Niu, Chaojun; Guo, Yaxing; Han, Xiang'e.

    2018-02-01

    In this paper, we focus on improving the performance of free-space optical communication systems and investigate wavefront-sensorless adaptive optics. We use a phase-only liquid crystal spatial light modulator (SLM) as the wavefront corrector. The optical intensity distribution of the distorted wavefront is detected by a CCD. We developed a wavefront controller based on ARM and software based on the Linux operating system; the controller drives both the CCD camera and the wavefront corrector. Two SLMs are used in the experimental system: one simulates atmospheric turbulence and the other compensates the wavefront distortion. The experimental results show that the performance quality metric (the total gray value of 25 pixels) increases from 3037 to 4863 after 200 iterations. The results demonstrate that our wavefront-sensorless adaptive optics system based on the SPGD algorithm performs well in compensating wavefront distortion.
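
    The SPGD (stochastic parallel gradient descent) loop mentioned above can be sketched as follows. The toy metric stands in for the real system's image-quality metric (the total gray value of 25 CCD pixels), and the gain, perturbation size, and iteration count are illustrative assumptions:

```python
import random

random.seed(1)  # reproducible demo

def spgd_step(u, metric, gain=1.0, delta=0.1):
    """One stochastic parallel gradient descent iteration: apply a random
    bipolar perturbation to the control vector u, measure the metric on
    both sides, and move u along the estimated gradient (maximization)."""
    p = [delta if random.random() < 0.5 else -delta for _ in u]
    j_plus = metric([ui + pi for ui, pi in zip(u, p)])
    j_minus = metric([ui - pi for ui, pi in zip(u, p)])
    dj = j_plus - j_minus
    return [ui + gain * dj * pi for ui, pi in zip(u, p)]

# Toy image-quality metric peaking at u = (1, 2); the real system would
# instead read back the total gray value of the central CCD pixels.
def toy_metric(u):
    return -((u[0] - 1.0) ** 2 + (u[1] - 2.0) ** 2)

u = [0.0, 0.0]
for _ in range(1000):
    u = spgd_step(u, toy_metric)
```

    The appeal of SPGD for this application is that it only needs scalar metric readings, never a wavefront measurement, which is exactly the wavefront-sensorless setting.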

  20. Comparing Parafoveal Cone Photoreceptor Mosaic Metrics in Younger and Older Age Groups Using an Adaptive Optics Retinal Camera.

    PubMed

    Jacob, Julie; Paques, Michel; Krivosic, Valérie; Dupas, Bénédicte; Erginay, Ali; Tadayoni, Ramin; Gaudric, Alain

    2017-01-01

    To analyze cone mosaic metrics on adaptive optics (AO) images as a function of retinal eccentricity in two different age groups using a commercial flood-illumination AO device. Fifty-three eyes of 28 healthy subjects divided into two age groups were imaged using an AO flood-illumination camera (rtx1; Imagine Eyes, Orsay, France). A 16° × 4° field was obtained horizontally. Cone-packing metrics were determined in five neighboring 50 µm × 50 µm regions. Both retinal (cones/mm² and µm) and visual (cones/degree² and arcmin) units were computed. Results for cone mosaic metrics at 2°, 2.5°, 3°, 4°, and 5° eccentricity were compatible with previous AO scanning laser ophthalmoscopy and histology data. No significant difference was observed between the two age groups. The rtx1 camera enabled reproducible measurements of cone-packing metrics across the extrafoveal retina. These findings may contribute to the development of normative data and act as a reference for future research. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:45-50.]. Copyright 2017, SLACK Incorporated.
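
    The retinal-to-visual unit conversions mentioned above depend on the eye's retinal magnification factor; a sketch assuming a schematic-eye value of about 0.291 mm per degree (this value is subject-dependent and an assumption of the sketch, not a figure from the study):

```python
MM_PER_DEGREE = 0.291  # schematic-eye retinal magnification; varies per subject

def cones_per_deg2(cones_per_mm2, mm_per_degree=MM_PER_DEGREE):
    """Convert retinal cone density (cones/mm^2) to visual units
    (cones/degree^2): one squared degree covers mm_per_degree**2 mm^2."""
    return cones_per_mm2 * mm_per_degree ** 2

def spacing_arcmin(spacing_um, mm_per_degree=MM_PER_DEGREE):
    """Convert cone spacing in micrometres to arcminutes of visual angle."""
    return (spacing_um / 1000.0) / mm_per_degree * 60.0
```

    Reporting both unit systems, as the study does, allows comparison with histology (retinal units) and with psychophysics and AO-SLO work (visual units).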

  1. MagAO: Status and on-sky performance of the Magellan adaptive optics system

    NASA Astrophysics Data System (ADS)

    Morzinski, Katie M.; Close, Laird M.; Males, Jared R.; Kopon, Derek; Hinz, Phil M.; Esposito, Simone; Riccardi, Armando; Puglisi, Alfio; Pinna, Enrico; Briguglio, Runa; Xompero, Marco; Quirós-Pacheco, Fernando; Bailey, Vanessa; Follette, Katherine B.; Rodigas, T. J.; Wu, Ya-Lin; Arcidiacono, Carmelo; Argomedo, Javier; Busoni, Lorenzo; Hare, Tyson; Uomoto, Alan; Weinberger, Alycia

    2014-07-01

    MagAO is the new adaptive optics system with visible-light and infrared science cameras, located on the 6.5-m Magellan "Clay" telescope at Las Campanas Observatory, Chile. The instrument locks on natural guide stars (NGS) from 0th to 16th R-band magnitude, measures turbulence with a modulating pyramid wavefront sensor binnable from 28×28 to 7×7 subapertures, and uses a 585-actuator adaptive secondary mirror (ASM) to provide flat wavefronts to the two science cameras. MagAO is a mutated clone of the similar AO systems at the Large Binocular Telescope (LBT) at Mt. Graham, Arizona. The high-level AO loop controls up to 378 modes and operates at frame rates up to 1000 Hz. The instrument has two science cameras: VisAO operating from 0.5-1 μm and Clio2 operating from 1-5 μm. MagAO was installed in 2012 and successfully completed two commissioning runs in 2012-2013. In April 2014 we had our first science run that was open to the general Magellan community. Observers from Arizona, Carnegie, Australia, Harvard, MIT, Michigan, and Chile took observations in collaboration with the MagAO instrument team. Here we describe the MagAO instrument, describe our on-sky performance, and report our status as of summer 2014.

  2. Addition of Adaptive Optics towards obtaining a quantitative detection of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Yust, Brian; Obregon, Isidro; Tsin, Andrew; Sardar, Dhiraj

    2009-04-01

    An adaptive optics system was assembled for correcting the aberrated wavefront of light reflected from the retina. The adaptive optics setup includes a superluminescent diode light source, a Hartmann-Shack wavefront sensor, a deformable mirror, and an imaging CCD camera. Aberrations in the reflected wavefront are caused by changes in the index of refraction along the light path as the beam travels through the cornea, lens, and vitreous humour. The Hartmann-Shack sensor allows for detection of aberrations in the wavefront, which may then be corrected with the deformable mirror. It has been shown that certain diseases, such as diabetic retinopathy, change the polarization of light reflected from neovascularizations in the retina. The adaptive optics system was assembled towards the goal of obtaining a quantitative measure of the onset and progression of this ailment, as one does not currently exist. The study was done to show that the addition of adaptive optics results in more accurate detection of neovascularization in the retina by measuring the expected changes in polarization of the corrected wavefront of reflected light.

  3. Amplitude and intensity spatial interferometry; Proceedings of the Meeting, Tucson, AZ, Feb. 14-16, 1990

    NASA Technical Reports Server (NTRS)

    Breckinridge, Jim B. (Editor)

    1990-01-01

    Attention is given to such topics as ground interferometers, space interferometers, speckle-based and interferometry-based astronomical observations, adaptive and atmospheric optics, speckle techniques, and instrumentation. Particular papers are presented concerning recent progress on the IR Michelson array; the IOTA interferometer project; a space interferometer concept for the detection of extrasolar earth-like planets; IR speckle imaging at Palomar; optical diameters of stars measured with the Mt. Wilson Mark III interferometer; the IR array camera for interferometry with the cophased Multiple Mirror Telescope; optimization techniques applied to the bispectrum of one-dimensional IR astronomical speckle data; and adaptive optical imaging for extended objects.

  4. Mid-infrared Shack-Hartmann wavefront sensor fully cryogenic using extended source for endoatmospheric applications.

    PubMed

    Robert, Clélia; Michau, Vincent; Fleury, Bruno; Magli, Serge; Vial, Laurent

    2012-07-02

    Adaptive optics provide real-time compensation for atmospheric turbulence. The correction quality relies on a key element: the wavefront sensor. We have designed an adaptive optics system in the mid-infrared range providing high spatial resolution for ground-to-air applications, integrating a Shack-Hartmann infrared wavefront sensor operating on an extended source. This paper describes and justifies the design of the infrared wavefront sensor, while defining and characterizing the Shack-Hartmann wavefront sensor camera. Performance and illustration of field tests are also reported.

  5. Adaptive optics system for the IRSOL solar observatory

    NASA Astrophysics Data System (ADS)

    Ramelli, Renzo; Bucher, Roberto; Rossini, Leopoldo; Bianda, Michele; Balemi, Silvano

    2010-07-01

    We present a low-cost adaptive optics system developed for the solar observatory at Istituto Ricerche Solari Locarno (IRSOL), Switzerland. The Shack-Hartmann wavefront sensor is based on a Dalsa CCD camera with 256 pixels × 256 pixels working at 1 kHz. Wavefront compensation is obtained with a 37-actuator deformable mirror and a tip-tilt mirror. Real-time control software has been developed on an RTAI-Linux PC, and Scicos/Scilab-based software has been written for online analysis of the system behavior. The software is completely open source.

  6. Adaptive lenses using transparent dielectric elastomer actuators

    NASA Astrophysics Data System (ADS)

    Shian, Samuel; Diebold, Roger; Clarke, David

    2013-03-01

    Variable focal lenses, used in a vast number of applications such as endoscopes, digital cameras, binoculars, information storage, communication, and machine vision, are traditionally constructed as a lens system consisting of solid lenses and actuating mechanisms. However, such a lens system is complex, bulky, inefficient, and costly. Each of these shortcomings can be addressed using an adaptive lens that performs as a complete lens system. In this presentation, we show how we push the boundary of adaptive lens technology through the use of a transparent electroactive polymer actuator that is integral to the optics. Details of our concept and lens construction are described, as well as electromechanical and optical performance. Preliminary data indicate that our adaptive lens prototype is capable of varying its focus by more than 100%, which is higher than that of the human eye. Furthermore, we show how our approach can be used to achieve control over lens characteristics such as aberration and optical axis, which are difficult or impossible to achieve in other adaptive lens configurations.

  7. Concurrent image-based visual servoing with adaptive zooming for non-cooperative rendezvous maneuvers

    NASA Astrophysics Data System (ADS)

    Pomares, Jorge; Felicetti, Leonard; Pérez, Javier; Emami, M. Reza

    2018-02-01

    An image-based servo controller for the guidance of a spacecraft during non-cooperative rendezvous is presented in this paper. The controller directly utilizes the visual features from image frames of a target spacecraft for computing both attitude and orbital maneuvers concurrently. The utilization of adaptive optics, such as zooming cameras, is also addressed through developing an invariant-image servo controller. The controller allows for performing rendezvous maneuvers independently from the adjustments of the camera focal length, improving the performance and versatility of maneuvers. The stability of the proposed control scheme is proven analytically in the invariant space, and its viability is explored through numerical simulations.

  8. A DirtI Application for LBT Commissioning Campaigns

    NASA Astrophysics Data System (ADS)

    Borelli, J. L.

    2009-09-01

    In order to characterize the Gregorian focal stations and test the performance achieved by the Large Binocular Telescope (LBT) adaptive optics system, two infrared test cameras were constructed within a joint project between INAF (Osservatorio Astronomico di Bologna, Italy) and the Max Planck Institute for Astronomy (Germany). This paper describes the functionality of, and the successful results obtained with, the Daemon for the Infrared Test Camera Interface (DirtI) during commissioning campaigns.

  9. Simple and cost-effective hardware and software for functional brain mapping using intrinsic optical signal imaging.

    PubMed

    Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H

    2009-09-15

    We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
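
    The activity-dependent reflectance change that IOS imaging measures is conventionally expressed as a fractional change dR/R between averaged baseline and response frames. A minimal sketch with synthetic frames (the 0.3% dimming, frame counts, and active region are illustrative):

```python
import numpy as np

def ios_map(baseline_frames, response_frames):
    """Intrinsic optical signal map: fractional reflectance change dR/R
    between averaged baseline frames and averaged response frames."""
    r_base = baseline_frames.mean(axis=0)
    r_resp = response_frames.mean(axis=0)
    return (r_resp - r_base) / r_base

# Synthetic frames: an active region dims by 0.3% during stimulation
rng = np.random.default_rng(0)
base = 120.0 + rng.normal(0.0, 0.2, size=(40, 64, 64))  # 40 baseline frames
resp = base.copy()
resp[:, 20:40, 20:40] *= 0.997                          # activity-dependent dimming
signal = ios_map(base, resp)
```

    Averaging over trials, as the abstract describes, suppresses the shot and illumination noise that would otherwise swamp a signal this small.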

  10. Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.

    2010-02-01

    Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.

  11. Optimal energy-splitting method for an open-loop liquid crystal adaptive optics system.

    PubMed

    Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Liu, Yonggang; Peng, Zenghui; Yang, Qingyun; Meng, Haoran; Yao, Lishuang; Xuan, Li

    2012-08-13

    A waveband-splitting method is proposed for open-loop liquid crystal adaptive optics systems (LC AOSs). The proposed method extends the working waveband, splits energy flexibly, and improves detection capability. A simulated analysis is performed for the 350 nm to 950 nm waveband. The results show that the optimal energy split is 7:3 between the wavefront sensor (WFS) and the imaging camera, with the waveband split into 350 nm to 700 nm and 700 nm to 950 nm, respectively. A validation experiment is conducted by measuring the signal-to-noise ratio (SNR) of the WFS and the imaging camera. The results indicate that, with the waveband-splitting method, the SNR of the WFS remains approximately equal to that of the imaging camera as the intensity varies. In contrast, the SNR of the WFS differs significantly from that of the imaging camera under the polarized-beam-splitter energy-splitting scheme. Therefore, the waveband-splitting method is more suitable for an open-loop LC AOS. An adaptive correction experiment is also performed on a 1.2-meter telescope. A star with a visual magnitude of 4.45 is observed and corrected, and an angular resolution of 0.31″ is achieved. A double star with a combined visual magnitude of 4.3 is observed as well, and its two components are resolved after correction. The results indicate that the proposed method can significantly improve the detection capability of an open-loop LC AOS.

  12. Using two MEMS deformable mirrors in an adaptive optics test bed for multiconjugate correction

    NASA Astrophysics Data System (ADS)

    Andrews, Jonathan R.; Martinez, Ty; Teare, Scott W.; Restaino, Sergio R.; Wilcox, Christopher C.; Santiago, Freddie; Payne, Don M.

    2010-02-01

    Adaptive optics systems have advanced considerably over the past decade and have become common tools for optical engineers. The most recent advances in adaptive optics technology have led to significant reductions in the cost of most of the key components. Most significantly, the costs of deformable elements and wavefront sensor components have dropped to the point where multiple deformable mirrors and Shack-Hartmann wavefront sensor cameras can be included in a single system. Matched with the appropriate hardware and software, formidable systems can be operated in nearly any sized research laboratory. The significant advancement of MEMS deformable mirrors has made them very popular as the active corrective element in multi-conjugate adaptive optics systems, which, particularly for astronomical applications, allow correction in more than one plane. The NRL compact AO system and atmospheric simulation system have now been expanded to support Multi-Conjugate Adaptive Optics (MCAO), using liquid crystal spatial light modulator (SLM) driven aberration generators in two conjugate planes that are well separated spatially. Thus, by using two SLM-based aberration generators and two separate wavefront sensors, the system can measure and apply wavefront correction with two MEMS deformable mirrors. This paper describes the multi-conjugate adaptive optics system, its testing and calibration, and preliminary results.

  13. Subaperture correlation based digital adaptive optics for full field optical coherence tomography.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2013-05-06

    This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. The method corrects for wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics hardware, spatial light modulators (SLMs), or additional cameras. We show that this method does not require knowledge of any system parameters. In a simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate the proof of principle.
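
    Subaperture correlation methods of this kind rest on estimating the relative shift between subaperture images, which encodes the local wavefront tilt. A minimal FFT-based sketch, assuming integer-pixel cyclic shifts (the blob image and shift values are illustrative):

```python
import numpy as np

def subaperture_shift(ref, img):
    """Integer-pixel shift d such that img == np.roll(ref, d, axis=(0, 1)),
    found from the peak of the FFT-based cross-correlation."""
    xcorr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap peak indices to signed shifts
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, ref.shape))

# Smooth synthetic subaperture image and a known cyclic shift
y, x = np.indices((32, 32))
ref = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 18.0)
img = np.roll(ref, (2, -3), axis=(0, 1))
shift = subaperture_shift(ref, img)  # → (2, -3)
```

    In the digital adaptive optics setting, the subimages come from partitioning the pupil-plane (Fourier-plane) data, and subpixel peak interpolation would be used in practice.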

  14. Deep Near-Infrared Surveys and Young Brown Dwarf Populations in Star-Forming Regions

    NASA Astrophysics Data System (ADS)

    Tamura, M.; Naoi, T.; Oasa, Y.; Nakajima, Y.; Nagashima, C.; Nagayama, T.; Baba, D.; Nagata, T.; Sato, S.; Kato, D.; Kurita, M.; Sugitani, K.; Itoh, Y.; Nakaya, H.; Pickles, A.

    2003-06-01

    We are currently conducting three kinds of IR surveys of star-forming regions (SFRs) in order to search for very low-mass young stellar populations. The first is a deep, simultaneous JHKs-band survey with the SIRIUS camera on the IRSF 1.4 m or UH 2.2 m telescopes. The second is a very deep JHKs survey with the CISCO IR camera on the Subaru 8.2 m telescope. The third is a high-resolution companion search around nearby YSOs with the CIAO adaptive optics coronagraph IR camera on Subaru. In this contribution, we describe our SIRIUS camera and present preliminary results of the ongoing surveys with this new instrument.

  15. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera retrieves the direction and intensity distribution of the light rays it collects, enabling multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Building on our earlier prototype of a modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are presented, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially for wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, guiding adaptive optics systems toward intelligent analysis and corrections.

  16. The opto-mechanical design for GMOX: a next-generation instrument concept for Gemini

    NASA Astrophysics Data System (ADS)

    Smee, Stephen A.; Barkhouser, Robert; Robberto, Massimo; Ninkov, Zoran; Gennaro, Mario; Heckman, Timothy M.

    2016-08-01

    We present the opto-mechanical design of GMOX, the Gemini Multi-Object eXtra-wide-band spectrograph, a potential next-generation (Gen-4 #3) facility-class instrument for Gemini. GMOX is a wide-band, multi-object spectrograph with spectral coverage spanning 350 nm to 2.4 μm with a nominal resolving power of R ≈ 5000. Through the use of Digital Micromirror Device (DMD) technology, GMOX will be able to acquire spectra from hundreds of sources simultaneously, offering unparalleled flexibility in target selection. Utilizing this technology, GMOX can rapidly adapt individual slits to either seeing-limited or diffraction-limited conditions. The optical design splits the bandpass into three arms, blue, red, and near infrared, with the near-infrared arm being split into three channels covering the Y+J band, H band, and K band. A slit viewing camera in each arm provides imaging capability for target acquisition and fast-feedback for adaptive optics control with either ALTAIR (Gemini North) or GeMS (Gemini South). Mounted at the Cassegrain focus, GMOX is a large (1.3 m x 2.8 m x 2.0 m) complex instrument, with six dichroics, three DMDs (one per arm), five science cameras, and three acquisition cameras. Roughly half of these optics, including one DMD, operate at cryogenic temperature. To maximize stiffness and simplify assembly and alignment, the opto-mechanics are divided into three main sub-assemblies, including a near-infrared cryostat, each having sub-benches to facilitate ease of alignment and testing of the optics. In this paper we present the conceptual opto-mechanical design of GMOX, with an emphasis on the mounting strategy for the optics and the thermal design details related to the near-infrared cryostat.

  17. Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer

    NASA Astrophysics Data System (ADS)

    Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin

    2017-12-01

    An image-acquisition device for an inspection robot with adaptive polarization adjustment is proposed. The device comprises the inspection robot body, the image-collecting mechanism, the polarizer, and an automatic polarizer actuating device. The image-acquisition mechanism is mounted at the front of the robot body to collect image data of equipment in the substation. The polarizer is fixed on its automatic actuating device and installed in front of the image-acquisition mechanism, such that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible camera. Simulation results show that the system resolves the image blur caused by glare, reflections, and shadow, so that the robot can observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets is achieved, which ensures the safe operation of the substation equipment.

  18. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen

    2018-02-01

    In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), for real-time wavefront measurement. A Camera Link frame grabber with an embedded FPGA is adopted to increase the sensor's response speed, taking advantage of its high data-transmission bandwidth. Instead of waiting for a full frame to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks so that image-data transmission is synchronized with wavefront reconstruction. In addition, we control the deformable mirror from the same FPGA and verify the sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design achieves a 266 Hz cyclic rate, limited by the camera frame rate, while leaving 40% of the logic slices free for additional flexible designs.

  19. Adaptive optics; Proceedings of the Meeting, Arlington, VA, April 10, 11, 1985

    NASA Astrophysics Data System (ADS)

    Ludman, J. E.

    Papers are presented on the directed energy program for ballistic missile defense, a self-referencing wavefront interferometer for laser sources, the effects of mirror grating distortions on diffraction spots at wavefront sensors, and the optical design of an all-reflecting, high-resolution camera for active-optics on ground-based telescopes. Also considered are transverse coherence length observations, time dependent statistics of upper atmosphere optical turbulence, high altitude acoustic soundings, and the Cramer-Rao lower bound on wavefront sensor error. Other topics include wavefront reconstruction from noisy slope or difference data using the discrete Fourier transform, acoustooptic adaptive signal processing, the recording of phase deformations on a PLZT wafer for holographic and spatial light modulator applications, and an optical phase reconstructor using a multiplier-accumulator approach. Papers are also presented on an integrated optics wavefront measurement sensor, a new optical preprocessor for automatic vision systems, a model for predicting infrared atmospheric emission fluctuations, and optical logic gates and flip-flops based on polarization-bistable semiconductor lasers.

  20. Adaptive optics system application for solar telescope

    NASA Astrophysics Data System (ADS)

    Lukin, V. P.; Grigor'ev, V. M.; Antoshkin, L. V.; Botugina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Kovadlo, P. G.; Krivolutskiy, N. P.; Lavrionova, L. N.; Skomorovski, V. I.

    2008-07-01

    The possibility of applying adaptive correction to ground-based solar astronomy is considered. Several experimental systems for image stabilization are described along with the results of their tests. Drawing on several years of our own work and on world experience in solar adaptive optics (AO), we expect to obtain first light by the end of 2008 for the first Russian low-order solar AO system, ANGARA, on the Big Solar Vacuum Telescope (BSVT). The system comprises a 37-subaperture Shack-Hartmann wavefront sensor based on our modified correlation-tracker algorithm, a DALSTAR video camera, a 37-element deformable bimorph mirror, and a home-made fast tip-tilt mirror with a separate correlation tracker. Because daytime turbulence at the BSVT site is very strong, we plan to obtain partial correction for part of the solar surface image.

  1. Research on the liquid crystal adaptive optics system for human retinal imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Tong, Shoufeng; Song, Yansong; Zhao, Xin

    2013-12-01

    The retina is the only part of the human body where blood vessels can be observed directly. Many diseases whose early symptoms are not obvious can be diagnosed by observing changes in the distal micro blood vessels. In order to obtain high-resolution human retinal images, an adaptive optics system for correcting the aberrations of the human eye was designed using a Shack-Hartmann wavefront sensor and a Liquid Crystal Spatial Light Modulator (LCLSM). For a subject eye with 8 m-1 (8 D) myopia, the wavefront error was reduced to 0.084 λ PV and 0.12 λ RMS after adaptive optics (AO) correction, reaching the diffraction limit. The results show that the LCLSM-based AO system can efficiently correct the aberrations of the human eye, allowing blurred photoreceptor cells to be imaged clearly on a CCD camera.
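
    Peak-to-valley and RMS figures such as those quoted above are computed from the residual wavefront map after correction. A minimal sketch with a synthetic residual (the map and the λ/14 Maréchal threshold reflect common practice, not this paper's data):

```python
import numpy as np

def pv_and_rms(wavefront):
    """Peak-to-valley and RMS of a residual wavefront map given in waves."""
    w = wavefront - wavefront.mean()  # remove piston
    return w.max() - w.min(), np.sqrt((w ** 2).mean())

# Illustrative residual map after correction, in waves
rng = np.random.default_rng(0)
residual = 0.05 * rng.standard_normal((64, 64))
pv, rms = pv_and_rms(residual)

# Marechal criterion: RMS <= lambda/14 (~0.071 waves) is commonly taken
# as diffraction limited
diffraction_limited = rms <= 1.0 / 14.0
```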

  2. Digital optical correlator x-ray telescope alignment monitoring system

    NASA Astrophysics Data System (ADS)

    Lis, Tomasz; Gaskin, Jessica; Jasper, John; Gregory, Don A.

    2018-01-01

    The High-Energy Replicated Optics to Explore the Sun (HEROES) program is a balloon-borne x-ray telescope mission to observe hard x-rays (~20 to 70 keV) from the sun and multiple astrophysical targets. The payload consists of eight mirror modules with a total of 114 optics that are mounted on a 6-m-long optical bench. Each mirror module is complemented by a high-pressure xenon gas scintillation proportional counter. Attached to the payload is a camera that acquires star fields and then matches the acquired field to star maps to determine the pointing of the optical bench. Slight misalignments between the star camera, the optical bench, and the telescope elements attached to the optical bench may occur during flight due to mechanical shifts, thermal gradients, and gravitational effects. These misalignments can result in diminished imaging and reduced photon collection efficiency. To monitor these misalignments during flight, a supplementary Bench Alignment Monitoring System (BAMS) was added to the payload. BAMS hardware comprises two cameras mounted directly to the optical bench and rings of light-emitting diodes (LEDs) mounted onto the telescope components. The LEDs in these rings are mounted in a predefined, asymmetric pattern, and their positions are tracked using an optical/digital correlator. The BAMS analysis software is a digital adaptation of an optical joint transform correlator. The aim is to enhance the observational proficiency of HEROES while providing insight into the magnitude of mechanically and thermally induced misalignments during flight. Results from a preflight test of the system are reported.

  3. Adaptive optics fundus images of cone photoreceptors in the macula of patients with retinitis pigmentosa.

    PubMed

    Tojo, Naoki; Nakamura, Tomoko; Fuchizawa, Chiharu; Oiwake, Toshihiko; Hayashi, Atsushi

    2013-01-01

    The purpose of this study was to examine cone photoreceptors in the macula of patients with retinitis pigmentosa using an adaptive optics fundus camera and to investigate any correlations between cone photoreceptor density and findings on optical coherence tomography and fundus autofluorescence. We examined two patients with typical retinitis pigmentosa who underwent ophthalmological examination, including measurement of visual acuity, and gathering of electroretinographic, optical coherence tomographic, fundus autofluorescent, and adaptive optics fundus images. The cone photoreceptors in the adaptive optics images of the two patients with retinitis pigmentosa and five healthy subjects were analyzed. An abnormal parafoveal ring of high-density fundus autofluorescence was observed in the macula in both patients. The border of the ring corresponded to the border of the external limiting membrane and the inner segment and outer segment line in the optical coherence tomographic images. Cone photoreceptors at the abnormal parafoveal ring were blurred and decreased in the adaptive optics images. The blurred area corresponded to the abnormal parafoveal ring in the fundus autofluorescence images. Cone densities were low at the blurred areas and at the nasal and temporal retina along a line from the fovea compared with those of healthy controls. The results for cone spacing and Voronoi domains in the macula corresponded with those for the cone densities. Cone densities were heavily decreased in the macula, especially at the parafoveal ring on high-density fundus autofluorescence in both patients with retinitis pigmentosa. Adaptive optics images enabled us to observe in vivo changes in the cone photoreceptors of patients with retinitis pigmentosa, which corresponded to changes in the optical coherence tomographic and fundus autofluorescence images.
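
    Cone density and spacing metrics like those analyzed above are derived from marked cone coordinates in the AO image. A minimal sketch with a synthetic cone mosaic (the grid pitch, patch size, and jitter are illustrative; Voronoi-domain analysis is omitted):

```python
import numpy as np

def cone_density_per_mm2(coords_um, patch_area_um2):
    """Cone density (cones/mm^2) from marked cone coordinates in a patch."""
    return len(coords_um) / patch_area_um2 * 1e6  # um^2 -> mm^2

def mean_nn_spacing_um(coords_um):
    """Mean nearest-neighbor spacing (um) between marked cones."""
    d = np.linalg.norm(coords_um[:, None, :] - coords_um[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    return d.min(axis=1).mean()

# Synthetic mosaic: 10x10 cones on a jittered 5-um grid in a 50x50-um patch
rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.arange(10) * 5.0, np.arange(10) * 5.0), axis=-1)
cones = grid.reshape(-1, 2) + rng.uniform(-0.5, 0.5, size=(100, 2))
density = cone_density_per_mm2(cones, 50.0 * 50.0)  # → 40000.0
spacing = mean_nn_spacing_um(cones)
```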

  4. Adaptive optics retinal imaging: emerging clinical applications.

    PubMed

    Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

    2010-12-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that obviate cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging.

  5. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    NASA Astrophysics Data System (ADS)

    Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas

    2011-03-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging rough images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.

  6. VizieR Online Data Catalog: The multiplicity of M dwarfs in young moving groups (Shan+, 2017)

    NASA Astrophysics Data System (ADS)

    Shan, Y.; Yee, J. C.; Bowler, B. P.; Cieza, L. A.; Montet, B. T.; Canovas, H.; Liu, M. C.; Close, L. M.; Hinz, P. M.; Males, J. R.; Morzinski, K. M.; Vaz, A.; Bailey, V. P.; Follette K. B.; MagAO Team

    2018-04-01

    Adaptive optics observations were conducted on the 6.5m Magellan Clay Telescope at the Las Campanas Observatory in Chile using the MagAO instrument. Images were taken with two science cameras simultaneously: Clio in the near-infrared, and VisAO in the optical. The MagAO/Clio observations in H or Ks bands span 2014 Apr 17-21 to 2015 Nov 26-27. (4 data files).

  7. 3D imaging and wavefront sensing with a plenoptic objective

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to mitigate the resolution loss associated with light-field acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, as well as two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive index changes associated with turbulence. These changes require high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance from the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.
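
    The defocus of a sodium beacon at finite range, mentioned above, can be estimated from the sag by which a spherical wavefront departs from a plane wave across the pupil. A back-of-envelope sketch (the telescope size is illustrative, and the cone effect is ignored):

```python
def lgs_defocus_waves(pupil_radius_m, beacon_range_m, wavelength_m):
    """Waves of defocus at the pupil edge for a beacon at finite range R:
    a spherical wavefront departs from a plane wave by sag r^2 / (2R)."""
    sag_m = pupil_radius_m ** 2 / (2.0 * beacon_range_m)
    return sag_m / wavelength_m

# Illustrative: 8-m telescope (4-m pupil radius), Na layer at 90 km, 589-nm line
defocus = lgs_defocus_waves(4.0, 90e3, 589e-9)  # ~151 waves
```

    Even this simple estimate shows the defocus is enormous compared to turbulence-induced aberrations, which is why it must be removed, optically or computationally, before the wavefront can be used as a reference.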

  8. Progress with the Lick adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D T; Olivier, S S; Bauman, B

    2000-03-01

    Progress and results of observations with the Lick Observatory Laser Guide Star Adaptive Optics System are presented. This system is optimized for diffraction-limited imaging in the near infrared, 1-2 micron wavelength bands. We describe our development efforts in a number of component areas, including a redesign of the optical bench layout, the commissioning of a new infrared science camera, and improvements to the software and user interface. There is also an ongoing effort to characterize the system performance with both natural and laser guide stars and to fold these data into a refined system model. Such a model can be used to help plan future observations, for example, predicting the point-spread function as a function of seeing and guide star magnitude.

  9. Development of mobile phone based transcutaneous bilirubinometry

    NASA Astrophysics Data System (ADS)

    Dumont, Alexander P.; Harrison, Brandon; McCormick, Zachary T.; Ganesh Kumar, Nishant; Patil, Chetan A.

    2017-03-01

    Infants in the US are routinely screened for risk of neurodevelopmental impairment due to neonatal jaundice using transcutaneous bilirubinometry (TcB). In low-resource settings, such as sub-Saharan Africa, TcB devices are not common; however, mobile camera-phones are now widespread. We provide an update on the development of TcB using the built-in camera and flash of a mobile phone, along with a snap-on adapter containing optical filters. We will present Monte Carlo Extreme modeling of diffuse reflectance in neonatal skin, implications for design, and refined analysis methods.

  10. Adaptive Beamforming Algorithms for High Resolution Microwave Imaging

    DTIC Science & Technology

    1991-04-01

    …frequency- and phase-locked. With a system of radio camera size it must be assumed that oscillators will drift and, similarly, that electronic circuits in… propagation-induced phase errors: an array as large as the one under discussion is likely to experience different weather conditions across it. The nominal… human optical system. Such a passing-scene display with human optical resolving power would be available to the airman at night as well as during the…

  11. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system, a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated: pixel-level DMD modulation keeps every camera pixel at a reasonable exposure intensity. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm that modulates the incident light to recover high dynamic range images. Through experiments on different objects, we demonstrate the effectiveness of our method.
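The per-pixel feedback idea described in the abstract can be sketched in a few lines. This is a hedged illustration only, not the authors' implementation: the 8-bit sensor model, the exposure-halving policy, and all function names are assumptions.

```python
import numpy as np

def recover_hdr(frame, exposure_map):
    # Dividing each reading by its known per-pixel exposure weight puts
    # all pixels back on a common radiance scale.
    return frame / np.clip(exposure_map, 1e-6, None)

# Toy scene: one very bright and one dim pixel, 8-bit sensor.
radiance = np.array([[4000.0, 20.0]])
exposure = np.ones_like(radiance)           # start fully open
for _ in range(10):                         # simple adaptive control loop
    frame = np.clip(radiance * exposure, 0, 255)
    exposure[frame >= 255] *= 0.5           # attenuate saturated pixels
hdr = recover_hdr(frame, exposure)          # per-pixel radiance estimate
```

After the loop the bright pixel is read at 1/16 exposure while the dim pixel stays fully exposed, so both land inside the sensor's range and divide back to the true radiances.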

  12. Mary, a Pipeline to Aid Discovery of Optical Transients

    NASA Astrophysics Data System (ADS)

    Andreoni, I.; Jacobs, C.; Hegarty, S.; Pritchard, T.; Cooke, J.; Ryder, S.

    2017-09-01

    The ability to quickly detect transient sources in optical images and trigger multi-wavelength follow-up is key for the discovery of fast transients. These include rare and difficult-to-detect events such as kilonovae, supernova shock breakouts, and `orphan' Gamma-ray Burst afterglows. We present the Mary pipeline, a (mostly) automated tool to discover transients during high-cadence observations with the Dark Energy Camera at Cerro Tololo Inter-American Observatory (CTIO). The observations are part of the `Deeper Wider Faster' programme, a multi-facility, multi-wavelength programme designed to discover fast transients, including counterparts to Fast Radio Bursts and gravitational waves. Our tests of the Mary pipeline on Dark Energy Camera images return a false positive rate of 2.2% and a missed fraction of 3.4%, obtained in less than 2 min, which proves the pipeline suitable for rapid and high-quality transient searches. The pipeline can be adapted to search for transients in data obtained with imagers other than the Dark Energy Camera.
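The quoted false positive rate and missed fraction are simple set comparisons between a candidate list and a truth catalogue. A minimal sketch follows; the toy numbers are invented to land near the quoted 2.2% and 3.4% and are not the actual DECam test data.

```python
def detection_metrics(candidates, truth):
    """False positive rate: fraction of reported candidates not in the
    truth set. Missed fraction: fraction of truth sources never reported."""
    candidates, truth = set(candidates), set(truth)
    fp_rate = len(candidates - truth) / len(candidates)
    missed = len(truth - candidates) / len(truth)
    return fp_rate, missed

truth = set(range(490))                            # real/injected transients
candidates = (truth - set(range(17))) | {("bogus", i) for i in range(11)}
fp, miss = detection_metrics(candidates, truth)    # ~0.023 and ~0.035
```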

  13. Performance of laser guide star adaptive optics at Lick Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S.S.; An, J.; Avicola, K.

    1995-07-19

    A sodium-layer laser guide star adaptive optics system has been developed at Lawrence Livermore National Laboratory (LLNL) for use on the 3-meter Shane telescope at Lick Observatory. The system is based on a 127-actuator continuous-surface deformable mirror, a Hartmann wavefront sensor equipped with a fast-framing low-noise CCD camera, and a pulsed solid-state-pumped dye laser tuned to the atomic sodium resonance line at 589 nm. The adaptive optics system has been tested on the Shane telescope using natural reference stars, yielding up to a factor of 12 increase in image peak intensity and a factor of 6.5 reduction in image full width at half maximum (FWHM). The results are consistent with theoretical expectations. The laser guide star system has been installed and operated on the Shane telescope, yielding a beam with 22 W average power at 589 nm. Based on experimental data, this laser should generate an 8th magnitude guide star at this site, and the integrated laser guide star adaptive optics system should produce images with Strehl ratios of 0.4 at 2.2 μm in median seeing and 0.7 at 2.2 μm in good seeing.
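The Strehl ratios quoted above compare the measured point-spread function peak with the diffraction-limited peak at equal total flux. A minimal numpy sketch with toy Gaussian PSFs; the sigmas and the two-component "partially corrected" model are illustrative assumptions, not the system's measured data.

```python
import numpy as np

def strehl_ratio(psf, ideal_psf):
    # Normalize both PSFs to unit total flux, then compare peak values.
    p = psf / psf.sum()
    d = ideal_psf / ideal_psf.sum()
    return p.max() / d.max()

x = np.arange(-32, 33)
xx, yy = np.meshgrid(x, x)
gauss = lambda sigma: np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

ideal = gauss(1.5)                               # diffraction-limited core
partial = 0.4 * gauss(1.5) + 0.6 * gauss(8.0)    # core plus seeing halo
sr = strehl_ratio(partial, ideal)                # between 0 and 1
```

A perfectly corrected PSF gives a ratio of 1; energy scattered into the halo lowers the normalized peak and hence the Strehl ratio.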

  14. Performance of the Keck Observatory adaptive-optics system.

    PubMed

    van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A

    2004-10-10

    The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: The AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 microm when a bright guide star is used and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.

  15. Conceptual design for a user-friendly adaptive optics system at Lick Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bissinger, H.D.; Olivier, S.; Max, C.

    1996-03-08

    In this paper, we present a conceptual design for a general-purpose adaptive optics system, usable with all Cassegrain facility instruments on the 3-meter Shane telescope at the University of California's Lick Observatory, located on Mt. Hamilton near San Jose, California. The overall design goal for this system is to take sodium-layer laser guide star adaptive optics technology out of the demonstration stage and to build a user-friendly astronomical tool. The emphasis will be on ease of calibration, improved stability, and operational simplicity, in order to allow the system to be run routinely by observatory staff. A prototype adaptive optics system and a 20-watt sodium-layer laser guide star system have already been built at Lawrence Livermore National Laboratory for use at Lick Observatory. The design presented in this paper is for a next-generation adaptive optics system that extends the capabilities of the prototype system into the visible with more degrees of freedom. When coupled with a laser guide star system upgraded to a power matching the new adaptive optics system, the combined system will produce diffraction-limited images for near-IR cameras. Atmospheric correction at wavelengths of 0.6-1 μm will significantly increase the throughput of the most heavily used facility instrument at Lick, the Kast Spectrograph, and will allow it to operate with smaller slit widths and deeper limiting magnitudes. 8 refs., 2 figs.

  16. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    PubMed Central

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-01-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970

  17. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications.

    PubMed

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-12-27

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.
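RPCA, as used in BRANF, decomposes the registered frame stack D into a low-rank part L (the static background) plus a sparse part S (dead pixels and fixed-pattern residue). A compact sketch of the standard inexact-ALM solver on toy data follows; the parameter choices are the usual textbook defaults, not the paper's settings.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal step for the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    # Elementwise soft threshold: proximal step for the L1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, n_iter=200):
    """Inexact ALM for D = L + S (low-rank plus sparse)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)
    Y = np.zeros_like(D)        # Lagrange multiplier
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)
    return L, S

# Toy "frame stack": a rank-1 static background plus a few hot pixels.
rng = np.random.default_rng(0)
background = np.outer(rng.random(40), rng.random(30))
noise = np.zeros((40, 30))
noise[rng.integers(0, 40, 15), rng.integers(0, 30, 15)] = 5.0
L, S = rpca(background + noise)
```

With a genuinely low-rank background and sparse outliers, L recovers the background and S isolates the hot pixels, which is the separation BRANF exploits before filtering.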

  18. Intelligent correction of laser beam propagation through turbulent media using adaptive optics

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2014-10-01

    Adaptive optics methods have long been used by researchers in the astronomy field to retrieve correct images of celestial bodies. The approach is to use a deformable mirror combined with Shack-Hartmann sensors to correct the slightly distorted image after it propagates through the Earth's atmospheric boundary layer, which can be viewed as adding relatively weak distortion in the last stage of propagation. However, the same strategy can't easily be applied to correct images propagating along a horizontal deep-turbulence path. In fact, when turbulence becomes very strong (Cn² > 10⁻¹³ m⁻²/³), only limited improvements have been made in correcting the heavily distorted images. We propose a method that reconstructs the light field reaching the camera, which then provides information for controlling a deformable mirror. An intelligent algorithm is applied that provides significant improvement in correcting images. In our work, the light field reconstruction has been achieved with a newly designed modified plenoptic camera. As a result, by actively intervening with the coherent illumination beam, or by giving it various specific pre-distortions, a better (less turbulence-affected) image can be obtained. This strategy can also be extended to much more general applications, such as correcting laser propagation through random media, and can help to improve designs for free-space optical communication systems.

  19. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or the object is further from the observer, increasing the recording device's resolution does little to improve image quality. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object, as well as "superimposed" turbulence, at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera at the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.

  20. Real-time real-sky dual-conjugate adaptive optics experiment

    NASA Astrophysics Data System (ADS)

    Knutsson, Per; Owner-Petersen, Mette

    2006-06-01

    The current status of a real-time, real-sky dual-conjugate adaptive optics experiment is presented. This experiment is a follow-up to a lab experiment at Lund Observatory that demonstrated dual-conjugate adaptive optics on a static atmosphere. The setup is to be placed at Lund Observatory, which means it will be available 24 h a day and does not have to share time with other instruments. The optical design of the experiment is finalized. A siderostat will be used to track the guide object, and all other optical components are placed on an optical table. A small telescope of 35 cm aperture is used, followed by a tip-tilt mirror and two deformable mirrors. The wave-front sensor is a Shack-Hartmann sensor using a SciMeasure Li'l Joe CCD39 camera system. The maximum update rate of the setup will be 0.5 kHz, and the control system will run under Linux. The effective wavelength will be 750 nm. All components in the setup have been acquired, and completion of the setup is underway. Collaborating partners in this project are the Applied Optics Group at the National University of Ireland, Galway, and the Swedish Defense Research Agency.

  1. Fourier-domain optical coherence tomography and adaptive optics reveal nerve fiber layer loss and photoreceptor changes in a patient with optic nerve drusen.

    PubMed

    Choi, Stacey S; Zawadzki, Robert J; Greiner, Mark A; Werner, John S; Keltner, John L

    2008-06-01

    New technology allows more precise definition of structural alterations of all retinal layers, although it has not previously been used in cases of optic disc drusen. Using Stratus and Fourier-domain (FD) optical coherence tomography (OCT) and adaptive optics (AO) through a flood-illuminated fundus camera, we studied the retinas of a patient with long-standing optic disc drusen and acute visual loss at high altitude attributed to ischemic optic neuropathy. Stratus OCT and FD-OCT confirmed severe thinning of the retinal nerve fiber layer (RNFL). FD-OCT revealed disturbances in the photoreceptor layer heretofore not described in optic disc drusen patients. AO confirmed the FD-OCT findings in the photoreceptor layer and also showed reduced cone density at retinal locations associated with reduced visual sensitivity. Based on this study, changes occur not only in the RNFL but also in the photoreceptor layer in optic nerve drusen complicated by ischemic optic neuropathy. This is the first reported application of FD-OCT and AO to this condition. Such new imaging technology may in the future allow more precise and accurate monitoring of disease progression.

  2. Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift.

    PubMed

    She, Alan; Zhang, Shuyan; Shian, Samuel; Clarke, David R; Capasso, Federico

    2018-02-01

    Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.

  3. Field-Sensitive Materials for Optical Applications

    NASA Technical Reports Server (NTRS)

    Choi, Sang H.; Little, Mark

    2002-01-01

    The purpose of this investigation is to develop the fundamental materials and fabrication technology for field-controlled spectrally active optics that are essential for industry, NASA, and DOD (Department of Defense) applications such as membrane optics, filters for LIDARs (Light Detection and Ranging), windows for sensors and probes, telescopes, spectroscopes, cameras, light valves, light switches, and flat-panel displays. The proposed idea is based on quantum-dot (QD) arrays or thin films of field-sensitive Stark and Zeeman materials and the bound excitonic state of organic crystals, which offer optical adaptability and reconfigurability. The major tasks are the development of a concept demonstration article and test data for field-controlled spectrally smart active optics (FCSAO) with optical multi-functional capabilities over a selected spectral range.

  4. Sandbox CCDs

    NASA Astrophysics Data System (ADS)

    Janesick, James R.; Elliott, Tom S.; Winzenread, Rusty; Pinter, Jeff H.; Dyck, Rudolph H.

    1995-04-01

    Seven new CCDs are presented. The devices will be used in a variety of applications, ranging from generating color cinema movies to adaptive optics camera systems that compensate for atmospheric turbulence at major astronomical observatories. This paper highlights areas of design, fabrication, and operation techniques used to achieve state-of-the-art performance. We discuss current limitations of CCD technology for several key parameters.

  5. Agreement in Cone Density Derived from Gaze-Directed Single Images Versus Wide-Field Montage Using Adaptive Optics Flood Illumination Ophthalmoscopy

    PubMed Central

    Chew, Avenell L.; Sampson, Danuta M.; Kashani, Irwin; Chen, Fred K.

    2017-01-01

    Purpose We compared cone density measurements derived from the center of gaze-directed single images with reconstructed wide-field montages using the rtx1 adaptive optics (AO) retinal camera. Methods A total of 29 eyes from 29 healthy subjects were imaged with the rtx1 camera. Of 20 overlapping AO images acquired, 12 (at 3.2°, 5°, and 7°) were used for calculating gaze-directed cone densities. Wide-field AO montages were reconstructed, and cone densities were measured at the corresponding 12 loci as determined by field projection relative to the foveal center, aligned to the foveal dip on optical coherence tomography. Limits of agreement in cone density measurement between single AO images and wide-field AO montages were calculated. Results Cone density measurement failed in 1 or more gaze directions or retinal loci in up to 58% and 33% of the subjects using single AO images or the wide-field AO montage, respectively. Although there were no significant overall differences between cone densities derived from single AO images and wide-field AO montages at any of the 12 gazes and locations (P = 0.01–0.65), the limits of agreement between the two methods ranged from as narrow as −2200 to +2600 to as wide as −4200 to +3800 cones/mm². Conclusions Cone density measurement using the rtx1 AO camera is feasible with both methods. Local variation in image quality and altered visibility of cones after generating montages may contribute to the discrepancies. Translational Relevance Cone densities from single AO images are not interchangeable with wide-field montage-derived measurements. PMID:29285417
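The limits of agreement reported above are Bland-Altman bounds: the mean paired difference plus or minus 1.96 standard deviations. A minimal sketch with made-up paired densities, not the study's data:

```python
import numpy as np

def limits_of_agreement(a, b):
    # Bland-Altman 95% limits: bias +/- 1.96 * SD of paired differences.
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias - half_width, bias + half_width

# Hypothetical paired cone densities (cones/mm^2): single image vs montage.
single  = [21500, 19800, 23000, 20400, 22100, 18900]
montage = [21900, 19500, 22500, 20800, 21600, 19400]
lo, hi = limits_of_agreement(single, montage)
```

Wider limits mean the two methods can disagree by more for an individual measurement even when the mean bias is near zero, which is why the abstract concludes the methods are not interchangeable.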

  6. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. Spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion in retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  7. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. Spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion in retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  8. Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Rha, Jungtae; Jonnal, Ravi S.; Miller, Donald T.

    2005-06-01

    Although optical coherence tomography (OCT) can axially resolve and detect reflections from individual cells, there are no reports of imaging cells in the living human retina using OCT. To supplement the axial resolution and sensitivity of OCT with the necessary lateral resolution and speed, we developed a novel spectral domain OCT (SD-OCT) camera based on a free-space parallel illumination architecture and equipped with adaptive optics (AO). Conventional flood illumination, also with AO, was integrated into the camera and provided confirmation of the focus position in the retina with an accuracy of ±10.3 μm. Short bursts of narrow B-scans (100 × 560 μm) of the living retina were subsequently acquired at 500 Hz during dynamic compensation (up to 14 Hz) that successfully corrected the most significant ocular aberrations across a dilated 6 mm pupil. Camera sensitivity (up to 94 dB) was sufficient for observing reflections from essentially all neural layers of the retina. The signal-to-noise ratio of the detected reflection from the photoreceptor layer was highly sensitive to the level of ocular aberrations and defocus, with changes of 11.4 and 13.1 dB (single pass) observed when the ocular aberrations (astigmatism, 3rd order and higher) were corrected and when the focus was shifted by 200 μm (0.54 diopters) in the retina, respectively. The 3D resolution of the B-scans (3.0 × 3.0 × 5.7 μm) is the highest reported to date in the living human eye and was sufficient to observe the interface between the inner and outer segments of individual photoreceptor cells, resolved in both lateral and axial dimensions. However, high-contrast speckle, which is intrinsic to OCT, was present throughout the AO parallel SD-OCT B-scans and obstructed correlating retinal reflections to cell-sized retinal structures.

  9. Recent Developments In High Speed Lens Design At The NPRL

    NASA Astrophysics Data System (ADS)

    Mcdowell, M. W.; Klee, H. W.

    1987-09-01

    Although the lens provides the link between the high-speed camera and the outside world, over the years there has been little evidence of cooperation between the optical design and high-speed photography communities. It is still all too common for a manufacturer to develop a camera of improved performance and resolution and then combine it with a standard camera lens. These lenses were often designed for a completely different recording medium and, more often than not, their use results in avoidable degradation of overall system performance. There is a tendency to assume that a specialized lens would be too expensive and that pushing the aperture automatically implies a more complex optical system. In the present paper, some recent South African developments in the design of large-aperture lenses are described. The application of a new design principle, based on the work of Bernhard Schmidt earlier this century, shows that ultra-fast lenses need not be overly complex: a basic four-element lens configuration can be adapted to a wide variety of applications.

  10. Apparatus for observing a hostile environment

    DOEpatents

    Nance, Thomas A.; Boylston, Micah L.; Robinson, Casandra W.; Sexton, William C.; Heckendorn, Frank M.

    2000-01-01

    An apparatus is provided for observing a hostile environment, comprising a housing and a camera capable of insertion within the housing. The housing is a double wall assembly with an inner and outer wall with an hermetically sealed chamber therebetween. A housing for an optical system used to observe a hostile environment is provided, comprising a transparent, double wall assembly. The double wall assembly has an inner wall and an outer wall with an hermetically sealed chamber therebetween. The double wall assembly has an opening and a void area in communication with the opening. The void area of the housing is adapted to accommodate the optical system within said void area. An apparatus for protecting an optical system used to observe a hostile environment is provided comprising a housing; a tube positioned within the housing; and a base for supporting the housing and the tube. The housing comprises a double wall assembly having an inner wall and an outer wall with an hermetically sealed chamber therebetween. The tube is adapted to house the optical system therein.

  11. Understanding the changes of cone reflectance in adaptive optics flood illumination retinal images over three years

    PubMed Central

    Mariotti, Letizia; Devaney, Nicholas; Lombardo, Giuseppe; Lombardo, Marco

    2016-01-01

    Although there is increasing interest in the investigation of cone reflectance variability, little is understood about its characteristics over long time scales. Cone detection and its automation is now becoming a fundamental step in the assessment and monitoring of the health of the retina and in the understanding of the photoreceptor physiology. In this work we provide an insight into the cone reflectance variability over time scales ranging from minutes to three years on the same eye, and for large areas of the retina (≥ 2.0 × 2.0 degrees) at two different retinal eccentricities using a commercial adaptive optics (AO) flood illumination retinal camera. We observed that the difference in reflectance observed in the cones increases with the time separation between the data acquisitions and this may have a negative impact on algorithms attempting to track cones over time. In addition, we determined that displacements of the light source within 0.35 mm of the pupil center, which is the farthest location from the pupil center used by operators of the AO camera to acquire high-quality images of the cone mosaic in clinical studies, does not significantly affect the cone detection and density estimation. PMID:27446708

  12. Understanding the changes of cone reflectance in adaptive optics flood illumination retinal images over three years.

    PubMed

    Mariotti, Letizia; Devaney, Nicholas; Lombardo, Giuseppe; Lombardo, Marco

    2016-07-01

    Although there is increasing interest in the investigation of cone reflectance variability, little is understood about its characteristics over long time scales. Cone detection and its automation are now becoming a fundamental step in assessing and monitoring the health of the retina and in understanding photoreceptor physiology. In this work we provide insight into cone reflectance variability over time scales ranging from minutes to three years on the same eye, and for large areas of the retina (≥ 2.0 × 2.0 degrees) at two different retinal eccentricities, using a commercial adaptive optics (AO) flood illumination retinal camera. We observed that the difference in reflectance observed in the cones increases with the time separation between data acquisitions, which may have a negative impact on algorithms attempting to track cones over time. In addition, we determined that displacements of the light source within 0.35 mm of the pupil center, the farthest location from the pupil center used by operators of the AO camera to acquire high-quality images of the cone mosaic in clinical studies, do not significantly affect cone detection and density estimation.

  13. First closed-loop visible AO test results for the advanced adaptive secondary AO system for the Magellan Telescope: MagAO's performance and status

    NASA Astrophysics Data System (ADS)

    Close, Laird M.; Males, Jared R.; Kopon, Derek A.; Gasho, Victor; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Uomoto, Alan; Hare, Tyson; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Busoni, Lorenzo; Arcidiacono, Carmelo; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando; Argomedo, Javier

    2012-07-01

    The heart of the 6.5 m Magellan AO system (MagAO) is a 585-actuator adaptive secondary mirror (ASM) with <1 ms response times (0.7 ms typically). This adaptive secondary will allow low-emissivity and high-contrast AO science. We fabricated a high-order (561 mode) pyramid wavefront sensor (similar to that now successfully used at the Large Binocular Telescope). The relatively high actuator count (and small projected ~23 cm pitch) allows moderate Strehls to be obtained by MagAO in the “visible” (0.63-1.05 μm). To take advantage of this we have fabricated an AO CCD science camera called "VisAO". Complete “end-to-end” closed-loop lab tests of MagAO achieve a solid, broad-band, 37% Strehl (122 nm rms) at 0.76 μm (i’) with the VisAO camera in 0.8” simulated seeing (13 cm r0 at V) with fast 33 mph winds and a 40 m L0, locked on an R=8 mag artificial star. These relatively high visible-wavelength Strehls are enabled by our powerful combination of a next-generation ASM and a pyramid WFS with 400 controlled modes and 1000 Hz sample speeds (similar to that used successfully on-sky at the LBT). Currently only the VisAO science camera is used for lab testing of MagAO, but this high level of measured performance (122 nm rms) promises even higher Strehls with our IR science cameras. On bright (R=8 mag) stars we should achieve very high Strehls (>70% at H) in the IR with the existing MagAO Clio2 (λ=1-5.3 μm) science camera/coronagraph, or even higher (~98% Strehl) in the mid-IR (8-26 microns) with the existing BLINC/MIRAC4 science camera in the future. To eliminate non-common-path vibrations, dispersions, and optical errors, the VisAO science camera is fed by a common-path advanced triplet ADC and is piggy-backed on the pyramid WFS optical board itself. A high-speed shutter can also be used to block periods of poor correction. The entire system passed CDR in June 2009, and we finished the closed-loop system-level testing phase in December 2011. Final system acceptance (“pre-ship” review) was passed in February 2012. In May 2012 the entire AO system was successfully shipped to Chile and fully tested/aligned. It is now in storage in the Magellan telescope clean room in anticipation of “First Light”, scheduled for December 2012. An overview of the design, attributes, performance, and schedule for the Magellan AO system and its two science cameras is briefly presented here.
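
As a sanity check (ours, not the authors'), the quoted Strehl ratios are consistent with the extended Maréchal approximation, S ≈ exp[−(2πσ/λ)²], applied to the reported 122 nm rms residual wavefront error:

```python
import math

def strehl_marechal(sigma_rms_nm: float, wavelength_nm: float) -> float:
    """Extended Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * math.pi * sigma_rms_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)

# 122 nm rms residual error, as reported for the MagAO closed-loop lab tests
print(round(strehl_marechal(122, 760), 2))   # i' band (0.76 um) -> ~0.36, matching the 37% figure
print(round(strehl_marechal(122, 1650), 2))  # H band (1.65 um) -> ~0.81, consistent with ">70% at H"
```

The same 122 nm residual gives ~99% Strehl at 8 μm, in line with the quoted ~98% mid-IR figure.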

  14. Spatial super-resolution of colored images by micro mirrors

    NASA Astrophysics Data System (ADS)

    Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev

    2018-06-01

    In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device component on the intermediate image plane of an optical system, and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera by building a dedicated optical design, or by adjusting the demosaicing process to the special format of the enhanced image.

  15. Acousto-Optic Applications for Multichannel Adaptive Optical Processor

    DTIC Science & Technology

    1992-06-01

    AO cell and the two-channel line-scan camera system described in Subsection 4.1. The AO material for this IntraAction AOD-70 device was flint glass (n… Single-Channel AO Cell: 1.68 (flint glass), 60.0; Multichannel AO Cell: 2.26 (TeO2), 20.0; Beam splitter: 1.515 (glass), 50.8. Multichannel correlation was… Tone Intermodulation Dynamic Ranges of Longitudinal TeO2 Bragg Cells for Several Acoustic Power Densities (SOURCE: Reference 21, TR-92)

  16. Overview of LBTI: A Multipurpose Facility for High Spatial Resolution Observations

    NASA Technical Reports Server (NTRS)

    Hinz, P. M.; Defrere, D.; Skemer, A.; Bailey, V.; Stone, J.; Spalding, E.; Vaz, A.; Pinna, E.; Puglisi, A.; Esposito, S.; hide

    2016-01-01

    The Large Binocular Telescope Interferometer (LBTI) is a high spatial resolution instrument developed for coherent imaging and nulling interferometry using the 14.4 m baseline of the 2x8.4 m LBT. The unique telescope design, comprising dual apertures on a common elevation-azimuth mount, enables a broad range of observing modes. The full system comprises dual adaptive optics systems, a near-infrared phasing camera, a 1-5 micrometer camera (called LMIRCam), and an 8-13 micrometer camera (called NOMIC). The key program for LBTI is the Hunt for Observable Signatures of Terrestrial planetary Systems (HOSTS), a survey using nulling interferometry to constrain the typical brightness of exozodiacal dust around nearby stars. Additional observations focus on the detection and characterization of giant planets in the thermal infrared, and on high spatial resolution imaging of complex scenes such as Jupiter's moon Io, planets forming in transition disks, and the structure of active galactic nuclei (AGN). Several instrumental upgrades are currently underway to improve and expand the capabilities of LBTI. These include improving the performance and limiting magnitude of the parallel adaptive optics systems; quadrupling the field of view of LMIRCam (increasing to 20"x20"); adding an integral field spectrometry mode; and implementing a new algorithm for path-length correction that accounts for dispersion due to atmospheric water vapor. We present the current architecture and performance of LBTI, as well as an overview of the upgrades.

  17. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  18. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs that investigate the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  19. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  20. Use of digital micromirror devices as dynamic pinhole arrays for adaptive confocal fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Pozzi, Paolo; Wilding, Dean; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel

    2018-02-01

    In this work, we present a new confocal laser scanning microscope capable of performing sensorless wavefront optimization in real time. The device is a parallelized laser scanning microscope in which the excitation light is structured into a lattice of spots by a spatial light modulator, while a deformable mirror provides aberration correction and scanning. A binary digital micromirror device (DMD) is positioned in an image plane of the detection optical path, acting as a dynamic array of reflective confocal pinholes imaged by a high-performance CMOS camera. A second camera detects images of the light rejected by the pinholes for sensorless aberration correction.

  1. On the collaborative design and simulation of space camera: STOP (structural/thermal/optical) analysis

    NASA Astrophysics Data System (ADS)

    Duan, Pengfei; Lei, Wenping

    2017-11-01

    A number of disciplines (mechanics, structures, thermal, and optics) are needed to design and build a space camera. Separate design models are normally constructed with each discipline's CAD/CAE tools. Design and analysis are conducted largely in parallel, subject to the requirements levied on each discipline, and technical interaction between the different disciplines is limited and infrequent. As a result, a unified view of the space camera design across discipline boundaries is not directly possible in this approach, and generating one would require a large, manual, and error-prone process. A collaborative environment built on an abstract model and performance templates allows engineering data and CAD/CAE results to be shared across these discipline boundaries within a common interface, helping to attain speedy multivariate design and to directly evaluate optical performance under environmental loadings. A small interdisciplinary engineering team from the Beijing Institute of Space Mechanics and Electricity has recently conducted a structural/thermal/optical (STOP) analysis of a space camera with this collaborative environment. STOP analysis evaluates the changes in image quality that arise from structural deformations as the thermal environment of the camera changes throughout its orbit. STOP analyses were conducted for four different test conditions applied during final thermal vacuum (TVAC) testing of the payload on the ground. The STOP simulation process begins with importing an integrated CAD model of the camera geometry into the collaborative environment, within which: (1) independent thermal and structural meshes are generated; (2) the thermal mesh and relevant engineering data for material properties and thermal boundary conditions are used to compute temperature distributions at nodal points in both the thermal and structural meshes with Thermal Desktop, a COTS thermal design and analysis code; (3) thermally induced structural deformations of the camera are evaluated in Nastran, an industry-standard code for structural design and analysis; (4) thermal and structural results are imported into SigFit, another COTS tool, which computes deformations and best-fit rigid-body displacements for the optical surfaces; and (5) SigFit creates a modified optical prescription that is imported into CODE V for evaluation of optical performance impacts. The integrated STOP analysis was validated using TVAC test data. For the four TVAC tests, the relative errors between simulated and measured temperatures at the measurement points were around 5%, and in some test conditions as low as 1%. As for the image-quality metric MTF, the relative error between simulation and test was 8.3% in the worst condition; all others were below 5%. This validation demonstrates that the collaborative design and simulation environment can perform the integrated STOP analysis of a space camera efficiently. Moreover, the collaborative environment allows an interdisciplinary analysis that formerly might have taken several months to be completed in two or three weeks, which is well suited to project scheme demonstration in the early stages.
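
The thermal-to-structural-to-optical data flow can be sketched as a simple orchestration. The stage functions below are placeholders with toy physics standing in for the COTS tools named in the record (Thermal Desktop, Nastran, SigFit, CODE V), whose actual interfaces and exchange formats are not described here:

```python
# Hypothetical STOP pipeline sketch; all physics is placeholder.

def solve_temperatures(thermal_mesh, boundary_conditions):
    # Thermal solve: nodal temperatures (placeholder: sink + fixed rise).
    return {node: boundary_conditions["sink_K"] + 5.0 for node in thermal_mesh}

def solve_deformations(structural_mesh, temperatures, cte=2.3e-6):
    # Thermo-elastic solve: strain ~ CTE * delta-T from a 293 K reference.
    return {n: cte * (temperatures[n] - 293.0) for n in structural_mesh}

def fit_optical_surfaces(deformations):
    # Surface fitting step (placeholder: mean displacement as surface error).
    return sum(deformations.values()) / len(deformations)

def evaluate_mtf(surface_error):
    # Optical performance evaluation (placeholder sensitivity model).
    return max(0.0, 0.5 - 1e4 * abs(surface_error))

mesh = ["n1", "n2", "n3"]
temps = solve_temperatures(mesh, {"sink_K": 293.0})
defs_ = solve_deformations(mesh, temps)
mtf = evaluate_mtf(fit_optical_surfaces(defs_))
print(round(mtf, 4))
```

The point of the sketch is the one-way data dependency between stages, which is what the collaborative environment automates across tool boundaries.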

  2. Status of the GTC adaptive optics: integration in laboratory

    NASA Astrophysics Data System (ADS)

    Reyes García-Talavera, M.; Béjar, V. J. S.; López, J. C.; López, R. L.; Martín, C.; Martín, Y.; Montilla, I.; Núñez, M.; Puga, M.; Rodríguez, L. F.; Tenegi, F.; Tubío, O.; Bello, D.; Cavaller, L.; Prieto, G.; Rosado, M.

    2016-07-01

    Since the beginning of the development of the Gran Telescopio Canarias (GTC), an Adaptive Optics (AO) system was considered necessary to exploit the full diffraction-limited potential of the telescope. The GTC AO system designed over the past several years is based on a single deformable mirror conjugated to the telescope pupil and a Shack-Hartmann wavefront sensor with 20 x 20 subapertures, using an OCAM2 camera. The GTCAO system will provide a corrected beam with a Strehl Ratio (SR) of 0.65 in K-band with bright natural guide stars. Most of the subsystems have been manufactured and delivered. An upgrade for operation with a Laser Guide Star (LGS) has recently been approved. The present status of the GTCAO system, currently in its laboratory integration phase, is summarized in this paper.

  3. Avalanche photo diodes in the observatory environment: lucky imaging at 1-2.5 microns

    NASA Astrophysics Data System (ADS)

    Vaccarella, A.; Sharp, R.; Ellis, M.; Singh, S.; Bloxham, G.; Bouchez, A.; Conan, R.; Boz, R.; Bundy, D.; Davies, J.; Espeland, B.; Hart, J.; Herrald, N.; Ireland, M.; Jacoby, G.; Nielsen, J.; Vest, C.; Young, P.; Fordham, B.; Zovaro, A.

    2016-08-01

    The recent availability of large-format near-infrared detectors with sub-electron readout noise is revolutionizing our approach to wavefront sensing for adaptive optics. However, as with all near-infrared detector technologies, challenges exist in moving from the comfort of the laboratory test bench into the harsh reality of the observatory environment. As part of the broader adaptive optics program for the GMT, we are developing a near-infrared Lucky Imaging camera for operational deployment at the ANU 2.3 m telescope at Siding Spring Observatory. The system provides an ideal test bed for the rapidly evolving Selex/SAPHIRA eAPD technology while providing scientific imaging at angular resolution rivalling the Hubble Space Telescope at wavelengths λ = 1.3-2.5 μm.

  4. Advances in instrumentation at the W. M. Keck Observatory

    NASA Astrophysics Data System (ADS)

    Adkins, Sean M.; Armandroff, Taft E.; Johnson, James; Lewis, Hilton A.; Martin, Christopher; McLean, Ian S.; Wizinowich, Peter

    2012-09-01

    In this paper we describe both recently completed instrumentation projects and our current development efforts in terms of their role in the strategic plan, the key science areas they address, and their performance as measured or predicted. Projects reaching completion in 2012 include MOSFIRE, a near IR multi-object spectrograph, a laser guide star adaptive optics facility on the Keck I telescope, and an upgrade to the guide camera for the HIRES instrument on Keck I. Projects in development include a new seeing limited integral field spectrograph for the visible wavelength range called the Keck Cosmic Web Imager (KCWI), an upgrade to the telescope control systems on both Keck telescopes, a near-IR tip/tilt sensor for the Keck I adaptive optics system, and a new grating for the OSIRIS integral field spectrograph.

  5. Evaluation of white-to-white distance and anterior chamber depth measurements using the IOL Master, slit-lamp adapted optical coherence tomography and digital photographs in phakic eyes.

    PubMed

    Wilczyński, Michał; Pośpiech-Zabierek, Aleksandra

    2015-01-01

    The accurate measurement of the anterior chamber internal diameter and depth is important in ophthalmic diagnosis and before some eye surgery procedures. The purpose of the study was to compare white-to-white distance measurements performed using the IOL-Master and photography with the internal anterior chamber diameter determined using slit-lamp adapted optical coherence tomography in healthy eyes, and to compare anterior chamber depth measurements by the IOL-Master and slit-lamp adapted optical coherence tomography. The data were gathered prospectively from a non-randomized consecutive series of patients. The examined group consisted of 46 eyes of 39 patients. White-to-white distance was measured using the IOL-Master, and photographs of the eye were taken with a digital camera. Internal anterior chamber diameter was measured with slit-lamp adapted optical coherence tomography. Anterior chamber depth was measured using the IOL-Master and slit-lamp adapted optical coherence tomography. Statistical analysis was performed using parametric tests, and a Bland-Altman plot was drawn. White-to-white distance by the IOL-Master was 11.8 ± 0.40 mm, on photographs it was 11.29 ± 0.58 mm, and internal anterior chamber diameter by slit-lamp adapted optical coherence tomography was 11.34 ± 0.54 mm. A significant difference was found between the IOL-Master and slit-lamp adapted optical coherence tomography (p<0.01), as well as between the IOL-Master and digital photographs (p<0.01). There was no difference between SL-OCT and digital photographs (p>0.05). All measurements were correlated (Spearman, p<0.001). Mean anterior chamber depth determined using the IOL-Master was 2.99 ± 0.50 mm and by slit-lamp adapted optical coherence tomography was 2.56 ± 0.46 mm. The difference was statistically significant (p<0.001). The correlation between the values was also statistically significant (Spearman, p<0.001). Automated measurements using the IOL-Master yield consistently higher values than measurements based on direct visualization of the eye (slit-lamp adapted optical coherence tomography and digital photographs). In order to obtain accurate measurements of the internal anterior chamber diameter and anterior chamber depth, a method involving direct visualization of intraocular structures should be used.

  6. Lensless imaging for wide field of view

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Yagi, Yasushi

    2015-02-01

    It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches to achieving a wide FOV, such as attaching a fisheye lens or convex mirrors, require a trade-off between optics size and FOV. We propose camera optics that achieve a wide FOV while remaining small and lightweight. The proposed optics are a completely lensless, catoptric design containing four mirrors: two for wide viewing and two for focusing the image on the camera sensor. The optics are simple and easily miniaturized, since they use only mirrors and are therefore not susceptible to chromatic aberration. We have implemented prototype optics of our lensless concept, attached them to commercial charge-coupled device/complementary metal-oxide-semiconductor cameras, and conducted experiments to evaluate the feasibility of the proposed optics.

  7. Measurement of retinal wall-to-lumen ratio by adaptive optics retinal camera: a clinical research.

    PubMed

    Meixner, Eva; Michelson, Georg

    2015-11-01

    To measure the wall-to-lumen ratio (WLR) and the cross-sectional area of the vascular wall (WCSA) of retinal arterioles with an Adaptive Optics (AO) retinal camera. Forty-seven human subjects were examined and their medical history was explored. WLR and WCSA were measured on the basis of retinal arteriolar wall thickness (VW), lumen diameter (LD) and vessel diameter (VD) assessed by the rtx1 Adaptive Optics retinal camera. WLR was calculated by the formula [Formula: see text]. The arterio-venous ratio (AVR) and microvascular abnormalities were obtained by quantitative and qualitative assessment of fundus photographs. The influence of age, arterial hypertension, body mass index (BMI) and retinal microvascular abnormalities on the WLR was examined. An age-adjusted WLR was created to test influences on WLR independently of age. Considering WLR and WCSA together, a distinction between eutrophic and hypertrophic retinal remodeling processes was possible. The intra-observer variability (IOV) was 6% ± 0.9 for arteriolar wall thickness and 2% ± 0.2 for arteriolar wall thickness plus vessel lumen. WLR depended significantly on the wall thickness (r = 0.715; p < 0.01) of retinal arterioles, but was independent of the total vessel diameter (r = 0.052; p = 0.728). WLR correlated significantly with age (r = 0.769; p < 0.01). Arterial hypertension and a higher BMI were significantly associated with an increased age-adjusted WLR. WLR correlated significantly with the stage of microvascular abnormalities. 55% of the hypertensive subjects and 11% of the normotensive subjects showed eutrophic remodeling, while hypertrophic remodeling was not detectable. WLR correlated inversely with AVR. AVR was independent of arteriolar wall thickness, age and arterial hypertension. The technique of AO retinal imaging allows a direct measurement of the retinal vessel wall and lumen diameter with good intra-observer variability. Age, arterial hypertension and an elevated BMI are significantly associated with an increased WLR. The wall-to-lumen ratio measured by AO can be used to detect structural retinal microvascular alterations at an early stage of the remodeling process.
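
The WLR formula itself is elided in this record ("[Formula: see text]"). A definition commonly used in this literature, and an assumption on our part rather than a quote from the paper, is WLR = (VD − LD)/LD, equivalently 2·VW/LD when VD = LD + 2·VW:

```python
import math

def wall_to_lumen_ratio(vessel_diameter_um, lumen_diameter_um):
    """Assumed definition: WLR = (VD - LD) / LD, i.e. 2*VW/LD for VD = LD + 2*VW."""
    return (vessel_diameter_um - lumen_diameter_um) / lumen_diameter_um

def wall_cross_sectional_area(vessel_diameter_um, lumen_diameter_um):
    """WCSA: annular area between the outer vessel and lumen circles (um^2)."""
    return math.pi / 4.0 * (vessel_diameter_um**2 - lumen_diameter_um**2)

# Illustrative arteriole: 100 um outer diameter, 80 um lumen (10 um walls)
print(wall_to_lumen_ratio(100.0, 80.0))                  # 0.25
print(round(wall_cross_sectional_area(100.0, 80.0), 1))  # 2827.4
```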

  8. Adaptive optics high-resolution IR spectroscopy with silicon grisms and immersion gratings

    NASA Astrophysics Data System (ADS)

    Ge, Jian; McDavitt, Daniel L.; Chakraborty, Abhijit; Bernecker, John L.; Miller, Shane

    2003-02-01

    The breakthrough of silicon immersion grating technology at Penn State has the potential to revolutionize high-resolution infrared spectroscopy when coupled with adaptive optics at large ground-based telescopes. Fabrication of high-quality silicon grisms and immersion gratings up to 2 inches in dimension, with less than 1% integrated scattered light and diffraction-limited performance, has become a routine process thanks to newly developed techniques. Silicon immersion gratings with etched dimensions of ~4 inches are being developed at Penn State. These immersion gratings will be able to provide a diffraction-limited spectral resolution of R = 300,000 at 2.2 microns, or 130,000 at 4.6 microns. Prototype silicon grisms have been successfully used in initial scientific observations at the Lick 3 m telescope with adaptive optics. Complete K-band spectra of a total of 6 T Tauri and Ae/Be stars and their close companions at a spectral resolution of R ~ 3000 were obtained. This resolving power was achieved by using a silicon echelle grism with a 5 mm pupil diameter in an IR camera. These results represent the first scientific observations conducted with high-resolution silicon grisms, and demonstrate the extremely high dispersing power of silicon-based gratings. New discoveries from this high spatial and spectral resolution IR spectroscopy will be reported. The future of silicon-based grating applications in ground-based AO IR instruments is promising. Silicon immersion gratings will make very high-resolution spectroscopy (R > 100,000) feasible with compact instruments for implementation on large telescopes. Silicon grisms will offer an efficient way to implement low-cost medium- to high-resolution IR spectroscopy (R ~ 1000-50,000) by converting existing cameras into spectrometers with a grism placed at the instrument's pupil.
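
The quoted resolving powers are consistent with the standard Littrow formula for an immersion grating, R = 2·n·W·sin(θ_blaze)/λ, where immersion in silicon multiplies the resolving power by the refractive index n. A quick check with assumed values that are not stated in the record:

```python
import math

def littrow_resolving_power(n_index, grating_length_m, blaze_deg, wavelength_m):
    """R = 2 * n * W * sin(theta_blaze) / lambda for an immersion grating
    of illuminated length W and internal refractive index n, used in Littrow."""
    return (2.0 * n_index * grating_length_m
            * math.sin(math.radians(blaze_deg)) / wavelength_m)

# Assumed, illustrative parameters: silicon n ~ 3.4, a ~4 inch (0.10 m)
# grating, and an R3 blaze angle (71.6 deg) -- none are given in the record.
R_K = littrow_resolving_power(3.4, 0.10, 71.6, 2.2e-6)
R_M = littrow_resolving_power(3.4, 0.10, 71.6, 4.6e-6)
print(f"R(2.2 um) ~ {R_K:.0f}, R(4.6 um) ~ {R_M:.0f}")
```

With these assumptions the formula lands near the quoted R = 300,000 at 2.2 μm and in the right range at 4.6 μm.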

  9. Suspension and simple optical characterization of two-dimensional membranes

    NASA Astrophysics Data System (ADS)

    Northeast, David B.; Knobel, Robert G.

    2018-03-01

    We report on a method for suspending two-dimensional crystal materials in an electronic circuit using only photoresists and solvents. Graphene and NbSe2 are suspended tens of nanometers above metal electrodes with clamping diameters of several microns. The optical cavity formed by the membrane/air/metal structure enables a quick method of measuring the number of layers and the gap separation using comparisons between the expected colour and optical microscope images. This characterization technique requires only an illuminated microscope with a digital camera, which makes it adaptable to environments where other means of characterization are not possible, such as inside the nitrogen glove boxes used in handling oxygen-sensitive materials.

  10. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  11. Design of smartphone-based spectrometer to assess fresh meat color

    NASA Astrophysics Data System (ADS)

    Jung, Youngkee; Kim, Hyun-Wook; Kim, Yuan H. Brad; Bae, Euiwon

    2017-02-01

    Based on its integrated camera, a new optical attachment, and inherent computing power, we propose an instrument design and validation that can potentially provide an objective and accurate method to determine surface meat color change and myoglobin redox forms using a smartphone-based spectrometer. The system is designed as a reflection spectrometer that mimics the conventional spectrometry commonly used for meat color assessment. We utilize 3D printing to make an optical cradle that holds all of the optical components for light collection, collimation, and dispersion, plus a suitable chamber. Light reflected from a sample enters a pinhole and is subsequently collimated by a convex lens. A diffraction grating spreads the wavelengths over the camera's pixels to display a high-resolution spectrum. Pixel values in the smartphone image are calibrated to wavelength using three laser pointers of different wavelengths: 405, 532, and 650 nm. Using an in-house app, the camera images are converted into a spectrum in the visible wavelength range based on the exterior light source. A controlled experiment simulating the refrigeration and shelving of meat was conducted, and the results showed the capability to accurately measure color change in a quantitative and spectroscopic manner. We expect that this technology can be adapted to any smartphone and used to conduct a field-deployable color spectrum assay as a practical application tool for various food sectors.
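
The three-laser calibration described above amounts to fitting a dispersion model that maps pixel position to wavelength. A sketch with assumed, illustrative pixel positions (the real positions depend on the phone camera and grating geometry):

```python
import numpy as np

# Assumed pixel positions at which the three calibration laser lines fall
pixels = np.array([212.0, 531.0, 827.0])
lines_nm = np.array([405.0, 532.0, 650.0])

# First-order (linear) dispersion model: lambda = a * pixel + b
a, b = np.polyfit(pixels, lines_nm, 1)

def pixel_to_wavelength(p):
    return a * p + b

# The fit should reproduce the calibration lines to within a couple of nm
for p, w in zip(pixels, lines_nm):
    print(f"pixel {p:.0f}: fitted {pixel_to_wavelength(p):.1f} nm (line at {w:.0f} nm)")
```

A linear model is adequate over the visible range for a simple transmission grating; a quadratic term can be added if residuals warrant it.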

  12. Development of a wavefront sensor for terahertz pulses.

    PubMed

    Abraham, Emmanuel; Cahyadi, Harsono; Brossard, Mathilde; Degert, Jérôme; Freysz, Eric; Yasui, Takeshi

    2016-03-07

    Wavefront characterization of terahertz pulses is essential for optimizing the far-field intensity distribution of time-domain (imaging) spectrometers or increasing the peak power of intense terahertz sources. In this paper, we report on the wavefront measurement of terahertz pulses using a Hartmann sensor combined with a 2D electro-optic imaging system composed of a ZnTe crystal and a CMOS camera. We quantitatively determined the deformations of planar and converging spherical wavefronts using the modal Zernike least-squares reconstruction method. Combined with deformable mirrors, the sensor will also open the route to terahertz adaptive optics.
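
A Hartmann sensor measures local wavefront slopes, and modal least-squares reconstruction fits Zernike coefficients to those slopes. A minimal sketch of the idea (ours, with an assumed subaperture grid and only three modes, not the paper's implementation):

```python
import numpy as np

# Synthetic Hartmann geometry: subaperture centres on a grid inside the unit pupil
xg, yg = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
mask = xg**2 + yg**2 <= 1.0
x, y = xg[mask], yg[mask]

# Slope influence matrix for three Zernike modes: tip (x), tilt (y), and
# defocus sqrt(3)*(2r^2 - 1), whose gradient is (4*sqrt(3)*x, 4*sqrt(3)*y).
s3 = np.sqrt(3.0)
Ax = np.column_stack([np.ones_like(x), np.zeros_like(x), 4 * s3 * x])  # x-slopes
Ay = np.column_stack([np.zeros_like(y), np.ones_like(y), 4 * s3 * y])  # y-slopes
A = np.vstack([Ax, Ay])

# Forward-simulate slopes for known coefficients, then recover by least squares
true_c = np.array([0.5, -0.2, 0.1])
slopes = A @ true_c
recovered, *_ = np.linalg.lstsq(A, slopes, rcond=None)
print(np.round(recovered, 3))
```

In the noiseless case the least-squares fit recovers the coefficients exactly; with measurement noise, more subapertures per mode improve conditioning.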

  13. An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream with a massively parallel approach, and they are efficient enough to solve several Adaptive Optics (AO) problems for Extremely Large Telescopes (ELTs) within their processing-time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier transforms (FFTs). We have therefore carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523

  15. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important photoelectric information acquisition technology: it provides higher dynamic range and more image detail, and better reflects the light and color information of the real environment. Currently, methods of high dynamic range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes; they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. First, differently exposed image sequences were captured with the camera array, using a derivative optical flow method based on color gradients to obtain the deviation between images and align them. Then, the high dynamic range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and was applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
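
    The fusion step can be sketched as a Debevec-style weighted merge, here simplified to a linear camera response (the paper recovers the inverse response function from data and adds the inter-image deviation term):

```python
import numpy as np

def hat_weight(z):
    # triangle weight: trust mid-range pixels, distrust near-0 and near-saturated ones
    return 1.0 - np.abs(2.0 * z - 1.0)

def merge_hdr(images, times):
    # images: arrays normalized to [0, 1], one per exposure time in `times`;
    # a linear camera response is assumed for this sketch
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for z, t in zip(images, times):
        w = hat_weight(z)
        num += w * z / t          # per-exposure radiance estimate, weighted
        den += w
    return num / np.maximum(den, 1e-8)
```

    With unclipped linear data the merge recovers the scene radiance exactly; the weighting only matters where one exposure is near saturation or near the noise floor.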

  16. Detailed Morphological Changes of Foveoschisis in Patient with X-Linked Retinoschisis Detected by SD-OCT and Adaptive Optics Fundus Camera.

    PubMed

    Akeo, Keiichiro; Kameya, Shuhei; Gocho, Kiyoko; Kubota, Daiki; Yamaki, Kunihiko; Takahashi, Hiroshi

    2015-01-01

    Purpose. To report the morphological and functional changes associated with a regression of foveoschisis in a patient with X-linked retinoschisis (XLRS). Methods. A 42-year-old man with XLRS underwent genetic analysis and detailed ophthalmic examinations. Functional assessments included best-corrected visual acuity (BCVA), full-field electroretinograms (ERGs), and multifocal ERGs (mfERGs). Morphological assessments included fundus photography, spectral-domain optical coherence tomography (SD-OCT), and adaptive optics (AO) fundus imaging. After the baseline clinical data were obtained, topical dorzolamide was applied to the patient. The patient was followed for 24 months. Results. A reported RS1 gene mutation was found (P203L) in the patient. At the baseline, his decimal BCVA was 0.15 in the right and 0.3 in the left eye. Fundus photographs showed bilateral spoke wheel-appearing maculopathy. SD-OCT confirmed the foveoschisis in the left eye. The AO images of the left eye showed spoke wheel retinal folds, and the folds were thinner than those in fundus photographs. During the follow-up period, the foveal thickness in the SD-OCT images and the number of retinal folds in the AO images were reduced. Conclusions. We have presented the detailed morphological changes of foveoschisis in a patient with XLRS detected by SD-OCT and AO fundus camera. However, the findings do not indicate whether the changes were influenced by topical dorzolamide or the natural history.

  17. Cone photoreceptor definition on adaptive optics retinal imaging

    PubMed Central

    Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon

    2014-01-01

    Aims To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. Methods High resolution retinal images were acquired from 10 healthy subjects, aged 20–35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. Results At 5° eccentricity, the cone density (cones/mm2 mean±SD) was 15.3±1.4×103 (automated) and 13.9±1.0×103 (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Conclusions Imaging data acquired from the AO camera match cone density, intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects. PMID:24729030
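
    The two headline metrics, cone density and inter-photoreceptor distance, can be computed from detected cone centres roughly as follows (a brute-force sketch; the study additionally applies Voronoi domain and power-spectrum analyses):

```python
import numpy as np

def cone_metrics(coords, area_mm2):
    # coords: (N, 2) cone centre positions in micrometres;
    # area_mm2: sampled retinal area in square millimetres
    density = coords.shape[0] / area_mm2                  # cones per mm^2
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                           # ignore self-distances
    mean_nn = d.min(axis=1).mean()                        # mean nearest-neighbour distance, um
    return density, mean_nn
```

    The O(N^2) distance matrix is fine for a single AO frame; a k-d tree would be the usual choice for larger montages.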

  18. Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype

    NASA Astrophysics Data System (ADS)

    Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille

    2012-06-01

    The domain of low-light imaging systems is progressing very fast, thanks to the evolution of detection and electron-multiplication technology such as the emCCD (electron-multiplying CCD) or the ebCMOS (electron-bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target tracking algorithm, designed and implemented on a rugged workstation, is described. The results and performance of the system on identification and tracking are presented and discussed.
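
    A core step of any multiple-target tracker is data association between predicted track positions and new spot detections. A greedy nearest-neighbour sketch (illustrative only, not the LUSIPHER implementation):

```python
import numpy as np

def associate(tracks, detections, gate):
    # Greedy nearest-neighbour association: each predicted track position
    # grabs the closest unused detection within a gating radius `gate`.
    pairs, used = [], set()
    for i, t in enumerate(tracks):
        d = np.linalg.norm(detections - t, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

    With well-separated Shack-Hartmann spots and small inter-frame motion, gating plus nearest-neighbour matching is usually sufficient; denser or crossing targets call for global assignment (e.g. Hungarian algorithm).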

  19. Research on a solid state-streak camera based on an electro-optic crystal

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang

    2006-06-01

    With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image-converting tube with a wavelength-sensitive photocathode ranging from the x-ray to the near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by direct photon-beam deflection using the electro-optic effect, and can replace the current streak camera from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10⁻¹² m/V), and an optimized optical system may lead to a time resolution better than 1 ns.

  20. ADAPTIVE OPTICS IMAGING OF FOVEAL SPARING IN GEOGRAPHIC ATROPHY SECONDARY TO AGE-RELATED MACULAR DEGENERATION.

    PubMed

    Querques, Giuseppe; Kamami-Levy, Cynthia; Georges, Anouk; Pedinielli, Alexandre; Capuano, Vittorio; Blanco-Garavito, Rocio; Poulon, Fanny; Souied, Eric H

    2016-02-01

    To describe adaptive optics (AO) imaging of foveal sparing in geographic atrophy (GA) secondary to age-related macular degeneration. Flood-illumination AO infrared (IR) fundus images were obtained in four consecutive patients with GA using an AO retinal camera (rtx1; Imagine Eyes). Adaptive optics IR images were overlaid with confocal scanning laser ophthalmoscope near-IR autofluorescence images to allow direct correlation of en face AO features with areas of foveal sparing. Adaptive optics appearance of GA and foveal sparing, preservation of functional photoreceptors, and cone densities in areas of foveal sparing were investigated. In 5 eyes of 4 patients (all female; mean age 74.2 ± 11.9 years), a total of 5 images, sized 4° × 4°, of foveal sparing visualized on confocal scanning laser ophthalmoscope near-IR autofluorescence were investigated by AO imaging. En face AO images revealed GA as regions of inhomogeneous hyperreflectivity with irregularly dispersed hyporeflective clumps. By direct comparison with adjacent regions of GA, foveal sparing appeared as well-demarcated areas of reduced reflectivity with fewer hyporeflective clumps (mean 14.2 vs. 3.2; P = 0.03). Of note, in these areas, en face AO IR images revealed cone photoreceptors as hyperreflective dots over the background reflectivity (mean cone density 3,271 ± 1,109 cones per square millimeter). Microperimetry demonstrated residual function in areas of foveal sparing detected by confocal scanning laser ophthalmoscope near-IR autofluorescence. Adaptive optics allows the appreciation of differences in reflectivity between regions of GA and foveal sparing. Preservation of functional cone photoreceptors was demonstrated on en face AO IR images in areas of foveal sparing detected by confocal scanning laser ophthalmoscope near-IR autofluorescence.

  1. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using a stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
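
    The color crosstalk correction can be viewed as linear unmixing of the two recorded channels; a minimal sketch with a hypothetical 2×2 mixing matrix:

```python
import numpy as np

def unmix_channels(img, M):
    # img: H x W x 2 recorded (blue, red) intensities; M: 2x2 crosstalk
    # mixing matrix with observed = M @ true per pixel. Inverting M
    # recovers the crosstalk-free channel images.
    Minv = np.linalg.inv(M)
    return img @ Minv.T
```

    In practice M would be calibrated by imaging targets that illuminate only one optical path at a time; the matrix values here are assumptions.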

  2. An optimized adaptive optics experimental setup for in vivo retinal imaging

    NASA Astrophysics Data System (ADS)

    Balderas-Mata, S. E.; Valdivieso González, L. G.; Ramírez Zavaleta, G.; López Olazagasti, E.; Tepichin Rodriguez, E.

    2012-10-01

    The use of Adaptive Optics (AO) in ophthalmologic instruments to image human retinas has been proven to improve lateral imaging resolution by correcting both the static and dynamic aberrations inherent in human eyes. Typically, the configuration of the AO arm uses an infrared beam from a superluminescent diode (SLD), which is focused on the retina and acts as a point source. The back-reflected light emerges through the eye's optical system, bringing with it the aberrations of the cornea. The aberrated wavefront is measured with a Shack-Hartmann wavefront sensor (SHWFS). However, aberrations in the optical imaging system can reduce the performance of the wavefront correction. The aim of this work is to present an optimized first-stage AO experimental setup for in vivo retinal imaging. In our proposal, the imaging optical system has been designed to reduce spherical aberrations due to the lenses. The ANSI Standard is followed, assuring safe power levels. The performance of the system will be compared with a commercial aberrometer. This system will be used as the AO arm of a flood-illuminated fundus camera system for retinal imaging. We present preliminary experimental results showing the enhancement.

  3. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    NASA Astrophysics Data System (ADS)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. To this end, an adaptive least-squares algorithm is applied to fit parametrized curves to the true coil outline detected in the acquired image. The employed model allows for strictly separating the intrinsic and the extrinsic parameters, so the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by simple geometric primitives.
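
    In the same least-squares spirit, fitting a simple geometric primitive to detected outline points can be sketched with an algebraic (Kåsa) circle fit; the paper's parametrized cylinder-outline model is more involved:

```python
import numpy as np

def fit_circle(x, y):
    # Kasa algebraic least-squares circle fit: rewrite
    # (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c,
    # which is linear in (a, b, c), with c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)
```

    The algebraic fit is a good initializer; for noisy outlines a geometric (orthogonal-distance) refinement is usually added.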

  4. Compact instrument for fluorescence image-guided surgery

    NASA Astrophysics Data System (ADS)

    Wang, Xinghua; Bhaumik, Srabani; Li, Qing; Staudinger, V. Paul; Yazdanfar, Siavash

    2010-03-01

    Fluorescence image-guided surgery (FIGS) is an emerging technique in oncology, neurology, and cardiology. To adapt intraoperative imaging for various surgical applications, increasingly flexible and compact FIGS instruments are necessary. We present a compact, portable FIGS system and demonstrate its use in cardiovascular mapping in a preclinical model of myocardial ischemia. Our system uses fiber optic delivery of laser diode excitation, custom optics with high collection efficiency, and compact consumer-grade cameras as a low-cost and compact alternative to open surgical FIGS systems. Dramatic size and weight reduction increases flexibility and access, and allows for handheld use or unobtrusive positioning over the surgical field.

  5. KAPAO first light: the design, construction and operation of a low-cost natural guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Severson, Scott A.; Choi, Philip I.; Badham, Katherine E.; Bolger, Dalton; Contreras, Daniel S.; Gilbreth, Blaine N.; Guerrero, Christian; Littleton, Erik; Long, Joseph; McGonigle, Lorcan P.; Morrison, William A.; Ortega, Fernando; Rudy, Alex R.; Wong, Jonathan R.; Spjut, Erik; Baranec, Christoph; Riddle, Reed

    2014-07-01

    We present the instrument design and first light observations of KAPAO, a natural guide star adaptive optics (AO) system for the Pomona College Table Mountain Observatory (TMO) 1-meter telescope. The KAPAO system has dual science channels with visible and near-infrared cameras, a Shack-Hartmann wavefront sensor, and a commercially available 140-actuator MEMS deformable mirror. The pupil relays are two pairs of custom off-axis parabolas and the control system is based on a version of the Robo-AO control software. The AO system and telescope are remotely operable, and KAPAO is designed to share the Cassegrain focus with the existing TMO polarimeter. We discuss the extensive integration of undergraduate students in the program including the multiple senior theses/capstones and summer assistantships amongst our partner institutions. This material is based upon work supported by the National Science Foundation under Grant No. 0960343.

  6. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    PubMed Central

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2008-01-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt’s macular dystrophy and retinitis pigmentosa. PMID:17429482
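
    The offset between two overlapping en-face frames, needed before montaging, can be estimated by phase correlation (one standard registration approach, not necessarily the authors' feature-based algorithm):

```python
import numpy as np

def register_offset(a, b):
    # Phase correlation: returns the integer (row, col) circular shift s
    # such that b ~ np.roll(a, s, axis=(0, 1)). The normalized cross-power
    # spectrum of a and b is a pure phase ramp whose inverse FFT peaks at s.
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    r = np.abs(np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)))
    idx = np.unravel_index(np.argmax(r), r.shape)
    return tuple(i if i <= s // 2 else i - s for i, s in zip(idx, r.shape))
```

    Subpixel accuracy, needed for clean cone mosaics, is typically obtained by interpolating around the correlation peak.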

  7. An adaptive optics package designed for astronomical use with a laser guide star tuned to an absorption line of atomic sodium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salmon, J.T.; Avicola, K.; Brase, J.M.

    1994-04-11

    We present the design and implementation of a very compact adaptive optics system that senses the return light from a sodium guide star and controls a deformable mirror and a pointing mirror to compensate for atmospheric perturbations in the wavefront. The deformable mirror has 19 electrostrictive actuators and triangular subapertures. The wavefront sensor is a Hartmann sensor with lenslets on triangular centers. The high-bandwidth steering mirror assembly incorporates an analog controller that samples the tilt with an avalanche photodiode quad cell. An f/25 imaging leg focuses the light into a science camera that can obtain either long-exposure images or speckle data. In laboratory tests, overall Strehl ratios were improved by a factor of 3 when a mylar sheet was used as an aberrator. The crossover frequency at unity gain is 30 Hz.

  8. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.

  9. Adaptive DFT-based Interferometer Fringe Tracking

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) observatory at Mt. Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse.
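
    The heart of a DFT-based fringe tracker is extracting the carrier phase, and hence the optical path difference, from a single scan. A single-bin DFT sketch (hypothetical scan parameters, not the IOTA flight code):

```python
import numpy as np

def fringe_phase(scan, carrier_bin):
    # Phase of the fringe carrier in one interferogram scan, via a
    # single-bin DFT: correlate the scan with a complex reference at the
    # carrier frequency and take the angle of the sum.
    n = len(scan)
    ref = np.exp(-2j * np.pi * carrier_bin * np.arange(n) / n)
    return np.angle(np.sum(scan * ref))
```

    In a sliding-window tracker this sum is updated incrementally as new samples arrive, which is what makes millisecond-scale loop rates feasible; the phase estimate then drives the piezo scanners to null the OPD.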

  10. Optical Transient Monitor (OTM) for BOOTES Project

    NASA Astrophysics Data System (ADS)

    Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.

    2003-04-01

    The Optical Transient Monitor (OTM) is software for controlling the three wide-field and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC-based and is a powerful tool for taking images from two SBIG CCD cameras at the same time or from one camera only. The control program for the BOOTES cameras is Windows 98 or MS-DOS based; a version for Windows 2000 is now in preparation. There are five main supported modes of operation. The OTM program can control the cameras and evaluate image data without human interaction.

  11. A new adaptive light beam focusing principle for scanning light stimulation systems.

    PubMed

    Bitzer, L A; Meseth, M; Benson, N; Schmechel, R

    2013-02-01

    In this article a novel principle for achieving optimal focusing conditions, i.e. the smallest possible beam diameter, in scanning light stimulation systems is presented. It is based on the following methodology: first, a reference point on a camera sensor is introduced at which optimal focusing conditions are adjusted, and the distance between the focusing optic and the reference point is determined using a laser displacement sensor. In a second step, this displacement sensor is used to map the topography of the sample under investigation. Finally, the actual measurement is conducted using optimal focusing conditions at each measurement point on the sample surface, determined from the height difference between the camera sensor and the sample topography. This principle is independent of the measurement values, the optical or electrical properties of the sample, the light source used, or the selected wavelength. Furthermore, the samples can be tilted, rough, bent, or of different surface materials. In the following, the principle is implemented using an optical beam induced current system, but in principle it can be applied to any other scanning light stimulation system. Measurements demonstrating its operation on a polycrystalline silicon solar cell are shown.

  12. Cone photoreceptor definition on adaptive optics retinal imaging.

    PubMed

    Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon

    2014-08-01

    To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. High resolution retinal images were acquired from 10 healthy subjects, aged 20-35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0, 2, 3, 5 and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. At 5° eccentricity, the cone density (cones/mm², mean±SD) was 15.3±1.4×10³ (automated) and 13.9±1.0×10³ (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2 to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2, 3 and 5° of retinal eccentricity. Imaging data acquired from the AO camera match cone density, intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2 to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects.

  13. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances finding stereo correspondences. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps makes the scale of the scene observable. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
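
    The Kalman-like per-pixel update described above reduces to inverse-variance weighting of two depth hypotheses; a minimal sketch:

```python
def fuse_depth(d1, var1, d2, var2):
    # Kalman-style fusion of two independent estimates of one pixel's depth:
    # inverse-variance weighting. The fused variance is always <= min(var1, var2),
    # so confidence only grows as micro-image observations accumulate.
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    d = var * (d1 / var1 + d2 / var2)
    return d, var
```

    Applying this update sequentially over all micro-images that observe a scene patch yields the probabilistic depth map the abstract describes.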

  14. The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO

    NASA Astrophysics Data System (ADS)

    Crass, Jonathan; King, David; Mackay, Craig

    2013-12-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement arises because Shack-Hartmann wavefront sensors (SHWFS) distribute the incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky, leading to the use of laser guide stars, which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging based multi-CCD imaging camera. We present the current optical design of the instrument, including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.

  15. Exploring the imaging properties of thin lenses for cryogenic infrared cameras

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Verdet, Sebastien; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Grulois, Tatiana; Matallah, Noura

    2016-05-01

    Designing a cryogenic camera is a good strategy for miniaturizing and simplifying an infrared camera using a cooled detector. Indeed, integrating the optics inside the cold shield makes it simple to athermalize the design, guarantees a cold pupil, and relaxes the constraint of a long back focal length for short-focal-length systems. In this way, cameras made of a single lens or two lenses are viable systems with good optical features and good stability in image correction. However, this involves a relatively significant additional optical mass inside the dewar and thus increases the cool-down time of the camera. ONERA is currently exploring a minimalist strategy consisting of giving an imaging function to the thin optical plates that are found in conventional dewars. In this way, we could make a cryogenic camera that has the same cool-down time as a traditional dewar without an imaging function. Two examples will be presented. The first is a camera using a dual-band infrared detector, made of a lens outside the dewar and a lens inside the cold shield, the latter carrying the main optical power of the system; we were able to design a cold plano-convex lens with a thickness lower than 1 mm. The second example is an evolution of a former cryogenic camera called SOIE: we replaced the cold meniscus with a plano-convex Fresnel lens, decreasing the optical thermal mass by 66%. The performances of both cameras will be compared.

  16. A dual-band adaptor for infrared imaging.

    PubMed

    McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L

    2012-05-01

    A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Torus Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being largely independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX compared with that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium-wavelength and long-wavelength IR. This two-band IR adaptor utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency, and projects the two IR channel images side by side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.
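The emissivity independence claimed above comes from ratioing the two bands: for a gray body the emissivity cancels in the ratio, and temperature follows from it under the Wien approximation. A sketch (the band-center wavelengths are illustrative values, not the NSTX filter passbands):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp, eps=1.0):
    """Spectral intensity in the Wien approximation, I ~ eps * lam^-5 * exp(-C2/(lam*T))."""
    return eps * lam**-5 * math.exp(-C2 / (lam * temp))

def two_color_temperature(ratio, lam1, lam2):
    """Gray-body temperature from the intensity ratio I(lam1)/I(lam2).

    With equal emissivity in both bands:
        ln(ratio) = 5*ln(lam2/lam1) - (C2/T)*(1/lam1 - 1/lam2)
    which solves directly for T.
    """
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * math.log(lam2 / lam1) - math.log(ratio))

# Round trip at illustrative band centers of 5 um and 8.5 um:
r = wien_intensity(5e-6, 600.0) / wien_intensity(8.5e-6, 600.0)
# two_color_temperature(r, 5e-6, 8.5e-6) recovers 600 K
```

In practice each channel integrates over a finite passband and the detector response, so a real instrument is calibrated against a blackbody source rather than evaluated analytically.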

  17. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high-precision optical registration requirements of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. This paper proposes a system-integration optical registration scheme, and analyzes its accuracy, for a spaceborne low-light remote sensing camera with short focal depth and wide field of view, including an analysis of the parallel misalignment of the CCD. Actual registration results show that the imaging is clear and that the MTF and the registration accuracy meet requirements, providing an important guarantee for obtaining high-quality image data in orbit.

  18. Our solution for fusion of simultaneously acquired whole body scintigrams and optical images: a useful tool in clinical practice in patients with differentiated thyroid carcinoma after radioiodine therapy.

    PubMed

    Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija

    2017-01-01

    After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions of iodine-avid tissue; sometimes precise topographic localization of such regions is practically impossible. To address this problem, we have developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneous with routine scintigraphic whole-body acquisition, including an algorithm for fusing the images obtained from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS mode, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a model C905 web camera (Logitech, USA) with Carl Zeiss® optics, native resolution 1600×1200 pixels, 34° field of view and 30 g weight, with the autofocus option turned "off" and auto white balance turned "on". The web camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglas adapter. Our own Vision-Fusion software for image acquisition and coregistration was developed in the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) with two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web camera. The Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts web-camera image acquisition before image acquisition starts on the GC and stops it when the GC completes its acquisition. 
The web camera runs in continuous acquisition mode with a frame rate f that depends on the speed v of patient bed movement (f = v/Δcm, where Δcm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default Δcm is set to 1 cm, corresponding to Δp = 15 pixels). All images captured while the patient's bed is moving are processed. Movement of the patient's bed is checked using cross-correlation of two successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement Δp in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image; stacking of narrow central ROIs introduces negligible distortion. The first step in fusing the scintigram and the optical image was determining the spatial transformation between them. We performed an experiment with two markers (point sources of 1 MBq 99mTc pertechnetate) visible in both the WBS and optical images to find the coordinate transformation between them. The distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, average age 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq. Whole-body scintigraphy and optical image acquisition were performed 72 hours after administration of radioiodine therapy. 
Based on our first results during clinical testing, we conclude that the system can improve the diagnostic ability of whole body scintigraphy to detect thyroid remnant tissue in patients with DTC after radioiodine therapy.
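The strip-stacking and bed-motion check described above can be sketched as follows (an illustrative NumPy re-implementation, not the authors' LabVIEW code; the correlation threshold is an assumption):

```python
import numpy as np

def stack_central_rois(frames, delta_p=15):
    """Build a whole-body optical image by stacking the central
    horizontal strip (height delta_p pixels) of each web-camera frame,
    mirroring the Vision-Fusion approach described above."""
    strips = []
    for frame in frames:
        h = frame.shape[0]
        top = h // 2 - delta_p // 2
        strips.append(frame[top:top + delta_p, :])
    return np.vstack(strips)

def bed_moved(prev, curr, threshold=0.999):
    """Detect bed movement between successive frames: near-perfect
    normalized cross-correlation means the scene did not move."""
    a = prev.ravel() - prev.mean()
    b = curr.ravel() - curr.mean()
    corr = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return corr < threshold
```

With the paper's defaults (1600-pixel-wide frames, Δp = 15), each processed frame contributes one 15-pixel strip to the growing whole-body image.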

  19. Effects of age, blood pressure and antihypertensive treatments on retinal arterioles remodeling assessed by adaptive optics.

    PubMed

    Rosenbaum, David; Mattina, Alessandro; Koch, Edouard; Rossant, Florence; Gallo, Antonio; Kachenoura, Nadjia; Paques, Michel; Redheuil, Alban; Girerd, Xavier

    2016-06-01

    In humans, an adaptive optics camera enables precise, large-scale, noninvasive evaluation of the retinal microcirculation, allowing the respective roles of ageing, blood pressure and antihypertensive treatment on retinal arteriolar anatomy to be assessed. We used the rtx1 adaptive optics camera (Imagine-Eyes, Orsay, France) to measure the wall thickness and internal diameter of retinal arterioles and to calculate the wall-to-lumen ratio (WLR) and wall cross-sectional area. This assessment was repeated within a short period in two subgroups of hypertensive individuals, without or with a drug-induced blood pressure drop. In 1000 individuals, mean wall thickness, lumen diameter and WLR were 23.2 ± 3.9 μm, 78.0 ± 10.9 μm and 0.300 ± 0.054, respectively. Blood pressure and age both independently increased WLR by thickening the arterial wall. In contrast, hypertension narrowed the lumen in younger as compared with older individuals (73.2 ± 9.0 vs. 81.7 ± 10.2 μm; P < 0.001), whereas age exerted no influence on lumen diameter. A short-term blood pressure drop (-29.3 ± 17.3/-14.4 ± 10.0 mmHg) induced a WLR decrease (-6.0 ± 8.0%) through lumen dilatation (+4.4 ± 5.9%) without wall thickness changes; by contrast, no modifications were observed in individuals with stable blood pressure. In treated and controlled hypertensives under monotherapy, WLR normalization was observed through a combined wall-thickness decrease and lumen dilatation, independently of antihypertensive pharmacological class. In multivariate analysis, the hypertension drug regimen was not an independent predictor of any retinal anatomical index. Retinal arteriolar remodeling thus comprises blood pressure- and age-driven wall thickening as well as blood pressure-triggered lumen narrowing in younger individuals. The remodeling reversal observed in controlled hypertensives seems to include short-term functional and long-term structural changes.
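The reported means are mutually consistent if WLR is taken as wall thickness divided by internal diameter (23.2/78.0 ≈ 0.297 ≈ 0.300). A small sketch of the derived indices (these formulas are our reading of the definitions, not quoted from the paper):

```python
import math

def wall_to_lumen_ratio(wall_thickness, lumen_diameter):
    # Consistent with the reported means: 23.2 / 78.0 ~ 0.297 ~ 0.300
    return wall_thickness / lumen_diameter

def wall_cross_sectional_area(wall_thickness, lumen_diameter):
    """Annulus area between outer and inner vessel boundaries (um^2),
    assuming one wall thickness added on each side of the lumen."""
    r_in = lumen_diameter / 2.0
    r_out = r_in + wall_thickness
    return math.pi * (r_out**2 - r_in**2)
```

Both indices are geometry-only transforms of the two measured quantities, which is why wall thickening and lumen narrowing can each drive WLR upward independently.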

  20. KAPAO-Alpha: An On-The-Sky Testbed for Adaptive Optics on Small Aperture Telescopes

    NASA Astrophysics Data System (ADS)

    Morrison, Will; Choi, P. I.; Severson, S. A.; Spjut, E.; Contreras, D. S.; Gilbreth, B. N.; McGonigle, L. P.; Rudy, A. R.; Xue, A.; Baranec, C.; Riddle, R.

    2012-05-01

    We present initial in-lab and on-sky results of a natural guide star adaptive optics instrument, KAPAO-Alpha, being deployed on Pomona College’s 1-meter telescope at Table Mountain Observatory. The instrument is an engineering prototype designed to help us identify and solve design and integration issues before building KAPAO, a low-cost, dual-band, natural guide star AO system currently in active development and scheduled for first light in 2013. The Alpha system operates at visible wavelengths, employs Shack-Hartmann wavefront sensing, and is assembled entirely from commercially available components, including off-the-shelf optics, a 140-actuator BMC deformable mirror, a high-speed SciMeasure Lil’ Joe camera, and an EMCCD for science image acquisition. Wavefront reconstruction at 1-kHz rates is handled by a consumer-grade computer running custom software adapted from the Robo-AO project. The assembly and integration of the Alpha instrument has been undertaken as a Pomona College undergraduate thesis. As part of the larger KAPAO project, it is supported by the National Science Foundation under Grant No. 0960343.

  1. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    NASA Astrophysics Data System (ADS)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept in which the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and for the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the CMOS sensor of the mobile device's visible-range camera. This paper presents a SWIR objective optical design and optimization that is form- and fit-compatible with the visible objective design but uses different lenses, in order to maintain commonality, as a proof of concept. Such a SWIR objective design is very challenging, since it requires mimicking the sizes of the original visible mobile camera lenses and the mechanical housing so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.

  2. Mach-zehnder based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, the markers or combs are indicia that serve as timing pulses (markers) or as a constant-frequency train of optical pulses (comb), imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator, which modulates the reference signal to a higher-frequency optical signal that is output through a fiber-coupled link to the streak camera.

  3. Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund

    NASA Technical Reports Server (NTRS)

    Hagyard, Mona J.

    1992-01-01

    The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image onto four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.

  4. Two-dimensional angular transmission characterization of CPV modules.

    PubMed

    Herrero, R; Domínguez, C; Askins, S; Antón, I; Sala, G

    2010-11-08

    This paper proposes a fast method to characterize the two-dimensional angular transmission function of a concentrator photovoltaic (CPV) system. The so-called inverse method, which has been used in the past for the characterization of small optical components, has been adapted to large-area CPV modules. In the inverse method, the receiver cell is forward biased to produce a Lambertian light emission, which reveals the reverse optical path of the optics. Using a large-area collimator mirror, the light beam exiting the optics is projected on a Lambertian screen to create a spatially resolved image of the angular transmission function. An image is then obtained using a CCD camera. To validate this method, the angular transmission functions of a real CPV module have been measured by both direct illumination (flash CPV simulator and sunlight) and the inverse method, and the comparison shows good agreement.

  5. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of a binocular optical imaging system, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. Experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ distribution of the imaging gray difference D(x, y) between the left and right optical systems does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
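The acceptance criterion above can be sketched directly (a minimal illustration; reading the 5% limit as a fraction of the full grayscale range is our assumption):

```python
import numpy as np

def imaging_consistency_ok(left, right, full_scale=255.0, limit=0.05):
    """Evaluate left/right imaging consistency as described above.

    D(x, y) is the per-pixel grayscale difference between the two
    registered images and sigma its standard deviation; the pair
    passes when 3*sigma stays within `limit` of full scale.
    Returns (passed, sigma).
    """
    d = np.asarray(left, dtype=float) - np.asarray(right, dtype=float)
    sigma = d.std()
    return 3.0 * sigma <= limit * full_scale, sigma
```

Note that a constant brightness offset between the two channels does not affect σ; the criterion penalizes spatially varying differences, which is what degrades stereo matching.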

  6. Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA

    NASA Astrophysics Data System (ADS)

    Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki

    2017-11-01

    SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5-38 μm. MCS consists of two relay optical modules and the following four scientific optical modules: WFC (Wide Field Camera; 5' × 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000) and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most parts of the MCS optics adopt an off-axis reflective system, covering the wide wavelength range of 5-38 μm without chromatic aberration and minimizing problems due to changes in the shapes and refractive indices of materials from room temperature to cryogenic temperature. In order to achieve the demanding specification requirements of wide field of view, small F-number and large spectral resolving power in a compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance of diffraction-limited image resolution.

  7. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  8. Optical fiducial timing system for X-ray streak cameras with aluminum coated optical fiber ends

    DOEpatents

    Nilson, David G.; Campbell, E. Michael; MacGowan, Brian J.; Medecki, Hector

    1988-01-01

    An optical fiducial timing system is provided for use with interdependent groups of X-ray streak cameras (18). The aluminum-coated (80) ends of optical fibers (78) are positioned with the photocathodes (20, 60, 70) of the X-ray streak cameras (18). The other ends of the optical fibers (78) are placed together in a bundled array (90). A fiducial optical signal (96), comprised of 2ω or 1ω laser light, after introduction to the bundled array (90), travels to the aluminum-coated (82) optical fiber ends and ejects quantities of electrons (84) that are recorded on the data recording media (52) of the X-ray streak cameras (18). Since both 2ω and 1ω laser light can travel long distances in optical fiber with only slight attenuation, the initial areal power density of the fiducial optical signal (96) is well below the damage threshold of the fused silica or other material comprising the optical fibers (78, 90). Thus the fiducial timing system can be used repeatedly over long durations of time.

  9. Study on portable optical 3D coordinate measuring system

    NASA Astrophysics Data System (ADS)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital close-range photogrammetry (CRP) technology and binocular stereo vision theory is investigated. Three highly stable infrared LEDs are set on a hand-held target to provide measurement features and establish the target coordinate system. Ray-intersection-based field calibration of the intersecting binocular measurement system, composed of two cameras, is performed with a reference ruler. The hand-held target, controlled over a Bluetooth wireless connection, is moved freely to perform contact measurement. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained with the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball with residual error correction, an object point can be resolved by transfer of axes, using the target coordinate system as an intermediary. This system is suitable for on-site large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and high degree of automation. Tests show that the measuring precision is close to ±0.1 mm/m.

  10. Scalar wave-optical reconstruction of plenoptic camera images.

    PubMed

    Junker, André; Stenau, Tim; Brenner, Karl-Heinz

    2014-09-01

    We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications on this topic numerically simulate light propagation on the basis of ray tracing. However, due to the continuing miniaturization of hardware components, it can be assumed that, in combination with low-aperture optical systems, this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.

  11. Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

    NASA Technical Reports Server (NTRS)

    Sidick, Erikin

    2009-01-01

    A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot shifts measurably from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use extended scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If the image is viable, it can be used for error correction; if it is not, it fails and is not processed further. By testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds the image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, for example from m = -5 to 5 along the x-direction, by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. If the resulting shift-estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted; otherwise, it is rejected.
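The self-test described above can be sketched with FFT cross-correlation standing in for the ACC estimator (an illustration, not the flight algorithm; integer shifts via `np.roll` wrap around, so the estimate is exact for a viable, textured cell):

```python
import numpy as np

def estimate_shift(ref, test):
    """Integer-pixel shift estimate via FFT cross-correlation
    (a stand-in here for the adaptive cross-correlation estimator)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(test)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def scene_is_viable(cell, shifts=range(-5, 6), tol=0.03):
    """Failsafe check: shift the test cell by known integer amounts
    along x, re-estimate each shift, and accept the scene only if
    every estimate error stays below `tol` pixels."""
    for s in shifts:
        _, dx = estimate_shift(cell, np.roll(cell, s, axis=1))
        if abs(dx - s) > tol:
            return False
    return True
```

A featureless cell has a flat correlation surface, so its estimated shifts collapse to zero and the check rejects it, which is exactly the failsafe behavior the record describes.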

  12. Texture-adaptive hyperspectral video acquisition system with a spatial light modulator

    NASA Astrophysics Data System (ADS)

    Fang, Xiaojing; Feng, Jiao; Wang, Yongjin

    2014-10-01

    We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a grayscale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM. The subsampled points can be adaptively selected according to the texture characteristics of the scene by combining digital image analysis and computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT). We also demonstrate the effectiveness of the sampling pattern on the SLM with the proposed method.
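The texture-adaptive selection can be illustrated with a single-level Haar transform standing in for the WT stage (a sketch; the block size, sample budget and the Haar choice are our assumptions, not the authors' parameters):

```python
import numpy as np

def haar_detail_energy(img):
    """Single-level 2-D Haar detail energy of a block, used here as a
    simple texture measure (stand-in for the paper's WT analysis)."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return float((lh**2 + hl**2 + hh**2).sum())

def allocate_samples(image, block=8, budget=256):
    """Distribute a fixed SLM sampling budget across image blocks in
    proportion to their detail energy (illustrative scheme)."""
    h, w = image.shape
    energies = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            energies[(y, x)] = haar_detail_energy(image[y:y + block, x:x + block])
    total = sum(energies.values()) or 1.0
    return {k: round(budget * e / total) for k, e in energies.items()}
```

Smooth blocks receive few or no samples, so the fixed SLM budget concentrates where the high-resolution RGB stream shows fine texture.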

  13. Multi-color pyrometry imaging system and method of operating the same

    DOEpatents

    Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde

    2017-03-21

    A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different from the first predetermined wavelength band.

  14. Instrument performance enhancement and modification through an extended instrument paradigm

    NASA Astrophysics Data System (ADS)

    Mahan, Stephen Lee

    An extended instrument paradigm is proposed, developed and shown in various applications. The CBM (Chin, Blass, Mahan) method is an extension of the linear systems model of observing systems. In the most obvious and practical application, image enhancement for an instrument characterized by a time-invariant instrumental response function, CBM can be used to enhance images or spectra through a simple convolution with the CBM filter, for a resolution improvement of as much as a factor of two. The CBM method can be used in many applications, several of which we discuss within this work, including imaging through turbulent atmospheres, or what we have called Adaptive Imaging. Adaptive Imaging provides an alternative approach for the investigator desiring results similar to those obtainable with adaptive optics, but on a minimal budget. The CBM method is also used in a backprojected filtered image reconstruction method for Positron Emission Tomography. In addition, we can use information-theoretic methods to aid in determining model instrumental response function parameters for images of unknown origin. Another application presented herein involves the use of the CBM method to determine the continuum level of a Fourier transform spectrometer observation of ethylene, which provides a means of obtaining reliable intensity measurements in an automated manner. We also present the application of CBM to hyperspectral image data of the comet Shoemaker-Levy 9 impact with Jupiter, taken with a CCD camera equipped with an acousto-optic tunable filter on an adaptive optics telescope.

  15. A liquid crystal microlens array with aluminum and graphene electrodes for plenoptic imaging

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Luo, Jun; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng

    2015-12-01

    Currently, several semiconducting oxide materials, such as the typical indium tin oxide, are widely used as transparent conducting electrodes (TCEs) in liquid crystal microlens arrays. In this paper, we fabricate a liquid crystal microlens array using graphene rather than semiconducting oxides as the TCE. Common optical experiments are carried out to acquire the focusing features of the electrically driven graphene-based liquid crystal microlens array (GLCMLA). The acquired optical fields show that the GLCMLA can converge incident collimated light efficiently. The relationship between the focal length and the applied voltage signal is presented. The GLCMLA is then deployed in a plenoptic camera prototype and raw images are acquired to verify its imaging capability. Our experiments demonstrate that graphene already shows broad application prospects in the area of adaptive optics.

  16. RESTORATION OF ATMOSPHERICALLY DEGRADED IMAGES. VOLUME 3.

    DTIC Science & Technology

    AERIAL CAMERAS, LASERS, ILLUMINATION, TRACKING CAMERAS, DIFFRACTION, PHOTOGRAPHIC GRAIN, DENSITY, DENSITOMETERS, MATHEMATICAL ANALYSIS, OPTICAL SCANNING, SYSTEMS ENGINEERING, TURBULENCE, OPTICAL PROPERTIES, SATELLITE TRACKING SYSTEMS.

  17. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high-speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length, in whole multiples of the first channel's optical path length, into which the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
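The stated figures fix the per-channel optical delay directly: 6 frames spread over 9 × 10⁻⁸ s require successive path lengths differing by c·Δt. A back-of-the-envelope sketch (assuming evenly spaced frames filling the window):

```python
C = 2.998e8            # speed of light in vacuum, m/s

frames = 6
window = 9e-8          # total recording window, s
dt = window / frames   # inter-frame spacing, s (even spacing assumed)
path_step = C * dt     # extra optical path per successive delay channel, m

# dt = 1.5e-8 s, so each delay channel is roughly 4.5 m longer than the last
```

The multi-meter path increments explain why the patent folds the delays into a multi-channel relay system rather than a straight path.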

  18. FOCAL PLANE WAVEFRONT SENSING USING RESIDUAL ADAPTIVE OPTICS SPECKLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codona, Johanan L.; Kenworthy, Matthew, E-mail: jlcodona@gmail.com

    2013-04-20

    Optical imperfections, misalignments, aberrations, and even dust can significantly limit sensitivity in high-contrast imaging systems such as coronagraphs. An upstream deformable mirror (DM) in the pupil can be used to correct or compensate for these flaws, either to enhance the Strehl ratio or suppress the residual coronagraphic halo. Measurement of the phase and amplitude of the starlight halo at the science camera is essential for determining the DM shape that compensates for any non-common-path (NCP) wavefront errors. Using DM displacement ripples to create a series of probe and anti-halo speckles in the focal plane has been proposed for space-based coronagraphs and successfully demonstrated in the lab. We present the theory and first on-sky demonstration of a technique to measure the complex halo using the rapidly changing residual atmospheric speckles at the 6.5 m MMT telescope using the Clio mid-IR camera. The AO system's wavefront sensor measurements are used to estimate the residual wavefront, allowing us to approximately compute the rapidly evolving phase and amplitude of the speckle halo. When combined with relatively short, synchronized science camera images, the complex speckle estimates can be used to interferometrically analyze the images, leading to an estimate of the static diffraction halo with NCP effects included. In an operational system, this information could be collected continuously and used to iteratively correct quasi-static NCP errors or suppress imperfect coronagraphic halos.

  19. Optical designs for the Mars '03 rover cameras

    NASA Astrophysics Data System (ADS)

    Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.

    2001-12-01

    In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.

  20. Adaptive optics self-calibration using differential OTF (dOTF)

    NASA Astrophysics Data System (ADS)

    Rodack, Alexander T.; Knight, Justin M.; Codona, Johanan L.; Miller, Kelsey L.; Guyon, Olivier

    2015-09-01

    We demonstrate self-calibration of an adaptive optical system using differential OTF [Codona, J. L.; Opt. Eng. 52(9), 097105 (2013); doi:10.1117/1.OE.52.9.097105]. We use a deformable mirror (DM) along with science camera focal plane images to implement a closed-loop servo that both flattens the DM and corrects for non-common-path aberrations within the telescope. The pupil field modification required for dOTF measurement is introduced by displacing actuators near the edge of the illuminated pupil. Simulations were used to develop methods to retrieve the phase from the complex dOTF measurements for both segmented and continuous-sheet MEMS DMs, and tests were performed using a Boston Micromachines continuous-sheet DM for verification. We compute the actuator correction updates directly from the phase of the dOTF measurements, reading out displacements and/or slopes at segment and actuator positions. Through simulation, we also explore the effectiveness of these techniques over a range of photon counts collected in each dOTF exposure pair.
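
    The dOTF measurement itself reduces to simple Fourier arithmetic: record one focal-plane PSF with the pupil unmodified and one with a small localized pupil change, transform each PSF into an OTF, and subtract. A toy numpy sketch under assumed parameters (a circular pupil, a Gaussian phase bump, and an amplitude-blocking "poke" standing in for the displaced actuators; none of these values come from the paper):

```python
import numpy as np

# Toy dOTF sketch (assumed setup, not the authors' pipeline).
N = 256
y, x = np.indices((N, N)) - N // 2
pupil = (np.hypot(x, y) < N // 4).astype(complex)

# A static aberration we would like to sense (assumed Gaussian phase bump).
phase = 0.3 * np.exp(-((x - 20) ** 2 + y ** 2) / 400.0)
field = pupil * np.exp(1j * phase)

def otf(field):
    """Form the PSF of a pupil field, then Fourier transform it into an OTF."""
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))) ** 2
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))

# Localized pupil-edge modification: block a small patch near the pupil edge
# (an amplitude stand-in for the paper's displaced-actuator phase poke).
poke = np.hypot(x - N // 4 + 2, y) < 3
dotf = otf(field * ~poke) - otf(field)

# One lobe of the complex dOTF is an image of the pupil field; its argument
# approximates the pupil phase up to an overall constant.
recovered = np.angle(dotf)
```

    In the real system the two exposures come from the science camera, and the phase read out at actuator positions drives the closed-loop correction.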

  1. Subaru Near Infrared Coronagraphic Images of T Tauri

    NASA Astrophysics Data System (ADS)

    Mayama, Satoshi; Tamura, Motohide; Hayashi, Masahiko; Itoh, Yoichi; Fukagawa, Misato; Suto, Hiroshi; Ishii, Miki; Murakawa, Koji; Oasa, Yumiko; Hayashi, Saeko S.; Yamashita, Takuya; Morino, Junichi; Oya, Shin; Naoi, Takahiro; Pyo, Tae-Soo; Nishikawa, Takayuki; Kudo, Tomoyuki; Usuda, Tomonori; Ando, Hiroyasu; Miyama, Shoken M.; Kaifu, Norio

    2006-04-01

    High angular resolution near-infrared (JHK) adaptive optics images of T Tau were obtained with the infrared camera Coronagraphic Imager with Adaptive Optics (CIAO) mounted on the 8.2 m Subaru Telescope in 2002 and 2004. The images resolve a complex circumstellar structure around a multiple system. We resolved T Tau Sa and Sb as well as T Tau N and S. The estimated orbit of T Tau Sb indicates that it is probably bound to T Tau Sa. The K band flux of T Tau S decreased by ˜1.7 Jy in 2002 November compared with that in 2001, mainly because T Tau Sa became fainter. The arc-like ridge detected in our near-infrared images is consistent with what is seen at visible wavelengths, supporting the interpretation in previous studies that the arc is part of the cavity wall seen relatively pole-on. Halo emission is detected out to ˜2'' from T Tau N. This may be light scattered off the common envelope surrounding the T Tauri multiple system.

  2. The PALM-3000 high-order adaptive optics system for Palomar Observatory

    NASA Astrophysics Data System (ADS)

    Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa

    2008-07-01

    Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 349 active actuator woofer and a 3368 active actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and a visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.

  3. Using Arago's spot to monitor optical axis shift in a Petzval refractor.

    PubMed

    Bruns, Donald G

    2017-03-10

    Measuring the change in the optical alignment of a camera attached to a telescope is necessary to perform astrometric measurements. Camera movement when the telescope is refocused changes the plate constants, invalidating the calibration. Monitoring the shift in the optical axis requires a stable internal reference source. This is easily implemented in a Petzval refractor by adding an illuminated pinhole and a small obscuration that creates a spot of Arago on the camera. Measurements of the optical axis shift for a commercial telescope are given as an example.

  4. Laser guide star pointing camera for ESO LGS Facilities

    NASA Astrophysics Data System (ADS)

    Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.

    2014-08-01

    Every observatory using LGS-AO routinely experiences the long time needed to bring the laser guide star into the wavefront sensor field of view and acquire it. This is mostly due to the difficulty of creating LGS pointing models, given the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer for multiple-LGS systems. In this context, optimizing the absolute pointing accuracy of LGS systems is important for boosting the time efficiency of both science and technical observations. In this paper we present the rationale, design, and feasibility tests of an LGS Pointing Camera (LPC), conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC assists in pointing the four LGS while the VLT performs its initial active optics cycles on a natural star target after a preset. The LPC minimizes the accuracy required of LGS pointing model calibrations while reaching sub-arcsecond LGS absolute pointing accuracy, considerably reducing the LGS acquisition time and operational overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture Maksutov telescope mounted on the top ring of VLT UT4, running Linux and acting as a server for the 4LGSF client. The camera recognizes the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a byproduct, once calibrated, the LPC can report for each LGS its return flux, its FWHM, and the uplink beam scattering levels.

  5. Thermal Remote Sensing with Uav-Based Workflows

    NASA Astrophysics Data System (ADS)

    Boesch, R.

    2017-08-01

    Climate change will have a significant influence on vegetation health and growth. Predictions of higher mean summer temperatures and prolonged summer droughts may pose a threat to agricultural areas and forest canopies. Rising canopy temperatures can be an indicator of plant stress because of the closure of stomata and a decrease in the transpiration rate. Thermal cameras have been available for decades, but are still often used for single-image analysis, only in an oblique-view manner, or with visual evaluation of video sequences. Remote sensing using a thermal camera can therefore be an important data source for understanding transpiration processes. Photogrammetric workflows allow thermal images to be processed much like RGB data, but the low spatial resolution of thermal cameras, significant optical distortion, and typically low contrast require an adapted workflow. Temperature distribution in forest canopies is typically completely unknown and less distinct than for urban or industrial areas, where metal constructions and surfaces yield high contrast and sharp edge information. The aim of this paper is to investigate the influence of interior camera orientation, tie point matching, and ground control points on the resulting accuracy of bundle adjustment and dense cloud generation with a typical photogrammetric workflow for UAV-based thermal imagery in natural environments.

  6. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    NASA Astrophysics Data System (ADS)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
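
    The quoted fourth-power scaling follows from a back-of-envelope count of the dense reconstruction matrix-vector multiply: both the number of wavefront sensor slopes and the number of actuators grow as (D/d)^2 for subaperture size d, so the per-frame multiply costs O(D^4). A sketch with assumed illustrative numbers (0.2 m subapertures, 1 kHz frame rate; not the paper's benchmark configuration):

```python
# Back-of-envelope sketch of why AO RTC compute scales as D^4.
# All numbers are assumed illustrative values, not the paper's benchmarks.
def mvm_gflops(d_tel_m, subap_m=0.2, rate_hz=1000.0):
    n_sub = (d_tel_m / subap_m) ** 2       # subapertures across the pupil
    n_slopes = 2 * n_sub                   # x and y slope per subaperture
    n_act = n_sub                          # ~1 actuator per subaperture
    flop_per_frame = 2 * n_slopes * n_act  # dense matrix-vector multiply
    return flop_per_frame * rate_hz / 1e9  # sustained GFLOP/s required

for d in (8, 39):   # roughly a VLT-class vs an ELT-class primary diameter
    print(f"D = {d:2d} m -> {mvm_gflops(d):10.1f} GFLOP/s")
```

    Since every factor in the product scales as D^2 or is constant, doubling the telescope diameter multiplies the reconstruction load by sixteen, which is the gap the many-core hardware must close.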

  7. KAPAO: a MEMS-based natural guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Severson, Scott A.; Choi, Philip I.; Contreras, Daniel S.; Gilbreth, Blaine N.; Littleton, Erik; McGonigle, Lorcan P.; Morrison, William A.; Rudy, Alex R.; Wong, Jonathan R.; Xue, Andrew; Spjut, Erik; Baranec, Christoph; Riddle, Reed

    2013-03-01

    We describe KAPAO, our project to develop and deploy a low-cost, remote-access, natural guide star adaptive optics (AO) system for the Pomona College Table Mountain Observatory (TMO) 1-meter telescope. We use a commercially available 140-actuator BMC MEMS deformable mirror and a version of the Robo-AO control software developed by Caltech and IUCAA. We have structured our development around the rapid building and testing of a prototype system, KAPAO-Alpha, while simultaneously designing our more capable final system, KAPAO-Prime. The main differences between these systems are the prototype's reliance on off-the-shelf optics and a single visible-light science camera versus the final design's improved throughput and capabilities due to the use of custom optics and dual-band, visible and near-infrared imaging. In this paper, we present the instrument design and on-sky closed-loop testing of KAPAO-Alpha as well as our plans for KAPAO-Prime. The primarily undergraduate-education nature of our partner institutions, both public (Sonoma State University) and private (Pomona and Harvey Mudd Colleges), has enabled us to engage physics, astronomy, and engineering undergraduates in all phases of this project. This material is based upon work supported by the National Science Foundation under Grant No. 0960343.

  8. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled from two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.

  9. Optical Design and Optimization of Translational Reflective Adaptive Optics Ophthalmoscopes

    NASA Astrophysics Data System (ADS)

    Sulai, Yusufu N. B.

    The retina serves as the primary detector for the biological camera that is the eye. It is composed of numerous classes of neurons and support cells that work together to capture and process an image formed by the eye's optics, which is then transmitted to the brain. Loss of sight due to retinal or neuro-ophthalmic disease can prove devastating to one's quality of life, and the ability to examine the retina in vivo is invaluable in the early detection and monitoring of such diseases. Adaptive optics (AO) ophthalmoscopy is a promising diagnostic tool in early stages of development, still facing significant challenges before it can become a clinical tool. The work in this thesis is a collection of projects with the overarching goal of broadening the scope and applicability of this technology. We begin by providing an optical design approach for AO ophthalmoscopes that reduces the aberrations that degrade the performance of the AO correction. Next, we demonstrate how to further improve image resolution through the use of amplitude pupil apodization and non-common path aberration correction. This is followed by the development of a viewfinder which provides a larger field of view for retinal navigation. Finally, we conclude with the development of an innovative non-confocal light detection scheme which improves the non-invasive visualization of retinal vasculature and reveals the cone photoreceptor inner segments in healthy and diseased eyes.

  10. High signal-to-noise-ratio electro-optical terahertz imaging system based on an optical demodulating detector array.

    PubMed

    Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring

    2009-11-01

    We present a 64 × 48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.

  11. Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras

    NASA Astrophysics Data System (ADS)

    Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi

    1997-04-01

    (Pb0.91La0.09)(Zr0.65Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. This shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If the shutter is applied to the diaphragm of a video camera, it can protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10^3. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 μs. The MTF reduction when placing the PLZT shutter in front of the visible video camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, which is the sensor resolution of the video camera. Moreover, we took visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally shut out in the 'OFF' state. From these tests, it has been found that the PLZT shutter is useful as the diaphragm of a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.

  12. NEW EXTINCTION AND MASS ESTIMATES FROM OPTICAL PHOTOMETRY OF THE VERY LOW MASS BROWN DWARF COMPANION CT CHAMAELEONTIS B WITH THE MAGELLAN AO SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Ya-Lin; Close, Laird M.; Males, Jared R.

    We used the Magellan adaptive optics system and its VisAO CCD camera to image the young low mass brown dwarf companion CT Chamaeleontis B for the first time at visible wavelengths. We detect it at r', i', z', and Y_S. With our new photometry and T_eff ∼ 2500 K derived from the shape of its K-band spectrum, we find that CT Cha B has A_V = 3.4 ± 1.1 mag, and a mass of 14-24 M_J according to the DUSTY evolutionary tracks and its 1-5 Myr age. The overluminosity of our r' detection indicates that the companion has significant Hα emission and a mass accretion rate ∼6 × 10^-10 M_☉ yr^-1, similar to some substellar companions. Proper motion analysis shows that another point source within 2'' of CT Cha A is not physical. This paper demonstrates how visible wavelength adaptive optics photometry (r', i', z', Y_S) allows for a better estimate of extinction, luminosity, and mass accretion rate of young substellar companions.

  13. New Extinction and Mass Estimates from Optical Photometry of the Very Low Mass Brown Dwarf Companion CT Chamaeleontis B with the Magellan AO System

    NASA Astrophysics Data System (ADS)

    Wu, Ya-Lin; Close, Laird M.; Males, Jared R.; Barman, Travis S.; Morzinski, Katie M.; Follette, Katherine B.; Bailey, Vanessa; Rodigas, Timothy J.; Hinz, Philip; Puglisi, Alfio; Xompero, Marco; Briguglio, Runa

    2015-03-01

    We used the Magellan adaptive optics system and its VisAO CCD camera to image the young low mass brown dwarf companion CT Chamaeleontis B for the first time at visible wavelengths. We detect it at r', i', z', and YS . With our new photometry and T eff ~ 2500 K derived from the shape of its K-band spectrum, we find that CT Cha B has AV = 3.4 ± 1.1 mag, and a mass of 14-24 MJ according to the DUSTY evolutionary tracks and its 1-5 Myr age. The overluminosity of our r' detection indicates that the companion has significant Hα emission and a mass accretion rate ~6 × 10-10 M ⊙ yr-1, similar to some substellar companions. Proper motion analysis shows that another point source within 2'' of CT Cha A is not physical. This paper demonstrates how visible wavelength adaptive optics photometry (r', i', z', YS ) allows for a better estimate of extinction, luminosity, and mass accretion rate of young substellar companions. This paper includes data gathered with the 6.5 m Magellan Clay Telescope at Las Campanas Observatory, Chile.

  14. Demonstrations of Optical Spectra with a Video Camera

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  15. Optical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S; Seppala, L; Gilmore, K

    2008-07-16

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of the telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m, and 0.69 m, and six interchangeable, broad-band filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting, and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.

  16. Experimental setup for camera-based measurements of electrically and optically stimulated luminescence of silicon solar cells and wafers.

    PubMed

    Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf

    2011-03-01

    We report in detail on the luminescence imaging setup developed over the last several years in our laboratory. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited electrically (electroluminescence) using a power supply for carrier injection, or optically (photoluminescence) using a laser as the illumination source. The luminescence emission arising from the radiative recombination of the stimulated charge carriers is measured spatially resolved using a camera. We give details of the various components, including cameras, optical filters for electro- and photoluminescence, the semiconductor laser, and the four-quadrant power supply. We compare a silicon charge-coupled device (CCD) camera with a back-illuminated silicon CCD camera comprising an electron multiplier gain and with a complementary metal-oxide-semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon, we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras at different operating conditions.
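
    Such a camera comparison typically combines shot noise, dark current, and read noise in quadrature. A hedged sketch (an assumed textbook noise model with illustrative electron counts, not the authors' measured values) showing why a low-read-noise sensor wins at faint luminescence signals:

```python
import math

# Generic camera SNR sketch with an assumed shot/dark/read noise model:
#   SNR = S / sqrt(S + D*t + r^2)
# S: signal electrons, D: dark current (e-/s), t: exposure (s), r: read noise (e-).
def snr(signal_e, dark_e_per_s, exposure_s, read_noise_e):
    noise = math.sqrt(signal_e + dark_e_per_s * exposure_s + read_noise_e ** 2)
    return signal_e / noise

# Illustrative comparison at a faint luminescence signal of 500 e-:
print(snr(500, 10, 1.0, 50))   # high read noise: read-noise limited
print(snr(500, 10, 1.0, 2))    # low read noise: close to shot-noise limited
```

    At bright signals the shot-noise term dominates and the cameras converge; the comparison in the setup matters most in the low-signal regime shown here.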

  17. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightnesses, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.
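
    Imager dynamic range in dB is conventionally 20·log10 of the ratio of the brightest to faintest resolvable signal, so the quoted figures translate into intensity ratios as in this small sketch (the dB values are from the abstract; the conversion convention is the standard imager definition, stated here as an assumption):

```python
# Convert imager dynamic range in dB to an intensity ratio,
# using the standard convention DR_dB = 20 * log10(I_max / I_min).
def db_to_ratio(db):
    return 10 ** (db / 20.0)

# Figures quoted in the abstract: bare CMOS sensor vs the CAOS-CMOS camera.
cmos_ratio = db_to_ratio(51.3)     # ~367:1
caos_ratio = db_to_ratio(82.06)    # ~12,700:1
print(f"CMOS alone : {cmos_ratio:8.0f}:1")
print(f"CAOS-CMOS  : {caos_ratio:8.0f}:1")
```

    The roughly 30 dB gain thus corresponds to about a 35-fold increase in the usable brightness ratio between the dimmest and brightest simultaneously imaged targets.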

  18. Dust deposition on the decks of the Mars Exploration Rovers: 10 years of dust dynamics on the Panoramic Camera calibration targets.

    PubMed

    Kinch, Kjartan M; Bell, James F; Goetz, Walter; Johnson, Jeffrey R; Joseph, Jonathan; Madsen, Morten Bo; Sohl-Dickstein, Jascha

    2015-05-01

    The Panoramic Cameras on NASA's Mars Exploration Rovers have each returned more than 17,000 images of their calibration targets. In order to make optimal use of this data set for reflectance calibration, a correction must be made for the presence of air fall dust. Here we present an improved dust correction procedure based on a two-layer scattering model, and we present a dust reflectance spectrum derived from long-term trends in the data set. The dust on the calibration targets appears brighter than dusty areas of the Martian surface. We derive detailed histories of dust deposition and removal revealing two distinct environments: At the Spirit landing site, half the year is dominated by dust deposition, the other half by dust removal, usually in brief, sharp events. At the Opportunity landing site the Martian year has a semiannual dust cycle with dust removal happening gradually throughout two removal seasons each year. The highest observed optical depth of settled dust on the calibration target is 1.5 on Spirit and 1.1 on Opportunity (at 601 nm). We derive a general prediction for dust deposition rates of 0.004 ± 0.001 in units of surface optical depth deposited per sol (Martian solar day) per unit atmospheric optical depth. We expect this procedure to lead to improved reflectance-calibration of the Panoramic Camera data set. In addition, it is easily adapted to similar data sets from other missions in order to deliver improved reflectance calibration as well as data on dust reflectance properties and deposition and removal history.
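
    The quoted deposition-rate prediction can be applied directly: the settled-dust optical depth grows by about 0.004 per sol per unit atmospheric optical depth when no removal events occur. A minimal sketch assuming a constant atmospheric optical depth (an idealization; the paper derives the rate from long-term trends with removal events interleaved):

```python
# Sketch applying the paper's deposition-rate figure. Assumed idealization:
# constant atmospheric optical depth and no dust-removal events.
RATE = 0.004   # settled surface optical depth per sol per unit atmospheric tau

def settled_tau(n_sols, tau_atm):
    """Predicted optical depth of settled dust after n_sols Martian days."""
    return RATE * tau_atm * n_sols

# E.g. 100 sols under a moderately dusty sky (tau_atm = 1.0):
print(settled_tau(100, 1.0))
```

    At this rate, the maximum observed settled optical depths of 1.5 (Spirit) and 1.1 (Opportunity) correspond to many hundreds of sols of accumulation between removal events.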

  19. Dust deposition on the decks of the Mars Exploration Rovers: 10 years of dust dynamics on the Panoramic Camera calibration targets

    PubMed Central

    Bell, James F.; Goetz, Walter; Johnson, Jeffrey R.; Joseph, Jonathan; Madsen, Morten Bo; Sohl‐Dickstein, Jascha

    2015-01-01

    Abstract The Panoramic Cameras on NASA's Mars Exploration Rovers have each returned more than 17,000 images of their calibration targets. In order to make optimal use of this data set for reflectance calibration, a correction must be made for the presence of air fall dust. Here we present an improved dust correction procedure based on a two‐layer scattering model, and we present a dust reflectance spectrum derived from long‐term trends in the data set. The dust on the calibration targets appears brighter than dusty areas of the Martian surface. We derive detailed histories of dust deposition and removal revealing two distinct environments: At the Spirit landing site, half the year is dominated by dust deposition, the other half by dust removal, usually in brief, sharp events. At the Opportunity landing site the Martian year has a semiannual dust cycle with dust removal happening gradually throughout two removal seasons each year. The highest observed optical depth of settled dust on the calibration target is 1.5 on Spirit and 1.1 on Opportunity (at 601 nm). We derive a general prediction for dust deposition rates of 0.004 ± 0.001 in units of surface optical depth deposited per sol (Martian solar day) per unit atmospheric optical depth. We expect this procedure to lead to improved reflectance‐calibration of the Panoramic Camera data set. In addition, it is easily adapted to similar data sets from other missions in order to deliver improved reflectance calibration as well as data on dust reflectance properties and deposition and removal history. PMID:27981072

  20. Development, Deployment, and Cost Effectiveness of a Self-Administered Stereo Non Mydriatic Automated Retinal Camera (SNARC) Containing Automated Retinal Lesion (ARL) Detection Using Adaptive Optics

    DTIC Science & Technology

    2010-10-01

    Requirements: Application Server: BEA WebLogic Express 9.2 or higher; Java v5; Apache Struts v2; Hibernate v2; C3PO; SQL*Net client / JDBC. Database Server... designed for the desktop; an HTML and JavaScript browser-based front end designed for mobile smartphones; a Java-based framework utilizing Apache... Technology Requirements: The recommended technologies are as follows (Technology / Use / Requirements): Java application: provides the backend application

  1. Stereo electro-optical tracker study for the measurement of model deformations at the National Transonic Facility

    NASA Astrophysics Data System (ADS)

    Hertel, R. J.; Hoilman, K. A.

    1982-01-01

    The effects of model vibration, camera and window nonlinearities, and aerodynamic disturbances in the optical path on the measurement of target position are examined. Window distortion, temperature and pressure changes, laminar and turbulent boundary layers, shock waves, target intensity, and target vibration are also studied. A general computer program was developed to trace optical rays through these disturbances. The use of a charge injection device camera as an alternative to the image dissector camera was examined.

  2. Space telescope optical telescope assembly/scientific instruments. Phase B: -Preliminary design and program definition study; Volume 2A: Planetary camera report

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Development of the F/48, F/96 Planetary Camera for the Large Space Telescope is discussed. Instrument characteristics, optical design, and CCD camera submodule thermal design are considered along with structural subsystem and thermal control subsystem. Weight, electrical subsystem, and support equipment requirements are also included.

  3. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    High-precision acquisition, tracking, and pointing (ATP) is one of the key techniques of laser communication. The spot-detecting camera detects the direction of the beacon in the laser communication link, providing the ATP system with the position of the communication terminal. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system requires high-precision target detection: positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to compute the position of the light spot on the detector. When the beacon intensity is moderate, the centroid calculation is precise, but the beacon intensity varies greatly during communication because of distance, atmospheric scintillation, weather, etc. The detector output signal is insufficient when the camera underexposes the beacon at low light intensity, and saturated when it overexposes the beacon at high light intensity. In either case the accuracy of the centroid algorithm degrades, and the positioning accuracy of the camera is markedly reduced. To maintain accuracy, space-based cameras should regulate exposure time in real time according to light intensity. An adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described that uses the adaptive-exposure algorithm to adapt its exposure time. Test results from an imaging experiment system verify the design. Experimental results prove that this design restrains the loss of positioning accuracy under changing light intensity, so the camera maintains stable, high positioning accuracy during communication.
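    The centroid positioning and exposure regulation described above can be sketched as follows; the thresholds and full-scale value are hypothetical, not taken from the paper:

```python
def spot_centroid(img, bias=0.0):
    """Intensity-weighted centroid (row, col) of a spot image given as a
    2-D list of pixel values; returns None when there is no usable signal."""
    total = sr = sc = 0.0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            w = max(v - bias, 0.0)   # subtract background bias, clip at zero
            total += w
            sr += r * w
            sc += c * w
    if total == 0.0:
        return None  # underexposed: no signal above bias
    return sr / total, sc / total

def adjust_exposure(img, t_exp, full_scale=1023, lo=0.3, hi=0.9):
    """One step of a simple adaptive-exposure rule (illustrative only):
    halve the exposure when the peak pixel nears saturation, double it
    when the frame is underexposed, otherwise keep it unchanged."""
    peak = max(max(row) for row in img) / full_scale
    if peak >= hi:
        return t_exp * 0.5   # overexposed -> shorter integration
    if peak <= lo:
        return t_exp * 2.0   # underexposed -> longer integration
    return t_exp
```

    Keeping the peak in the mid-range band is what keeps the centroid estimate, and hence the pointing direction, stable as beacon intensity fluctuates.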

  4. High-angular-resolution NIR astronomy with large arrays (SHARP I and SHARP II)

    NASA Astrophysics Data System (ADS)

    Hofmann, Reiner; Brandl, Bernhard; Eckart, Andreas; Eisenhauer, Frank; Tacconi-Garman, Lowell E.

    1995-06-01

    SHARP I and SHARP II are near-infrared cameras for high-angular-resolution imaging. Both cameras are built around a 256 × 256 pixel NICMOS 3 HgCdTe array from Rockwell, sensitive in the 1-2.5 μm range. With a 0.05″/pixel scale, they can produce diffraction-limited K-band images at 4-m-class telescopes. For a 256 × 256 array, this pixel scale results in a field of view of 12.8″ × 12.8″, which is well suited to the observation of galactic and extragalactic near-infrared sources. Photometric and low-resolution spectroscopic capabilities are added by photometric band filters (J, H, K), narrow-band filters (λ/Δλ ≈ 100) for selected spectral lines, and a CVF (λ/Δλ ≈ 70). A cold shutter permits short exposure times down to about 10 ms. The data acquisition electronics permanently accepts the maximum frame rate of 8 Hz, which is defined by the detector time constants (data rate 1 Mbyte/s). SHARP I was especially designed for speckle observations at ESO's 3.5 m New Technology Telescope and has been in operation since 1991. SHARP II has been used at ESO's 3.6 m telescope together with the adaptive optics system COME-ON+ since 1993. A new version of SHARP II is presently under test, which incorporates exchangeable camera optics for observations at scales of 0.035, 0.05, and 0.1″/pixel. The first scale extends diffraction-limited observations down to the J band, while the last provides a larger field of view. To demonstrate the power of the cameras, images of the galactic center obtained with SHARP I and images of the R136 region in 30 Doradus observed with SHARP II are presented.

  5. Computerized digital dermoscopy.

    PubMed

    Gewirtzman, A J; Braun, R P

    2003-01-01

    Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to get second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products begins to grow, so do the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments which can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated as well as the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.

  6. The CAFADIS camera: a new tomographic wavefront sensor for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. M.; Femenía, B.; Montilla, I.; Rodríguez-Ramos, L. F.; Marichal-Hernández, J. G.; Lüke, J. P.; López, R.; Díaz, J. J.; Martín, Y.

    The CAFADIS camera is a new wavefront sensor (WFS) patented by the Universidad de La Laguna. CAFADIS is based on the concept of the plenoptic camera originally proposed by Adelson and Wang [Single lens stereo with a plenoptic camera, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (1992)], and its most salient feature is its ability to simultaneously measure wavefront maps and distances to objects [Wavefront and distance measurements using the CAFADIS camera, in Astronomical Telescopes, Marseille (2008)]. This makes CAFADIS an interesting alternative for LGS-based AO systems, as it can measure from an LGS beacon both the atmospheric turbulence wavefront and the distance to the beacon, thus removing the need for an NGS defocus sensor to probe changes in the distance to the LGS beacon due to drifts of the mesospheric Na layer. In principle, the concept can also be employed to recover 3D profiles of the Na layer, allowing optimization of the measurement of the distance to the LGS beacon. We are currently investigating the possibility of extending the plenoptic WFS into a tomographic wavefront sensor. Simulations are shown of a plenoptic WFS operated within an LGS-based AO system for the recovery of wavefront maps at different heights. The preliminary results presented here show the tomographic ability of CAFADIS.

  7. Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.

    2017-09-01

    It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optics correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also takes far fewer iteration steps in comparison with conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.

  8. Orbital docking system centerline color television camera system test

    NASA Technical Reports Server (NTRS)

    Mongan, Philip T.

    1993-01-01

    A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is adequate optically for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for docking lights to provide maximum shadow management and that centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed and target alignment aids should be selected during simulated docking runs.

  9. Defining ray sets for the analysis of lenslet-based optical systems including plenoptic cameras and Shack-Hartmann wavefront sensors

    NASA Astrophysics Data System (ADS)

    Moore, Lori

    Plenoptic cameras and Shack-Hartmann wavefront sensors are lenslet-based optical systems that do not form a conventional image. The addition of a lens array into these systems allows for the aberrations generated by the combination of the object and the optical components located prior to the lens array to be measured or corrected with post-processing. This dissertation provides a ray selection method to determine the rays that pass through each lenslet in a lenslet-based system. This first-order, ray trace method is developed for any lenslet-based system with a well-defined fore optic, where in this dissertation the fore optic is all of the optical components located prior to the lens array. For example, in a plenoptic camera the fore optic is a standard camera lens. Because a lens array at any location after the exit pupil of the fore optic is considered in this analysis, it is applicable to both plenoptic cameras and Shack-Hartmann wavefront sensors. Only a generic, unaberrated fore optic is considered, but this dissertation establishes a framework for considering the effect of an aberrated fore optic in lenslet-based systems. The rays from the fore optic that pass through a lenslet placed at any location after the fore optic are determined. This collection of rays is reduced to three rays that describe the entire lenslet ray set. The lenslet ray set is determined at the object, image, and pupil planes of the fore optic. The consideration of the apertures that define the lenslet ray set for an on-axis lenslet leads to three classes of lenslet-based systems. Vignetting of the lenslet rays is considered for off-axis lenslets. Finally, the lenslet ray set is normalized into terms similar to the field and aperture vector used to describe the aberrated wavefront of the fore optic. 
The analysis in this dissertation is complementary to other first-order models that have been developed for a specific plenoptic camera layout or Shack-Hartmann wavefront sensor application. This general analysis determines the location where the rays of each lenslet pass through the fore optic establishing a framework to consider the effect of an aberrated fore optic in a future analysis.

  10. Close-range photogrammetry with video cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
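    The lens-distortion correction step can be sketched with a simple radial polynomial model; the plumb-line method estimates coefficients of this kind, though the coefficient values below are hypothetical:

```python
def undistort_point(x, y, k1, k2=0.0, xc=0.0, yc=0.0):
    """Correct a radially distorted image point using the polynomial model
    r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4), with coordinates taken relative
    to the principal point (xc, yc). In practice the coefficients k1, k2
    come from a calibration such as the plumb-line method; the values used
    in the example below are hypothetical."""
    dx, dy = x - xc, y - yc
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xc + dx * scale, yc + dy * scale
```

    Points at the principal point are unchanged, while points farther out are shifted radially by an amount that grows with distance, which is the signature of barrel or pincushion distortion.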

  11. Close-Range Photogrammetry with Video Cameras

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1983-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  12. Effect of indocyanine green angiography using infrared fundus camera on subsequent dark adaptation and electroretinogram.

    PubMed

    Wen, Feng; Yu, Minzhong; Wu, Dezheng; Ma, Juanmei; Wu, Lezheng

    2002-07-01

    To observe the effect of indocyanine green angiography (ICGA) with an infrared fundus camera on subsequent dark adaptation and the Ganzfeld electroretinogram (ERG), the ERGs of 38 eyes with different retinal diseases were recorded before and after ICGA during a 40-min dark adaptation period. ICGA was performed with a Topcon 50IA retina camera. Ganzfeld ERG was recorded with a Neuropack II evoked response recorder. The results showed that ICGA did not affect the latencies and amplitudes of the ERG rod response, cone response, and mixed maximum response (p>0.05). This suggests that ICGA using an infrared fundus camera can be performed prior to recording the Ganzfeld ERG.

  13. Optical performance analysis of plenoptic camera systems

    NASA Astrophysics Data System (ADS)

    Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas

    2014-09-01

    Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
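    The spatial/angular trade-off noted above can be made concrete with a back-of-the-envelope calculation; the sensor and lenslet sizes below are hypothetical:

```python
def plenoptic_resolution(sensor_px, lenslet_px):
    """Split a sensor of sensor_px = (height, width) pixels into square
    lenslet macropixels of lenslet_px x lenslet_px pixels each. The
    effective spatial resolution equals the lenslet count, while the
    number of angular samples per scene point equals the pixel count
    under one lenslet: the same pixel budget is divided between the two."""
    h, w = sensor_px
    spatial = (h // lenslet_px, w // lenslet_px)
    angular_samples = lenslet_px * lenslet_px
    return spatial, angular_samples
```

    A 12-megapixel sensor with 10-pixel lenslets, for instance, yields only a 300 × 400 spatial image but 100 angular samples per point, which is the degree-of-freedom choice the paper's simulation tool is built to explore.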

  14. Analysis of the effect on optical equipment caused by solar position in target flight measure

    NASA Astrophysics Data System (ADS)

    Zhu, Shun-hua; Hu, Hai-bin

    2012-11-01

    Optical equipment is widely used to measure flight parameters in target flight performance tests, but the equipment is sensitive to the sun's rays. To prevent the sun's rays from shining directly into the camera lens of the optical equipment while target flight parameters are measured, the angle between the observation direction and the line connecting the camera lens to the sun must be kept large. This article introduces a method for calculating the solar azimuth and altitude at the optical equipment for any time and place on Earth, a model of the equipment's observation direction, and a model for calculating the angle between the observation direction and the line connecting the camera lens and the sun. A simulation of the effect of solar position on the optical equipment at different times, dates, and months, and for different target flight directions, is also given.
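    A minimal sketch of this geometry, using simplified declination and hour-angle formulas (accurate to roughly a degree, which suffices to check a sun-avoidance constraint; these are textbook approximations, not the article's exact models):

```python
import math

def sun_elevation_azimuth(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees, from the
    standard declination approximation and the hour angle. Azimuth is
    measured clockwise from north. Sketch only: ~1 degree accuracy."""
    decl = math.radians(-23.44) * math.cos(
        math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    az = math.atan2(-math.sin(hour_angle) * math.cos(decl),
                    math.cos(lat) * math.sin(decl)
                    - math.sin(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(el), math.degrees(az) % 360.0

def sun_separation_deg(cam_az, cam_el, sun_az, sun_el):
    """Angle (degrees) between the camera boresight and the sun direction;
    the quantity that must stay large to keep sunlight out of the lens."""
    a1, e1, a2, e2 = map(math.radians, (cam_az, cam_el, sun_az, sun_el))
    cosang = (math.sin(e1) * math.sin(e2)
              + math.cos(e1) * math.cos(e2) * math.cos(a1 - a2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
```

    A test scheduler would evaluate `sun_separation_deg` along the planned observation direction over the measurement window and reject geometries where the separation falls below a chosen exclusion angle.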

  15. Potential for application of an acoustic camera in particle tracking velocimetry.

    PubMed

    Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim

    2008-11-01

    We explored the potential and limitations for applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases reduce with the increase in the mean particle velocity and approach minimum as the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.

  16. Navigating surgical fluorescence cameras using near-infrared optical tracking.

    PubMed

    van Oosterom, Matthias; den Houting, David; van de Velde, Cornelis; van Leeuwen, Fijs

    2018-05-01

    Fluorescence guidance facilitates real-time intraoperative visualization of the tissue of interest. However, due to attenuation, the application of fluorescence guidance is restricted to superficial lesions. To overcome this shortcoming, we have previously applied three-dimensional surgical navigation to position the fluorescence camera in reach of the superficial fluorescent signal. Unfortunately, in open surgery, the near-infrared (NIR) optical tracking system (OTS) used for navigation also induced an interference during NIR fluorescence imaging. In an attempt to support future implementation of navigated fluorescence cameras, different aspects of this interference were characterized and solutions were sought after. Two commercial fluorescence cameras for open surgery were studied in (surgical) phantom and human tissue setups using two different NIR OTSs and one OTS simulating light-emitting diode setup. Following the outcome of these measurements, OTS settings were optimized. Measurements indicated the OTS interference was caused by: (1) spectral overlap between the OTS light and camera, (2) OTS light intensity, (3) OTS duty cycle, (4) OTS frequency, (5) fluorescence camera frequency, and (6) fluorescence camera sensitivity. By optimizing points 2 to 4, navigation of fluorescence cameras during open surgery could be facilitated. Optimization of the OTS and camera compatibility can be used to support navigated fluorescence guidance concepts. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  17. Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM-FSO communication.

    PubMed

    Li, Jin; Zhang, Min; Wang, Danshi; Wu, Shaojun; Zhan, Yueying

    2018-04-16

    A novel joint atmospheric turbulence (AT) detection and adaptive demodulation technique based on a convolutional neural network (CNN) is proposed for OAM-based free-space optical (FSO) communication. The AT detection accuracy (ATDA) and the adaptive demodulation accuracy (ADA) of 4-OAM, 8-OAM, and 16-OAM FSO communication systems over computer-simulated 1000-m turbulent channels with 4, 6, and 10 kinds of classic ATs are investigated, respectively. Compared to previous approaches using self-organizing maps (SOM), deep neural networks (DNN), and other CNNs, the proposed CNN achieves the highest ATDA and ADA due to advanced multi-layer representation learning without feature extractors carefully designed by numerous experts. For AT detection, the ATDA of the CNN is near 95.2% for 6 kinds of typical ATs, in cases of both weak and strong turbulence. For the adaptive demodulation of optical vortices (OV) carrying OAM modes, the ADA of the CNN is about 99.8% for the 8-OAM system over the computer-simulated 1000-m free-space strong turbulent link. In addition, the effects of image resolution, iteration number, activation functions, and the structure of the CNN are also studied comprehensively. The proposed technique has the potential to be embedded in charge-coupled device (CCD) cameras deployed at the receiver to improve the reliability and flexibility of OAM-FSO communication.

  18. Wavefront Sensing With Switched Lenses for Defocus Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    In an alternative hardware design for an apparatus used in image-based wavefront sensing, defocus diversity is introduced by means of fixed lenses that are mounted in a filter wheel (see figure) so that they can be alternately switched into a position in front of the focal plane of an electronic camera recording the image formed by the optical system under test. [The terms image-based, wavefront sensing, and defocus diversity are defined in the first of the three immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] Each lens in the filter wheel is designed so that the optical effect of placing it at the assigned position is equivalent to the optical effect of translating the camera a specified defocus distance along the optical axis. Heretofore, defocus diversity has been obtained by translating the imaging camera along the optical axis to various defocus positions. Because data must be taken at multiple, accurately measured defocus positions, it is necessary to mount the camera on a precise translation stage that must be calibrated for each defocus position and/or to use an optical encoder for measurement and feedback control of the defocus positions. Additional latency is introduced into the wavefront sensing process as the camera is translated to the various defocus positions. Moreover, if the optical system under test has a large focal length, the required defocus values are large, making it necessary to use a correspondingly bulky translation stage. By eliminating the need for translation of the camera, the alternative design simplifies and accelerates the wavefront-sensing process. This design is cost-effective in that the filter-wheel/lens mechanism can be built from commercial catalog components. After initial calibration of the defocus value of each lens, a selected defocus value is introduced by simply rotating the filter wheel to place the corresponding lens in front of the camera. 
The rotation of the wheel can be automated by use of a motor drive, and further calibration is not necessary. Because a camera-translation stage is no longer needed, the size of the overall apparatus can be correspondingly reduced.

  19. Red ball ranging optimization based on dual camera ranging method

    NASA Astrophysics Data System (ADS)

    Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung

    2018-05-01

    In this paper, the process by which a NAO robot positions and moves to a target red ball through its camera system is analyzed and improved using a dual-camera ranging method. The single-camera ranging method adopted by the NAO robot was first studied and experimented with. Since the error of the current NAO robot does not arise from a single variable, the experiments were divided into two parts, forward ranging and backward ranging, to obtain more accurate single-camera ranging data. Two USB cameras were then used in our experiments, which adopted the Hough circle method to identify the ball and the HSV color space model to identify the red color. Our results showed that the dual-camera ranging method reduced the variance of the error in ball tracking from 0.68 to 0.20.
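    Dual-camera ranging of this kind reduces to triangulation from the disparity between the two rectified views; a minimal sketch with hypothetical camera parameters:

```python
def stereo_range(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth from a rectified stereo pair: Z = f * B / d, where f is the
    focal length in pixels, B the baseline between the two cameras in
    meters, and d the horizontal disparity of the ball center between
    the left and right images. All parameter values in the example
    below are hypothetical, not the paper's calibration."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    return focal_px * baseline_m / disparity
```

    Because range error scales with disparity error, the ball center delivered by the Hough circle detection in each view directly bounds the achievable ranging variance.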

  20. Seeing in a different light—using an infrared camera to teach heat transfer and optical phenomena

    NASA Astrophysics Data System (ADS)

    Pei Wong, Choun; Subramaniam, R.

    2018-05-01

    The infrared camera is a useful tool in physics education to ‘see’ in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.

  1. Seeing in a Different Light--Using an Infrared Camera to Teach Heat Transfer and Optical Phenomena

    ERIC Educational Resources Information Center

    Wong, Choun Pei; Subramaniam, R.

    2018-01-01

    The infrared camera is a useful tool in physics education to 'see' in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.

  2. Condenser for illuminating a ringfield camera with synchrotron emission light

    DOEpatents

    Sweatt, W.C.

    1996-04-30

    The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, that images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors. 9 figs.

  3. Condenser for illuminating a ringfield camera with synchrotron emission light

    DOEpatents

    Sweatt, William C.

    1996-01-01

    The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, that images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors.

  4. Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes

    USGS Publications Warehouse

    Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter

    2013-01-01

    Ultraviolet (UV) camera systems represent an exciting new technology for measuring two dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at time scales of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
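    The standard Beer-Lambert retrieval that the simulations show breaking down for distant or optically thick plumes can be sketched as follows; the intensities and the effective absorption cross section below are hypothetical:

```python
import math

def so2_column_beer_lambert(i_plume, i_background, sigma_eff):
    """SO2 column density from a UV camera pixel under a simple
    Beer-Lambert model: apparent absorbance tau = -ln(I / I0), and
    column = tau / sigma_eff, with sigma_eff the effective absorption
    cross section (cm^2/molecule) for the camera's passband. The paper
    shows this model underestimates SO2 for distant or optically thick
    plumes, where in- and around-plume radiative transfer matters."""
    tau = -math.log(i_plume / i_background)
    return tau / sigma_eff
```

    The non-linearity the paper reports means that a single `sigma_eff` calibrated on thin, nearby plumes cannot be reused unchanged for thick or distant ones, which is why the authors couple the imagery with moderate-resolution DOAS spectra.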

  5. Combustion pinhole-camera system

    DOEpatents

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  6. Combustion pinhole camera system

    DOEpatents

    Witte, A.B.

    1984-02-21

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.

  7. Combustion pinhole camera system

    DOEpatents

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizes a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  8. Adaptive DFT-Based Interferometer Fringe Tracking

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2005-12-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.
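
    The sliding-window DFT idea can be illustrated with a toy scan. This sketch is not the IOTA implementation; all parameters (carrier frequency, packet width, noise level) are assumptions. It localizes a white-light fringe packet by finding the window whose DFT magnitude at the assumed carrier frequency peaks:

```python
import numpy as np

n = 1024
x = np.arange(n)
center = 400.0                       # true packet position in samples (assumed)
carrier = 0.12                       # fringe frequency, cycles/sample (assumed)
envelope = np.exp(-0.5 * ((x - center) / 40.0) ** 2)
rng = np.random.default_rng(0)
scan = envelope * np.cos(2 * np.pi * carrier * x) + 0.05 * rng.standard_normal(n)

win = 64
k = np.exp(-2j * np.pi * carrier * np.arange(win))    # DFT kernel at the carrier
power = np.array([abs(np.dot(scan[i:i + win], k))     # sliding-window DFT magnitude
                  for i in range(n - win)])
estimate = power.argmax() + win / 2                   # center of the peak window
print(estimate)                                        # close to 400
```

In a real tracker this estimate would drive the piezo scanners to null the optical path difference on each baseline.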

  9. Direct imaging of exoplanets in the habitable zone with adaptive optics

    NASA Astrophysics Data System (ADS)

    Males, Jared R.; Close, Laird M.; Guyon, Olivier; Morzinski, Katie; Puglisi, Alfio; Hinz, Philip; Follette, Katherine B.; Monnier, John D.; Tolls, Volker; Rodigas, Timothy J.; Weinberger, Alycia; Boss, Alan; Kopon, Derek; Wu, Ya-lin; Esposito, Simone; Riccardi, Armando; Xompero, Marco; Briguglio, Runa; Pinna, Enrico

    2014-07-01

    One of the primary goals of exoplanet science is to find and characterize habitable planets, and direct imaging will play a key role in this effort. Though imaging a true Earth analog is likely out of reach from the ground, the coming generation of giant telescopes will find and characterize many planets in and near the habitable zones (HZs) of nearby stars. Radial velocity and transit searches indicate that such planets are common, but imaging them will require achieving extreme contrasts at very small angular separations, posing many challenges for adaptive optics (AO) system design. Giant planets in the HZ may even be within reach with the latest generation of high-contrast imagers for a handful of very nearby stars. Here we will review the definition of the HZ, and the characteristics of detectable planets there. We then review some of the ways that direct imaging in the HZ will be different from the typical exoplanet imaging survey today. Finally, we present preliminary results from our observations of the HZ of α Centauri A with the Magellan AO system's VisAO and Clio2 cameras.

  10. SOUL: the Single conjugated adaptive Optics Upgrade for LBT

    NASA Astrophysics Data System (ADS)

    Pinna, E.; Esposito, S.; Hinz, P.; Agapito, G.; Bonaglia, M.; Puglisi, A.; Xompero, M.; Riccardi, A.; Briguglio, R.; Arcidiacono, C.; Carbonaro, L.; Fini, L.; Montoya, M.; Durney, O.

    2016-07-01

    We present here SOUL: the Single conjugated adaptive Optics Upgrade for LBT. SOUL will upgrade the wavefront sensors, replacing the existing CCD detector with an EMCCD camera and the rest of the system, in order to enable closed-loop operation at a faster cycle rate and with a higher number of slopes. Thanks to the reduced noise and the higher pixel count and frame rate, we expect a gain (for a given SR) of around 1.5-2 magnitudes at all wavelengths (70% in I-band and 0.6 arcsec seeing), and the sky coverage will be multiplied by a factor of 5 at all galactic latitudes. By upgrading the SCAO systems at all 4 focal stations, SOUL will provide these benefits in 2017 to the LBTI interferometer and in 2018 to the 2 LUCI NIR spectro-imagers. In the same year the SOUL correction will also be exploited by the new generation of LBT instruments: V-SHARK, SHARK-NIR and iLocater.

  11. Latest developments on the loop control system of AdOpt@TNG

    NASA Astrophysics Data System (ADS)

    Ghedina, Adriano; Gaessler, Wolfgang; Cecconi, Massimo; Ragazzoni, Roberto; Puglisi, Alfio T.; De Bonis, Fulvio

    2004-10-01

    The Adaptive Optics System of the Galileo Telescope (AdOpt@TNG) is the only adaptive optics system mounted on a telescope which uses a pyramid wavefront sensor, and it has already demonstrated its potential on sky. Recently AdOpt@TNG has undergone deep changes at the level of its higher-order control system. The CCD and the Real Time Computer (RTC) have been replaced entirely. Because of its frequent breakdowns, the VME-based RTC was replaced by a dual Pentium processor PC running Real-Time Linux. The WFS CCD, which feeds the images to the RTC, was changed to an off-the-shelf camera system from SciMeasure with an EEV39 80x80 pixel detector. While the APD-based tip/tilt loop has demonstrated the sky quality at the TNG site and the ability of TNG to take advantage of this quality, up to the diffraction limit, the high-order system has been fully re-developed, and the performance of the closed loop is under evaluation in order to offer the best-performing system to the astronomical community.

  12. Adaptive DFT-Based Interferometer Fringe Tracking

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2005-12-01

    An automatic interferometer fringe tracking system has been developed, implemented, and tested at the Infrared Optical Telescope Array (IOTA) Observatory at Mount Hopkins, Arizona. The system can minimize the optical path differences (OPDs) for all three baselines of the Michelson stellar interferometer at IOTA. Based on sliding window discrete Fourier-transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on offline data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. The adaptive DFT-based tracking algorithm should be applicable to other systems where there is a need to detect or track a signal with an approximately constant-frequency carrier pulse. One example of such an application might be to the field of thin-film measurement by ellipsometry, using a broadband light source and a Fourier-transform spectrometer to detect the resulting fringe patterns.

  13. Wide field/planetary camera optics study. [for the large space telescope

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Design feasibility of the baseline optical design concept was established for the wide field/planetary camera (WF/PC) and will be used with the space telescope (ST) to obtain high angular resolution astronomical information over a wide field. The design concept employs internal optics to relay the ST image to a CCD detector system. Optical design performance predictions, sensitivity and tolerance analyses, manufacturability of the optical components, and acceptance testing of the two mirror Cassegrain relays are discussed.

  14. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
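
    The heterodyne principle behind each pixel can be sketched as follows. The frequencies below are scaled-down stand-ins for real optical carriers, and this is not the CIRIC signal chain: a square-law detector sees |E_sig + E_LO|^2, and the cross term beats at the intermediate frequency f_sig - f_LO, carrying the spectral/range information.

```python
import numpy as np

fs = 1_000_000.0                       # sample rate, Hz (assumed)
t = np.arange(4096) / fs
f_sig, f_lo = 210_000.0, 200_000.0     # stand-ins for optical frequencies
e_sig = 0.1 * np.cos(2 * np.pi * f_sig * t)
e_lo = 1.0 * np.cos(2 * np.pi * f_lo * t)
photocurrent = (e_sig + e_lo) ** 2     # square-law (intensity) detection

spectrum = np.abs(np.fft.rfft(photocurrent))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 1_000.0) & (freqs < 50_000.0)   # IF band-pass, rejects DC and 2f terms
beat = freqs[band][spectrum[band].argmax()]
print(beat)                            # ~10 kHz = f_sig - f_LO
```

The strong local oscillator also amplifies the weak signal term, which is why heterodyne receivers approach shot-noise-limited sensitivity.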

  15. Optical methods for the optimization of system SWaP-C using aspheric components and advanced optical polymers

    NASA Astrophysics Data System (ADS)

    Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell

    2013-06-01

    We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.

  16. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2012-09-01

    An overview of instrumentation for the Large Binocular Telescope (LBT) is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' x 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the left and right direct F/15 Gregorian foci incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 2000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCI), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at the left and right front bent F/15 Gregorian foci and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multiobject spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development that can utilize the full 23-m baseline of the LBT include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). LBTI is currently undergoing commissioning on the LBT and utilizing the installed adaptive secondary mirrors in both single- sided and two-sided beam combination modes. In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. Over the past four years the LBC pair, LUCI1, and MODS1 have been commissioned and are now scheduled for routine partner science observations. The delivery of both LUCI2 and MODS2 is anticipated before the end of 2012. 
The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  17. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and the C-RED 2. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible thanks to the use of an e-APD infrared focal plane array, a truly disruptive technology in imagery. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current, and readout speed based on the SNAKE SWIR detector from Sofradir. The camera is called C-RED 2. The C-RED 2 characteristics and performance will be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.

  18. Modeling of digital information optical encryption system with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should make it possible to build a high-speed optical encryption system. Results of modeling of a digital information optical encryption system with spatially incoherent illumination are presented. Input information is displayed on the first SLM and the encryption element on the second SLM. Factors taken into account are: resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), low bit error rate, and high cryptographic strength.
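
    Under spatially incoherent illumination an optical system is linear in intensity, so the encryption stage can be modeled as convolution of the input image with a key point-spread function, and decryption as inverse filtering with the known key. A minimal noise-free sketch with an invented key and input pattern (not the authors' actual scheme or parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
plain = np.zeros((n, n))
plain[20:40, 24:44] = 1.0            # assumed input pattern
key = rng.random((n, n))             # assumed random encryption PSF

K = np.fft.fft2(key)
# Encryption: optical convolution of the input with the key PSF (circular):
cipher = np.real(np.fft.ifft2(np.fft.fft2(plain) * K))
# Decryption: inverse filtering with the known key spectrum:
recovered = np.real(np.fft.ifft2(np.fft.fft2(cipher) / K))

print(np.allclose(recovered, plain, atol=1e-6))   # exact recovery in the noise-free case
```

With camera noise and sampling included, plain inverse filtering would be replaced by a regularized (e.g. Wiener) filter.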

  19. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
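
    The core weighted-averaging merge of radiance maps can be sketched as follows. The hat-shaped weight and the exposure values are assumptions for illustration, not the authors' exact choices: each LDR frame contributes pixel value divided by exposure time, weighted so that under- and over-exposed pixels count less.

```python
import numpy as np

def merge_hdr(frames, exposures):
    """frames: arrays with values in [0, 1]; exposures: seconds."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for z, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * z - 1.0)      # hat weight, peaks at mid-gray
        num += w * z / t                     # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-9)       # guard against all-zero weights

# A scene radiance of 2.0 rendered at two exposures (clipped to the sensor range):
radiance = 2.0
exposures = [0.1, 0.4]
frames = [np.clip(radiance * t, 0.0, 1.0) * np.ones((2, 2)) for t in exposures]
merged = merge_hdr(frames, exposures)
print(merged[0, 0])   # 2.0: the true radiance is recovered
```

The paper's contribution sits around this merge: aligning frames with projective transforms and stabilizing those transforms over time.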

  20. Synchronizing Photography For High-Speed-Engine Research

    NASA Technical Reports Server (NTRS)

    Chun, K. S.

    1989-01-01

    Light flashes when shaft reaches predetermined angle. Synchronization system facilitates visualization of flow in high-speed internal-combustion engines. Designed for cinematography and holographic interferometry, system synchronizes camera and light source with predetermined rotational angle of engine shaft. 10-bit resolution of absolute optical shaft encoder adapted, and 2 to tenth power combinations of 10-bit binary data computed to corresponding angle values. Pre-computed angle values programmed into EPROM's (erasable programmable read-only memories) to use as angle lookup table. Resolves shaft angle to within 0.35 degree at rotational speeds up to 73,240 revolutions per minute.
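
    The EPROM angle lookup amounts to precomputing an angle for each of the 2^10 encoder codes, so the trigger electronics only index a table at runtime. A sketch in Python (the table layout is an assumption; the original stored the values in EPROM hardware):

```python
RESOLUTION_BITS = 10
CODES = 2 ** RESOLUTION_BITS          # 1024 distinct 10-bit encoder codes

# Precomputed lookup table, one angle per code (the EPROM contents):
angle_table = [code * 360.0 / CODES for code in range(CODES)]

def shaft_angle(code):
    """Map a raw encoder code to shaft angle in degrees via table lookup."""
    return angle_table[code & (CODES - 1)]

print(round(angle_table[1] - angle_table[0], 4))   # 0.3516 deg per step, ~0.35 deg
print(shaft_angle(512))                            # 180.0
```

The 360/1024 = 0.3516 degree step matches the roughly 0.35 degree resolution quoted above.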

  1. OVMS-plus at the LBT: disturbance compensation simplified

    NASA Astrophysics Data System (ADS)

    Böhm, Michael; Pott, Jörg-Uwe; Borelli, José; Hinz, Phil; Defrère, Denis; Downey, Elwood; Hill, John; Summers, Kellee; Conrad, Al; Kürster, Martin; Herbst, Tom; Sawodny, Oliver

    2016-07-01

    In this paper we briefly revisit the optical vibration measurement system (OVMS) at the Large Binocular Telescope (LBT) and how its measurements are used for disturbance compensation, particularly for the LBT Interferometer (LBTI) and the LBT Interferometric Camera for Near-Infrared and Visible Adaptive Interferometry for Astronomy (LINC-NIRVANA). We present the now centralized software architecture, called OVMS+, on which our approach is based and illustrate several challenges faced during the implementation phase. Finally, we present measurement results from LBTI proving the effectiveness of the approach and the ability to compensate for a large fraction of the telescope-induced vibrations.

  2. Optical analysis of a compound quasi-microscope for planetary landers

    NASA Technical Reports Server (NTRS)

    Wall, S. D.; Burcher, E. E.; Huck, F. O.

    1974-01-01

    A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept was primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between camera and auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement is at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.

  3. SHOK—The First Russian Wide-Field Optical Camera in Space

    NASA Astrophysics Data System (ADS)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

    Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each of the cameras is placed in the gamma-ray burst detection area of the other devices located onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ˜ 9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from the gamma-ray burst error boxes detected by the BDRG device and triggered by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft has two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a high-speed 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  4. Coaxial fundus camera for ophthalmology

    NASA Astrophysics Data System (ADS)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which needs to provide low-light illumination of the human retina, high resolution at the retina, and reflection-free images. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  5. TOWARD PRECISION PHOTOMETRY FOR THE ELT ERA: THE DOUBLE SUBGIANT BRANCH OF NGC 1851 OBSERVED WITH THE GEMINI/GeMS MCAO SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turri, P.; McConnachie, A. W.; Stetson, P. B.

    2015-10-01

    The Extremely Large Telescopes currently under construction have a collecting area that is an order of magnitude larger than the present largest optical telescopes. For seeing-limited observations the performance will scale as the collecting area, but with the successful use of adaptive optics (AO), for many applications it will scale as D^4 (where D is the diameter of the primary mirror). Central to the success of the ELTs, therefore, is the successful use of multi-conjugate adaptive optics (MCAO), which applies a high degree of correction over a field of view larger than the few arcseconds that limits classical AO systems. In this Letter, we report on the analysis of crowded-field images taken of the central region of the galactic globular cluster NGC 1851 in the K_s band using the Gemini Multi-conjugate Adaptive Optics System (GeMS) at the Gemini South Telescope, the only science-grade MCAO system in operation. We use this cluster as a benchmark to verify the ability to achieve precise near-infrared photometry by presenting the deepest K_s photometry in crowded fields ever obtained from the ground. We construct a color-magnitude diagram in combination with the F606W band from the Hubble Space Telescope/Advanced Camera for Surveys. As well as detecting the "knee" in the lower main sequence at K_s ≃ 20.5, we also detect the double subgiant branch of NGC 1851, which demonstrates the high photometric accuracy of GeMS in crowded fields.
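
    The scaling quoted above can be made concrete with a quick comparison (the diameters below are illustrative, roughly a Gemini-class versus an ELT-class aperture): seeing-limited sensitivity grows as D^2 (collecting area), while AO-corrected point-source sensitivity grows as D^4.

```python
# Illustrative apertures (assumed, not from the Letter):
d_now, d_elt = 8.1, 39.0                 # meters

seeing_gain = (d_elt / d_now) ** 2       # seeing-limited: collecting area only
ao_gain = (d_elt / d_now) ** 4           # AO point sources: area x sharper PSF

print(round(seeing_gain, 1), round(ao_gain, 1))   # ~23x vs ~537x
```

The extra factor of D^2 comes from the diffraction-limited PSF shrinking with aperture, concentrating the same flux into fewer pixels.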

  6. Development of the Optical Communications Telescope Laboratory: A Laser Communications Relay Demonstration Ground Station

    NASA Technical Reports Server (NTRS)

    Wilson, K. E.; Antsos, D.; Roberts, L. C. Jr.,; Piazzolla, S.; Clare, L. P.; Croonquist, A. P.

    2012-01-01

    The Laser Communications Relay Demonstration (LCRD) project will demonstrate high bandwidth space to ground bi-directional optical communications links between a geosynchronous satellite and two LCRD optical ground stations located in the southwestern United States. The project plans to operate for two years with a possible extension to five. Objectives of the demonstration include the development of operational strategies to prototype optical link and relay services for the next generation tracking and data relay satellites. Key technologies to be demonstrated include adaptive optics to correct for clear air turbulence-induced wave front aberrations on the downlink, and advanced networking concepts for assured and automated data delivery. Expanded link availability will be demonstrated by supporting operations at small sun-Earth-probe angles. Planned optical modulation formats support future concepts of near-Earth satellite user services at up to 1.244 Gb/s differential phase shift keying modulation, and pulse position modulation formats for deep space links at data rates up to 311 Mb/s. Atmospheric monitoring instruments that will characterize the optical channel during the link include a sun photometer to measure atmospheric transmittance, a solar scintillometer, and a cloud camera to measure the line-of-sight cloud cover. This paper describes the planned development of the JPL optical ground station.

  7. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing over-exposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000TM digital video processor chip and Adaptive SensitivityTM patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details `in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed into the camera, or to an external host media via network. The patient data which is included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm allowing control of the diameter of the field which is displayed on the monitor, such that the complete field of view of the endoscope can be displayed on all the area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  8. Final Optical Design of PANIC, a Wide-Field Infrared Camera for CAHA

    NASA Astrophysics Data System (ADS)

    Cárdenas, M. C.; Gómez, J. Rodríguez; Lenzen, R.; Sánchez-Blanco, E.

    We present the Final Optical Design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Ritchey-Chrétien focus of the Calar Alto 2.2 m telescope. This will be the first instrument built under the German-Spanish consortium that manages the Calar Alto observatory. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. The optical design produces a well defined internal pupil, making it possible to reduce the thermal background with a cryogenic pupil stop. A mosaic of four 2k × 2k Hawaii-2RG detectors, made by Teledyne, will give a field of view of 31.9 arcmin × 31.9 arcmin.

  9. Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites

    NASA Astrophysics Data System (ADS)

    Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc

    2017-11-01

    As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built and tested the high resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE enjoys an international reputation; its capability and experience in high resolution instrumentation is recognised by most customers. Following the SPOT program, it was decided to go ahead with the PLEIADES HR program. PLEIADES HR is the optical high resolution component of a larger optical and radar multi-sensor system: ORFEO, which is developed in cooperation between France and Italy for dual civilian and defense use. ALCATEL SPACE has been entrusted by CNES with the development of the high resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008, the second will follow in 2009. To minimize the development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high resolution and large field of view optical instrument with emphasis on the technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower performance systems. 
This camera design delivers superior performance using an innovative low power, low mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows for a wide number of possibilities of accommodation with a mini-satellite class platform.

  10. A simple optical tweezers for trapping polystyrene particles

    NASA Astrophysics Data System (ADS)

    Shiddiq, Minarni; Nasir, Zulfa; Yogasari, Dwiyana

    2013-09-01

    Optical tweezers are an optical trap. For decades, they have served as an optical tool that can trap and manipulate particles ranging from the very small, such as DNA, to larger objects such as bacteria. The trapping force comes from the radiation pressure of laser light focused onto a group of particles. Optical tweezers have been used in many research areas, including atomic physics, medical physics, biophysics, and chemistry. Here, a simple optical tweezers setup has been constructed using a modified Leybold laboratory optical microscope. The ocular lens of the microscope was removed to provide access for the laser light and a digital camera. Light from a Coherent diode laser with wavelength λ = 830 nm and power 50 mW is sent through an oil-immersion objective lens with magnification 100× and NA 1.25 into a cell made from microscope slides containing polystyrene particles. Polystyrene particles with sizes of 3 μm and 10 μm are used. A Thorlabs CMOS camera, type DCC1545M with USB interface, and a Thorlabs 35 mm camera lens are connected to a desktop computer and used to monitor the trapping and measure the stiffness of the trap. The camera is accompanied by software that enables the user to capture and save images. The images are analyzed using ImageJ and Scion macros. The polystyrene particles were trapped successfully. The stiffness of the trap depends on the size of the particles and the power of the laser: it increases linearly with power and decreases as the particle size increases.
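
    The abstract does not state how the trap stiffness was extracted from the camera images. A common approach with a video record of bead positions, sketched below under that assumption, is the equipartition method: in a harmonic trap, 0.5·k·⟨x²⟩ = 0.5·k_B·T, so the stiffness follows from the variance of the tracked positions. The stiffness and temperature values are illustrative, not from the paper.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_equipartition(positions_m, temperature_k=295.0):
    """Estimate trap stiffness k (N/m) from tracked bead positions.

    Equipartition: 0.5*k*<x^2> = 0.5*k_B*T  =>  k = k_B*T / var(x).
    """
    return K_B * temperature_k / np.var(positions_m)

# Synthetic check: draw positions from the Boltzmann distribution of a
# harmonic trap with known (assumed) stiffness and recover it.
rng = np.random.default_rng(0)
k_true = 1e-6                         # N/m, illustrative value
T = 295.0                             # K
sigma = np.sqrt(K_B * T / k_true)     # expected rms excursion of the bead
x = rng.normal(0.0, sigma, 100_000)   # stand-in for camera-tracked positions
k_est = trap_stiffness_equipartition(x, T)
print(f"true k = {k_true:.2e} N/m, estimated k = {k_est:.2e} N/m")
```

    In practice the positions would come from centroiding the bead in each ImageJ frame; the variance estimate converges as the number of frames grows.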

  11. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.

    PubMed

    Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A

    2001-01-01

    This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.

  12. Preliminary calibration results of the wide angle camera of the imaging instrument OSIRIS for the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, V.; Naletto, G.; Nicolosi, P.; Zambolin, P.; De Cecco, M.; Debei, S.; Parzianello, G.; Ramous, P.; Zaccariotto, M.; Fornasier, S.; Verani, S.; Thomas, N.; Barthol, P.; Hviid, S. F.; Sebastian, I.; Meller, R.; Sierks, H.; Keller, H. U.; Barbieri, C.; Angrilli, F.; Lamy, P.; Rodrigo, R.; Rickman, H.; Wenzel, K. P.

    2017-11-01

    Rosetta is one of the cornerstone missions of the European Space Agency, with a rendezvous planned with comet 67P/Churyumov-Gerasimenko in 2014. The imaging instrument on board the satellite is OSIRIS (Optical, Spectroscopic and Infrared Remote Imaging System), a cooperation among several European institutes, which consists of two cameras: a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC). The WAC optical design is innovative: it adopts an all-reflecting, unvignetted and unobstructed two-mirror configuration which covers a 12° × 12° field of view with an F/5.6 aperture and gives a nominal contrast ratio of about 10^-4. The flight model of this camera has been successfully integrated and tested in our laboratories, and has been integrated on the satellite, which is now awaiting launch in February 2004. In this paper we describe the optical characteristics of the camera and summarize the results obtained so far from the preliminary calibration data. The analysis of the optical performance of this model shows good agreement between theoretical performance and experimental results.

  13. MEMS compatible illumination and imaging micro-optical systems

    NASA Astrophysics Data System (ADS)

    Bräuer, A.; Dannberg, P.; Duparré, J.; Höfer, B.; Schreiber, P.; Scholles, M.

    2007-01-01

    The development of new MOEMS demands cooperation between researchers in micromechanics, optoelectronics and micro-optics at a very early stage. Additionally, micro-optical technologies compatible with structured silicon have to be developed. The micro-optical technologies used for two silicon-based microsystems are described in this paper. First, a very small scanning laser projector with a volume of less than 2 cm^3 is shown, which operates with a directly modulated laser collimated by a microlens. The laser radiation illuminates a 2D MEMS scanning mirror. The optical design is optimized for high resolution (VGA). Thermomechanical stability is achieved by design and by use of a structured ceramic motherboard. Secondly, an ultrathin CMOS camera with an insect-inspired imaging system has been realized; it is the first experimental realization of an artificial compound eye. Micro-optical design principles and technology are used. The overall thickness of the imaging system is only 320 μm, the diagonal field of view is 21°, and the f-number is 2.6. The monolithic device consists of a UV-replicated microlens array upon a thin silica substrate with a pinhole array in a metal layer on the back side. The pitch of the pinholes differs from that of the lens array to provide an individual viewing angle for each channel. The imaging chip is directly glued to a CMOS sensor with adapted pitch. The whole camera is less than 1 mm thick. New packaging methods for these systems are under development.
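
    The pitch-difference principle in the abstract can be sketched numerically: because the pinhole pitch differs slightly from the microlens pitch, each channel's pinhole sits at a linearly growing lateral offset under its lens, tilting that channel's viewing direction. All numbers below are illustrative assumptions, not design values from the paper (only the 21° field of view and ~320 μm thickness are quoted there).

```python
import numpy as np

f = 320e-6          # lens-to-pinhole separation (m), assumed ~ device thickness
n = 21              # number of channels across the field, assumed

# Choose the per-channel pitch difference so the outermost channel views
# half the diagonal field of view (10.5 degrees).
half_fov = np.deg2rad(21.0 / 2)
dp = f * np.tan(half_fov) / (n // 2)   # lateral offset added per channel

channels = np.arange(-(n // 2), n // 2 + 1)
# Each channel i sees along arctan(offset_i / f).
viewing_angles = np.degrees(np.arctan(channels * dp / f))
print(viewing_angles.round(2))  # spans roughly -10.5 .. +10.5 degrees
```

    In the real device the offsets arise implicitly from the two slightly different array pitches; here the geometry is made explicit for clarity.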

  14. Spatial calibration of an optical see-through head mounted display

    PubMed Central

    Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew

    2010-01-01

    We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
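
    The core of the calibration described above is standard photogrammetry: recover a projection matrix from correspondences between tracked 3D points and their 2D positions in the display. A minimal, self-contained sketch of that step is the Direct Linear Transform (DLT), shown below with synthetic data; the camera intrinsics and pose used for the demo are invented, not the paper's HMD values.

```python
import numpy as np

def estimate_projection_matrix(X, x):
    """Direct Linear Transform: fit P (3x4) such that x ~ P @ [X; 1].

    X: (N, 3) world points, x: (N, 2) image points, N >= 6.
    """
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    # Solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]

# Synthetic check with an assumed pinhole camera (focal length 800 px,
# principal point at 320, 240, small translation).
rng = np.random.default_rng(1)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.05], [2.0]])])
P_true = K @ Rt
X = rng.uniform([-1, -1, 3], [1, 1, 6], size=(20, 3))
x = project(P_true, X)
P_est = estimate_projection_matrix(X, x)
err = np.abs(project(P_est, X) - x).max()
print(f"max reprojection error: {err:.2e} px")
```

    The intrinsic parameters quoted in the abstract (focal length, optic centre, etc.) follow from decomposing the recovered P, a step omitted here.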

  15. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.

  16. [Computer optical topography: a study of the repeatability of the results of human body model examination].

    PubMed

    Sarnadskiĭ, V N

    2007-01-01

    The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.

  17. Study of a stereo electro-optical tracker system for the measurement of model deformations at the national transonic facility

    NASA Technical Reports Server (NTRS)

    Hertel, R. J.

    1979-01-01

    An electro-optical method to measure the aeroelastic deformations of wind tunnel models is examined. The multitarget tracking performance of one of the two electronic cameras comprising the stereo pair is modeled and measured. The properties of the targets at the model, the camera optics, target illumination, number of targets, acquisition time, target velocities, and tracker performance are considered. The electronic camera system is shown to be capable of locating, measuring, and following the positions of 5 to 50 targets attached to the model at measuring rates up to 5000 targets per second.

  18. Into the blue: AO science with MagAO in the visible

    NASA Astrophysics Data System (ADS)

    Close, Laird M.; Males, Jared R.; Follette, Katherine B.; Hinz, Phil; Morzinski, Katie; Wu, Ya-Lin; Kopon, Derek; Riccardi, Armando; Esposito, Simone; Puglisi, Alfio; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Quiros-Pacheco, Fernando

    2014-08-01

    We review astronomical results in the visible (λ<1 μm) with adaptive optics. Other than a brief period in the early 1990s, there has been little astronomical science done in the visible with AO until recently. The most productive visible AO system to date is our 6.5 m Magellan telescope AO system (MagAO), an advanced adaptive secondary system at the Magellan 6.5 m in Chile. This secondary has 585 actuators with <1 ms response times (0.7 ms typically). We use a pyramid wavefront sensor. The relatively small actuator pitch (~23 cm/subaperture) allows moderate Strehls to be obtained in the visible (0.63-1.05 microns). We use a CCD AO science camera called "VisAO". On-sky long exposures (60 s) achieve <30 mas resolutions and 30% Strehls at 0.62 microns (r') with the VisAO camera in 0.5" seeing on bright R < 8 mag stars. These relatively high visible-wavelength Strehls are made possible by our powerful combination of a next-generation ASM and a pyramid WFS with 378 controlled modes and a 1000 Hz loop frequency. We review the key steps to obtaining good performance in the visible, along with the exciting new visible AO science opportunities and refereed publications in both broad-band (r, i, z, Y) and Hα imaging of exoplanets, protoplanetary disks, young stars, and emission-line jets. These examples highlight the power of visible AO to probe circumstellar regions and spatial resolutions that would otherwise require much larger diameter telescopes with classical infrared AO cameras.
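
    The quoted <30 mas resolution can be sanity-checked against the diffraction limit of the 6.5 m aperture at the r' wavelength given in the abstract, using the Rayleigh criterion θ = 1.22 λ/D:

```python
import numpy as np

wavelength = 0.62e-6   # m, r' band as quoted in the abstract
diameter = 6.5         # m, Magellan primary

theta_rad = 1.22 * wavelength / diameter
theta_mas = np.degrees(theta_rad) * 3600e3   # radians -> milliarcseconds
print(f"Rayleigh limit: {theta_mas:.1f} mas")  # ~24 mas
```

    The ~24 mas diffraction limit is consistent with the reported <30 mas on-sky resolution, i.e. the images are close to diffraction limited.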

  19. Label free measurement of retinal blood cell flux, velocity, hematocrit and capillary width in the living mouse eye

    PubMed Central

    Guevara-Torres, A.; Joseph, A.; Schallek, J. B.

    2016-01-01

    Measuring blood cell dynamics within the capillaries of the living eye provides crucial information regarding the health of the microvascular network. To date, the study of single blood cell movement in this network has been obscured by optical aberrations, hindered by weak optical contrast, and often required injection of exogenous fluorescent dyes to perform measurements. Here we present a new strategy to non-invasively image single blood cells in the living mouse eye without contrast agents. Eye aberrations were corrected with an adaptive optics camera coupled with a fast, 15 kHz scanned beam orthogonal to a capillary of interest. Blood cells were imaged as they flowed past a near infrared imaging beam to which the eye is relatively insensitive. Optical contrast of cells was optimized using differential scatter of blood cells in the split-detector imaging configuration. Combined, these strategies provide label-free, non-invasive imaging of blood cells in the retina as they travel in single file in capillaries, enabling determination of cell flux, morphology, class, velocity, and rheology at the single cell level. PMID:27867728

  20. Label free measurement of retinal blood cell flux, velocity, hematocrit and capillary width in the living mouse eye.

    PubMed

    Guevara-Torres, A; Joseph, A; Schallek, J B

    2016-10-01

    Measuring blood cell dynamics within the capillaries of the living eye provides crucial information regarding the health of the microvascular network. To date, the study of single blood cell movement in this network has been obscured by optical aberrations, hindered by weak optical contrast, and often required injection of exogenous fluorescent dyes to perform measurements. Here we present a new strategy to non-invasively image single blood cells in the living mouse eye without contrast agents. Eye aberrations were corrected with an adaptive optics camera coupled with a fast, 15 kHz scanned beam orthogonal to a capillary of interest. Blood cells were imaged as they flowed past a near infrared imaging beam to which the eye is relatively insensitive. Optical contrast of cells was optimized using differential scatter of blood cells in the split-detector imaging configuration. Combined, these strategies provide label-free, non-invasive imaging of blood cells in the retina as they travel in single file in capillaries, enabling determination of cell flux, morphology, class, velocity, and rheology at the single cell level.

  1. High-resolution imaging optomechatronics for precise liquid crystal display module bonding automated optical inspection

    NASA Astrophysics Data System (ADS)

    Ni, Guangming; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong

    2018-01-01

    With the development of the liquid crystal display (LCD) module industry, LCD modules are becoming larger and more precise, which imposes harsh imaging requirements on automated optical inspection (AOI). Here, we report a high-resolution, well-focused imaging optomechatronic system for precise LCD module bonding AOI. It achieves high-resolution imaging for LCD module bonding inspection using a line scan camera (LSC) triggered by a linear optical encoder, together with self-adaptive focusing over the whole large imaging region using the LSC and a laser displacement sensor, which relaxes the machining, assembly, and motion-control requirements of AOI devices. Results show that this system directly achieves well-focused imaging for AOI inspection of large LCD module bonding with 0.8 μm image resolution, a 2.65 mm scan imaging width, and no theoretical limit on imaging width. All of these are significant for AOI inspection in the LCD module industry and in other fields that require imaging large regions at high resolution.

  2. In Situ Optical Mapping of Voltage and Calcium in the Heart

    PubMed Central

    Ewart, Paul; Ashley, Euan A.; Loew, Leslie M.; Kohl, Peter; Bollensdorff, Christian; Woods, Christopher E.

    2012-01-01

    Electroanatomic mapping, the interrelation of intracardiac electrical activation with anatomic location, has become an important tool for clinical assessment of complex arrhythmias. Optical mapping of cardiac electrophysiology combines high spatiotemporal resolution of anatomy and physiological function with fast, simultaneous data acquisition. If applied in the clinical setting, this could improve both the diagnostic potential and the therapeutic efficacy of clinical arrhythmia interventions. The aim of this study was to explore this utility in vivo using a rat model. To this aim, we present a single-camera imaging and multiple light-emitting-diode illumination system that reduces the economic and technical hurdles to implementing cardiac optical mapping. Combined with a red-shifted calcium dye and a new near-infrared voltage-sensitive dye, both suitable for use in blood-perfused tissue, we demonstrate the feasibility of in vivo multi-parametric imaging of the mammalian heart. Our approach combines recording of electrophysiologically relevant parameters with observation of structural substrates and is adaptable, in principle, to trans-catheter percutaneous approaches. PMID:22876327

  3. Improving accuracy of Plenoptic PIV using two light field cameras

    NASA Astrophysics Data System (ADS)

    Thurow, Brian; Fahringer, Timothy

    2017-11-01

    Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates the differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring with one and two plenoptic cameras.

  4. Coincidence velocity map imaging using Tpx3Cam, a time stamping optical camera with 1.5 ns timing resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram

    Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.

  5. Coincidence velocity map imaging using Tpx3Cam, a time stamping optical camera with 1.5 ns timing resolution

    DOE PAGES

    Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram; ...

    2017-11-07

    Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.

  6. Atmospherical wavefront phases using the plenoptic sensor (real data)

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Montilla, I.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Trujillo-Sevilla, J.; Femenía, B.; López, M.; Fernández-Valdivia, J. J.; Puga, M.; Rosa, F.; Rodríguez-Ramos, J. M.

    2012-06-01

    Plenoptic cameras have been developed in recent years as a passive method for 3D scanning, allowing focal-stack capture from a single shot. But the data recorded by this kind of sensor can also be used to extract the wavefront phases associated with atmospheric turbulence in an astronomical observation. The terrestrial atmosphere degrades telescope images through the refractive index changes associated with turbulence. Sodium artificial laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, exploiting the two principal characteristics of plenoptic sensors at the same time: 3D scanning and wavefront sensing. Plenoptic sensors can therefore be studied and used as an alternative wavefront sensor for adaptive optics, particularly relevant now that Extremely Large Telescope projects are being undertaken. In this paper, we present the first observational wavefront phases extracted from real astronomical observations, using point-like and extended objects, and we show that the restored wavefronts match Kolmogorov atmospheric turbulence.
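
    The LGS defocus mentioned in the abstract follows from simple first-order optics: a source at the 90 km sodium layer images behind the infinity focal plane by roughly f²/u. A thin-lens sketch, with an assumed effective focal length (the paper does not tie this to a specific telescope):

```python
f = 170.0   # m, assumed effective telescope focal length (illustrative)
u = 90e3    # m, sodium-layer altitude from the abstract

# Thin-lens equation: 1/v = 1/f - 1/u, so the LGS image forms at v > f.
v = 1.0 / (1.0 / f - 1.0 / u)
shift = v - f                      # focus shift behind the infinity focus
print(f"LGS focus shift: {shift * 100:.1f} cm behind the infinity focus")
```

    For u >> f the shift is well approximated by f²/u, which is why the defocus term scales with the square of the focal length and must be removed (here, tomographically via the plenoptic data) before the wavefront is useful as a reference.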

  7. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454

  8. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, R.F.

    1983-10-18

    An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.

  9. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  10. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, Robert F.

    1987-01-01

    An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.

  11. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, R.F.

    1987-03-10

    An apparatus is disclosed for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously. 3 figs.

  12. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
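
    The final stitching step described above reduces to a coordinate-indexed paste: once every TDI-CCD strip has been re-projected onto the big virtual detector, each virtual strip carries its column offset on the virtual image plane. A toy sketch of only that step follows; strip sizes and offsets are invented, and the rigorous imaging model, on-orbit calibration, and RFM generation are not shown.

```python
import numpy as np

def stitch_strips(strips, col_offsets, height, width):
    """Paste re-projected strips into the virtual detector image by offset."""
    mosaic = np.zeros((height, width), dtype=strips[0].dtype)
    for strip, c0 in zip(strips, col_offsets):
        h, w = strip.shape
        mosaic[:h, c0:c0 + w] = strip   # later strips overwrite the overlap
    return mosaic

rng = np.random.default_rng(2)
# Three fake 64 x 40 virtual strips with 4-pixel overlaps between neighbours.
strips = [rng.integers(0, 255, size=(64, 40), dtype=np.uint8) for _ in range(3)]
offsets = [0, 36, 72]
mosaic = stitch_strips(strips, offsets, height=64, width=112)
print(mosaic.shape)
```

    In the real pipeline the re-projection guarantees the strips agree in the overlap, so a simple overwrite (or a feathered blend) produces a seamless mosaic.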

  13. Monitoring lightning from space with TARANIS

    NASA Astrophysics Data System (ADS)

    Farges, T.; Blanc, E.; Pinçon, J.

    2010-12-01

    Some recent space experiments, e.g. OTD and LIS, have shown the great value of lightning monitoring from space and the efficiency of optical measurement. Future instruments are now being defined for the next generation of geostationary meteorology satellites. Calibration of these instruments requires ground-truth events provided by lightning location networks, such as NLDN in the US and EUCLID or LINET in Europe, using electromagnetic observations at a regional scale. One of the most challenging objectives is continuous monitoring of the lightning activity over the tropical zone (Africa, America, and Indonesia). However, one difficulty is the lack of regional-scale lightning location networks in these areas to validate the data quality. TARANIS (Tool for the Analysis of Radiations from lightNings and Sprites) is a CNES microsatellite project. It is dedicated to the study of impulsive transfers of energy between the Earth's atmosphere and the space environment, from nadir observations of Transient Luminous Events (TLEs), Terrestrial Gamma-ray Flashes (TGFs) and other possible associated emissions. Its orbit will be sun-synchronous at 10:30 local time; its altitude will be 700 km. Its lifetime will be nominally 2 years. Its payload is composed of several electromagnetic instruments covering different wavelengths: X- and gamma-ray detectors, optical cameras and photometers, and electromagnetic wave sensors from DC to 30 MHz, completed by high-energy electron detectors. The optical instrument includes 2 cameras and 4 photometers. All sensors are equipped with filters for sprite and lightning differentiation. The camera filters are designed for sprite and lightning observations at 762 nm and 777 nm respectively. However, unlike the OTD or LIS instruments, the filter bandwidth and the exposure time (10 nm and 91 ms respectively) prevent optical lightning observations during daytime. The camera field of view is a square of 500 km at ground level with a spatial sampling frequency of about 1 km. One of the photometers will measure precisely the lightning radiance in a wide spectral range from 600 to 900 nm with a sampling frequency of 20 kHz. We suggest using the Event mode and mainly the Survey mode of the MCP instrument to monitor lightning activity and compare it to the geostationary satellite lightning mapper data. In the Event mode, data are recorded at their highest resolution. In the camera Survey mode, every image is archived using a specific compression algorithm. The photometer Survey mode consists of decimating the data by a factor of 10 and reducing the data dynamic range; however, it remains well adapted to providing a good continuous characterization of lightning activity. The use of other instruments, for example the 0+ whistler detector, will complete the lightning characterization.

  14. Simultaneous tracking and regulation visual servoing of wheeled mobile robots with uncalibrated extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo

    2018-01-01

    This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.

  15. Comparison of standing volume estimates using optical dendrometers

    Treesearch

    Neil A. Clark; Stanley J. Zarnoch; Alexander Clark; Gregory A. Reams

    2001-01-01

    This study compared height and diameter measurements and volume estimates on 20 hardwood and 20 softwood stems using traditional optical dendrometers, an experimental camera instrument, and mechanical calipers. Multiple comparison tests showed significant differences among the means for lower stem diameters when the camera was used. There were no significant...

  17. Performance Evaluations and Quality Validation System for Optical Gas Imaging Cameras That Visualize Fugitive Hydrocarbon Gas Emissions

    EPA Science Inventory

    Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...

  18. Optical design of portable nonmydriatic fundus camera

    NASA Astrophysics Data System (ADS)

    Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang

    2016-03-01

    The fundus camera is widely used in the screening and diagnosis of retinal disease; it is a simple and widely used piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which leaves patients with vertigo and blurred vision. Nonmydriatic operation is the trend in fundus cameras. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm is used for imaging, while 808 nm light is used for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The diopter range is between -20 m^-1 and +20 m^-1.
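    The stated diopter range can be related to the required CCD travel with the thin-lens approximation dz ≈ f²·D; the 20 mm focal length below is an assumed, illustrative value, not a parameter from the paper:

```python
def sensor_shift_mm(diopter, focal_length_mm=20.0):
    """Approximate sensor displacement needed to compensate a refractive
    error of `diopter` (in m^-1), using dz ≈ f^2 * D. The 20 mm focal
    length is an assumed, illustrative value."""
    f_m = focal_length_mm / 1000.0      # focal length in meters
    return f_m ** 2 * diopter * 1000.0  # displacement back in mm

# Covering the stated -20 to +20 m^-1 range with the assumed focal
# length requires roughly +/- 8 mm of CCD travel.
travel = sensor_shift_mm(20.0)
```

    The quadratic dependence on focal length is why short-focal-length portable designs can compensate large refractive errors with modest mechanical travel.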

  19. FPGA-accelerated adaptive optics wavefront control

    NASA Astrophysics Data System (ADS)

    Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.

    2014-03-01

    The speed of real-time adaptive optical systems is primarily restricted by the data processing hardware and computational aspects. Furthermore, the application of mirror layouts with increasing numbers of actuators reduces the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcoming this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. Employing a Xilinx Kintex-7 FPGA, a custom-developed PCIe card is outlined that accelerates the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time capable spot-detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA), as well as by a novel hardware implementation of the SHWFS algorithm. Further benefits come from the Streaming SIMD Extensions (SSE), which intensively use the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. With this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
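    The per-frame computation being accelerated, a reconstructor matrix-vector product feeding an integrator loop, can be sketched in NumPy; the interaction matrix, slope count, and loop gain below are illustrative assumptions (only the 64-actuator figure comes from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 64 actuators (the number handled by the system above)
# and 128 SHWFS slope measurements. A real interaction matrix comes from
# calibration; a random one stands in here for illustration.
n_act, n_slopes = 64, 128
M = rng.normal(size=(n_slopes, n_act))  # actuator -> slope response
R = np.linalg.pinv(M)                   # reconstructor (command matrix)

def ao_step(cmd, slopes, gain=0.3):
    """One integrator update of the closed loop. R @ slopes is the
    matrix-vector product the FPGA's DSP48 slices accelerate."""
    return cmd - gain * (R @ slopes)

aberration = rng.normal(size=n_act)     # static wavefront error (toy)
cmd = np.zeros(n_act)
for _ in range(100):
    slopes = M @ (aberration + cmd)     # residual seen by the SHWFS
    cmd = ao_step(cmd, slopes)
residual = float(np.linalg.norm(aberration + cmd))  # -> ~0 after closing the loop
```

    With a gain of 0.3 the residual decays by a factor 0.7 per frame, which is why loop latency, not arithmetic alone, sets the usable bandwidth.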

  20. Single-Camera-Based Method for Step Length Symmetry Measurement in Unconstrained Elderly Home Monitoring.

    PubMed

    Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan

    2017-11-01

    Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula for the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking direction, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care such as abnormal gait recognition and fall risk assessment.
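    The step-length-ratio idea can be illustrated with a simplified planar sketch; plain Euclidean distances between footprints stand in for the paper's projective-geometry formulation, and all names are assumptions:

```python
import math

def step_length_ratio(footprints):
    """Step-length symmetry ratio from an ordered list of (x, y)
    footprint positions along a straight walk. Consecutive
    footprint-to-footprint distances are step lengths; alternating
    steps belong to the left and right legs."""
    steps = [math.dist(a, b) for a, b in zip(footprints, footprints[1:])]
    left, right = steps[0::2], steps[1::2]
    return (sum(left) / len(left)) / (sum(right) / len(right))

def mape(estimates, truths):
    """Mean absolute percentage error, the accuracy metric quoted above."""
    return 100.0 * sum(abs(e - t) / abs(t)
                       for e, t in zip(estimates, truths)) / len(estimates)

# Four equal steps along a straight walk give a perfectly symmetric gait:
prints = [(0.0, 0.0), (0.6, 0.3), (1.2, 0.0), (1.8, 0.3), (2.4, 0.0)]
ratio = step_length_ratio(prints)  # symmetric, so the ratio is 1.0
```

    The paper's contribution is recovering these distances from a single uncalibrated view at arbitrary walking angles; the ratio itself is this simple once footprints are known.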

  1. The Sydney University PAPA camera

    NASA Astrophysics Data System (ADS)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.
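    The Gray-coded mask rests on a standard property of binary-reflected Gray codes, sketched below; this is the generic encoding, not the camera's actual mask layout:

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent values differ in exactly one
    bit, which is what lets a Gray-coded mask-plate tolerate small
    misalignments without large position errors."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the encoding by cascading XORs of the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# A 256-pixel axis needs an 8-bit mask; every coordinate round-trips,
# and neighbouring pixels differ in exactly one mask bit.
ok_roundtrip = all(gray_decode(gray_encode(i)) == i for i in range(256))
ok_adjacent = all(bin(gray_encode(i) ^ gray_encode(i + 1)).count("1") == 1
                  for i in range(255))
```

    In a PAPA detector each mask bit corresponds to one optical channel, so a one-bit boundary error displaces the decoded photon address by at most one pixel rather than half the field.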

  2. ARNICA, the Arcetri near-infrared camera: Astronomical performance assessment.

    NASA Astrophysics Data System (ADS)

    Hunt, L. K.; Lisi, F.; Testi, L.; Baffa, C.; Borelli, S.; Maiolino, R.; Moriondo, G.; Stanga, R. M.

    1996-01-01

    The Arcetri near-infrared camera ARNICA was built as a users' instrument for the Infrared Telescope at Gornergrat (TIRGO), and is based on a 256x256 NICMOS 3 detector. In this paper, we discuss ARNICA's optical and astronomical performance at the TIRGO and at the William Herschel Telescope on La Palma. Optical performance is evaluated in terms of plate scale, distortion, point spread function, and ghosting. Astronomical performance is characterized by camera efficiency, sensitivity, and spatial uniformity of the photometry.

  3. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves an NEDT of 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal minimum resolvable temperatures are in the range of MRT = 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.

  4. Tracking the course of the manufacturing process in selective laser melting

    NASA Astrophysics Data System (ADS)

    Thombansen, U.; Gatej, A.; Pereira, M.

    2014-02-01

    An innovative optical train for a selective laser melting (SLM) based manufacturing system has been designed with the objective of tracking the course of the SLM process. The thermal emission from the melt pool and the geometric properties of the interaction zone are addressed by applying a pyrometer and a camera system, respectively. The optical system is designed such that the processing laser radiation, the thermal emission, and the camera image are coupled coaxially and propagate on the same optical axis. As standard f-theta lenses for high-power applications inevitably lead to aberrations and divergent optical axes for increasing deflection angles in combination with multiple wavelengths, a pre-focus system is used to implement a focusing unit that shapes the beam prior to passing the scanner. The sensor system synchronously records the current position of the laser beam, the current emission from the melt pool, and an image of the interaction zone. Acquired thermal-emission data are visualized after processing, which allows an instant evaluation of the course of the process at any position of each layer; as such, it provides a fully detailed history of the product. This basic work realizes a first step towards self-optimization of the manufacturing process by providing information about quality-relevant events during manufacture. The deviation of the actual course of the manufacturing process from the planned course can be used to adapt the manufacturing strategy from one layer to the next. In the current state, the system can be used to facilitate the setup of the manufacturing system, as it allows identification of false machine settings without having to analyze the work piece.

  5. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  6. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    NASA Technical Reports Server (NTRS)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  7. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem composed of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
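    Passive ranging from a plenoptic image ultimately reduces to triangulation between sub-aperture views extracted from the microlens array; a minimal sketch under assumed, illustrative parameters (not Nanohmics' design values):

```python
def plenoptic_range(disparity_px, pixel_pitch_mm, baseline_mm, focal_mm):
    """Triangulation-style range estimate from the disparity of a
    feature between two sub-aperture views of a plenoptic camera.
    All parameter values used below are illustrative assumptions."""
    d = disparity_px * pixel_pitch_mm   # disparity in mm on the sensor
    return focal_mm * baseline_mm / d   # range in mm (Z = f * B / d)

# 2 px disparity, 0.01 mm pixels, 10 mm effective baseline across the
# main aperture, 50 mm main lens -> a range of about 25 m.
z_mm = plenoptic_range(2, 0.01, 10.0, 50.0)
```

    The effective baseline is bounded by the main aperture diameter, which is why plenoptic ranging from a single compact aperture degrades quickly with distance.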

  8. Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography

    USGS Publications Warehouse

    Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.

    1972-01-01

    Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.

  9. A reference Pelton turbine - High speed visualization in the rotating frame

    NASA Astrophysics Data System (ADS)

    Solemslie, Bjørn W.; Dahlhaug, Ole G.

    2016-11-01

    To enable a detailed study of the flow mechanisms affecting the flow within the reference Pelton runner designed at the Waterpower Laboratory (NTNU), a flow visualization system has been developed. The system enables high-speed filming of the hydraulic surface of a single bucket in the rotating frame of reference. It is built with an angular borescope adapter entering the turbine along the rotational axis and a borescope embedded within a bucket. A stationary high-speed camera located outside the turbine housing is connected to the optical arrangement by a non-contact coupling. The viewpoint of the system includes the whole hydraulic surface of one half of a bucket. The system has been designed to minimize the amount of vibration and to ensure that the vibrations felt by the borescope are the same as those affecting the camera. The preliminary results captured with the system are promising and enable a detailed study of the flow within the turbine.

  10. Photorefraction Screens Millions for Vision Disorders

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Who would have thought that stargazing in the 1980s would lead to hundreds of thousands of schoolchildren seeing more clearly today? Collaborating with research ophthalmologists and optometrists, Marshall Space Flight Center scientists Joe Kerr and the late John Richardson adapted optics technology for eye screening methods using a process called photorefraction. Photorefraction consists of delivering a light beam into the eyes where it bends in the ocular media, hits the retina, and then reflects as an image back to a camera. A series of refinements and formal clinical studies followed their highly successful initial tests in the 1980s. Evaluating over 5,000 subjects in field tests, Kerr and Richardson used a camera system prototype with a specifically angled telephoto lens and flash to photograph a subject's eye. They then analyzed the image, the cornea and pupil in particular, for irregular reflective patterns. Early tests of the system with 1,657 Alabama children revealed that, while only 111 failed the traditional chart test, Kerr and Richardson's screening system found 507 abnormalities.

  11. Reconditioning of Cassini Narrow-Angle Camera

    NASA Image and Video Library

    2002-07-23

    These five images of single stars, taken at different times with the narrow-angle camera on NASA Cassini spacecraft, show the effects of haze collecting on the camera optics, then successful removal of the haze by warming treatments.

  12. Evaluation of modified portable digital camera for screening of diabetic retinopathy.

    PubMed

    Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi

    2009-01-01

    To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to 2.4x, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60°) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.

  13. Time-resolved X-ray excited optical luminescence using an optical streak camera

    NASA Astrophysics Data System (ADS)

    Ward, M. J.; Regier, T. Z.; Vogt, J. M.; Gordon, R. A.; Han, W.-Q.; Sham, T. K.

    2013-03-01

    We report the development of a time-resolved XEOL (TR-XEOL) system that employs an optical streak camera. We have conducted TR-XEOL experiments at the Canadian Light Source (CLS) operating in single bunch mode with a 570 ns dark gap and 35 ps electron bunch pulse, and at the Advanced Photon Source (APS) operating in top-up mode with a 153 ns dark gap and 33.5 ps electron bunch pulse. To illustrate the power of this technique we measured the TR-XEOL of solid-solution nanopowders of gallium nitride - zinc oxide, and for the first time have been able to resolve near-band-gap (NBG) optical luminescence emission from these materials. Herein we will discuss the development of the streak camera TR-XEOL technique and its application to the study of these novel materials.

  14. Exact optics - III. Schwarzschild's spectrograph camera revised

    NASA Astrophysics Data System (ADS)

    Willstrop, R. V.

    2004-03-01

    Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Dam, M A; Mignant, D L; Macintosh, B A

    In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 µm using a bright guide star and 0.19 for a magnitude 12 star.
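    The quoted Strehl ratios can be related to residual RMS wavefront error through the Maréchal approximation, S ≈ exp(-(2πσ/λ)²); the inference below is illustrative, not a figure from the report:

```python
import math

def strehl(sigma_nm, wavelength_nm):
    """Marechal approximation: S ≈ exp(-(2*pi*sigma/lambda)^2)."""
    phase_rms = 2.0 * math.pi * sigma_nm / wavelength_nm
    return math.exp(-phase_rms ** 2)

def rms_from_strehl(S, wavelength_nm):
    """Invert the approximation to recover RMS wavefront error in nm."""
    return wavelength_nm * math.sqrt(-math.log(S)) / (2.0 * math.pi)

# The quoted Strehl of 0.37 at 1.58 um implies roughly 250 nm of
# residual RMS wavefront error under this approximation.
sigma = rms_from_strehl(0.37, 1580.0)
```

    The approximation is only trustworthy at moderate-to-high Strehl; at S = 0.19 it gives a rough scale rather than a precise error budget term.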

  16. New low noise CCD cameras for Pi-of-the-Sky project

    NASA Astrophysics Data System (ADS)

    Kasprowicz, G.; Czyrkowski, H.; Dabrowski, R.; Dominik, W.; Mankiewicz, L.; Pozniak, K.; Romaniuk, R.; Sitek, P.; Sokolowski, M.; Sulej, R.; Uzycki, J.; Wrochna, G.

    2006-10-01

    Modern research trends require observation of ever fainter astronomical objects over large areas of the sky. This implies the use of systems with high temporal and optical resolution together with computer-based data acquisition and processing, which is why charge-coupled devices (CCDs) have become so popular: they offer quick picture conversion with much better quality than film-based technologies. This work is a theoretical and practical study of a CCD-based picture acquisition system. The system was optimized for the "Pi of the Sky" project but can be adapted to other professional astronomical research. The work covers picture conversion, signal acquisition, data transfer, and the mechanical construction of the device.

  17. Dual beam optical interferometer

    NASA Technical Reports Server (NTRS)

    Gutierrez, Roman C. (Inventor)

    2003-01-01

    A dual beam interferometer device is disclosed that enables moving an optics module in a direction, which changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera. The camera detects a characteristic of the surface.

  18. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
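    Steps (ii)-(iii) can be sketched with a moving-average path smoother standing in for the paper's l1-optimized camera path; window length and motion model are illustrative assumptions:

```python
import numpy as np

def smooth_camera_path(frame_motions, window=9):
    """Sketch of steps (ii)-(iii): integrate per-frame motion into a
    camera path, smooth it, and return per-frame corrections. A moving
    average stands in for the paper's l1-optimized path."""
    path = np.cumsum(frame_motions, axis=0)      # raw camera path
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(path, ((pad, pad), (0, 0)), mode="edge")
    smooth = np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                       for i in range(path.shape[1])], axis=1)
    return smooth - path                         # correction per frame

# Jittery (dx, dy) motion per frame: a steady pan plus hand shake.
# The corrections cancel the high-frequency shake but keep the pan.
rng = np.random.default_rng(1)
motions = np.array([[1.0, 0.0]] * 60) + rng.normal(0.0, 0.5, (60, 2))
corr = smooth_camera_path(motions)
```

    An l1 path objective, unlike this moving average, yields piecewise-constant/linear paths that mimic deliberate cinematographic moves; the moving average merely illustrates the smooth-then-correct structure.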

  19. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  20. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.
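    One standard route to the MTF figures discussed above is the Fourier transform of a measured line spread function; a minimal sketch with assumed, illustrative values (Gaussian LSF, 17 µm pixel pitch):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """MTF as the modulus of the Fourier transform of a measured line
    spread function. Returns (frequencies in cycles/mm, normalized MTF)."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                       # normalize total energy
    mtf = np.abs(np.fft.rfft(lsf))              # modulus of the transform
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
    return freqs, mtf / mtf[0]                  # unity at zero frequency

# An assumed Gaussian LSF sampled on a 17 um microbolometer pitch:
x = np.arange(-32, 32) * 0.017                  # position in mm
lsf = np.exp(-x ** 2 / (2 * 0.03 ** 2))
freqs, mtf = mtf_from_lsf(lsf, 0.017)
```

    In production test systems the LSF is usually derived from a slanted-edge target so the sampling is finer than the pixel pitch; the FFT step is the same.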

  1. Multimodal optical setup based on spectrometer and cameras combination for biological tissue characterization with spatially modulated illumination

    NASA Astrophysics Data System (ADS)

    Baruch, Daniel; Abookasis, David

    2017-04-01

    The application of optical techniques as tools for biomedical research has generated substantial interest for the ability of such methodologies to simultaneously measure biochemical and morphological parameters of tissue. Ongoing optimization of optical techniques may introduce such tools as alternatives or complements to conventional methodologies. The common approach shared by current optical techniques lies in the independent acquisition of tissue's optical properties (i.e., absorption and reduced scattering coefficients) from reflected or transmitted light. Such optical parameters, in turn, provide detailed information regarding both the concentrations of clinically relevant chromophores and macroscopic structural variations in tissue. We couple a noncontact optical setup with a simple analysis algorithm to obtain the absorption and scattering coefficients of biological samples under test. Technically, a portable picoprojector projects serial sinusoidal patterns at low and high spatial frequencies, while the reflected diffuse light is simultaneously acquired through a single spectrometer and two separate CCD cameras, each fitted with a different bandpass filter at a nonisosbestic or an isosbestic wavelength. This configuration fills the gaps in each instrument's capabilities for acquiring optical properties of tissue at high spectral and spatial resolution. Experiments were performed on both tissue-mimicking phantoms and the hands of healthy human volunteers to quantify their optical properties as proof of concept for the present technique. In a separate experiment, we derived the optical properties of the hand skin from the measured diffuse reflectance, based on a recently developed camera model. Additionally, oxygen saturation levels of tissue measured by the system were found to agree well with reference values.
Taken together, the present results demonstrate the potential of this integrated setup for diagnostic and research applications.

  2. Hyperspectral imaging-based credit card verifier structure with adaptive learning.

    PubMed

    Sumriddetchkajorn, Sarun; Intaravanne, Yuttana

    2008-12-10

    We propose and experimentally demonstrate a hyperspectral imaging-based optical structure for verifying a credit card. Our key idea comes from the fact that the fine detail of the embossed hologram stamped on the credit card is hard to duplicate, and therefore its key color features can be used for distinguishing between the real and counterfeit ones. As the embossed hologram is a diffractive optical element, we shine a number of broadband light sources one at a time, each at a different incident angle, on the embossed hologram of the credit card in such a way that different color spectra per incident angle beam are diffracted and separated in space. In this way, the center of mass of the histogram on each color plane is investigated by using a feed-forward backpropagation neural-network configuration. Our experimental demonstration using two off-the-shelf broadband white light emitting diodes, one digital camera, and a three-layer neural network can effectively identify 38 genuine and 109 counterfeit credit cards with false rejection rates of 5.26% and 0.92%, respectively. Key features include low cost, simplicity, no moving parts, no need of an additional decoding key, and adaptive learning.
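    The per-plane feature described above, the center of mass of each color plane's intensity histogram, can be sketched as follows; the function name and toy image are assumptions:

```python
import numpy as np

def color_plane_features(image_rgb):
    """Center of mass of the intensity histogram of each color plane,
    the per-plane feature the verifier feeds to its neural network."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(image_rgb[..., c], bins=256, range=(0, 256))
        levels = np.arange(256)
        # Center of mass = intensity-weighted mean of the histogram.
        feats.append(float((hist * levels).sum()) / max(int(hist.sum()), 1))
    return feats

# A uniformly mid-gray patch has all three centers of mass at 128:
patch = np.full((8, 8, 3), 128, dtype=np.uint8)
features = color_plane_features(patch)
```

    One scalar per color plane per illumination angle keeps the neural network's input small, which matters for a low-cost verifier with no moving parts.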

  3. Realtime system for GLAS on WHT

    NASA Astrophysics Data System (ADS)

    Skvarč, Jure; Tulloch, Simon; Myers, Richard M.

    2006-06-01

    The new ground-layer adaptive optics system (GLAS) on the William Herschel Telescope (WHT) on La Palma will be based on the existing natural guide star adaptive optics system, NAOMI. Part of the new development is a new control system for the tip-tilt mirror. Instead of the existing system, built around a custom multiprocessor computer made of C40 DSPs, this system uses an ordinary PC running the Linux operating system. It is equipped with a high-sensitivity L3 CCD camera with an effective readout noise of nearly zero. The software for the tip-tilt system is being completely redeveloped to make use of object-oriented design, which should facilitate easier integration with the rest of the observing system at the WHT. The modular design allows the incorporation of different centroiding and loop-control methods. To test the system off-sky, we have built a laboratory bench using an artificial light source and a tip-tilt mirror. We present results on tip-tilt correction quality using different centroiding algorithms and different control-loop methods at different light levels. This system will serve as a testing ground for a transition to a completely PC-based real-time control system.
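
    The simplest of the centroiding algorithms mentioned above is a thresholded center of gravity on the guide-star spot. The sketch below is a minimal software model under assumed conventions (NumPy, column/row pixel coordinates), not the GLAS implementation:

```python
import numpy as np

def cog_centroid(frame, threshold=0.0):
    """Thresholded center-of-gravity centroid; pixels below the
    threshold are suppressed to limit background-noise bias."""
    f = np.where(frame > threshold, frame - threshold, 0.0)
    total = f.sum()
    if total == 0:
        return None  # no light: leave the correction loop open
    ys, xs = np.indices(f.shape)
    return (float((xs * f).sum() / total),  # x (column) centroid
            float((ys * f).sum() / total))  # y (row) centroid

# Synthetic guide-star spot at (x=12, y=20) on a 32x32 frame
ys, xs = np.indices((32, 32))
spot = np.exp(-((xs - 12.0) ** 2 + (ys - 20.0) ** 2) / (2 * 2.0 ** 2))
cx, cy = cog_centroid(spot, threshold=0.01)
```

    The measured offset of the centroid from a reference position would drive the tip-tilt mirror through whatever loop controller is in use.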

  4. Demonstration of a vectorial optical field generator with adaptive closed-loop control.

    PubMed

    Chen, Jian; Kong, Lingjiang; Zhan, Qiwen

    2017-12-01

    We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with adaptive closed-loop control. The closed-loop capability is illustrated through calibration of the polarization modulation of the system. To calibrate the polarization ratio modulation, we generate a 45° linearly polarized beam and pass it through a linear analyzer whose transmission axis is orthogonal to the incident polarization. For the retardation calibration, a circularly polarized beam is employed, and a circular polarization analyzer of the opposite chirality is placed in front of the CCD detector. In both cases, the closed loop automatically steps the corresponding calibration parameters through preset ranges, generates the phase patterns applied to the spatial light modulators, and records the intensity distribution of the output beam with the CCD camera. The optimized calibration parameters are those that minimize the total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and full Stokes parameter measurements are carried out to quantitatively analyze the polarization distribution of the generated beams. Comparisons among these results clearly show that the calibration parameters remarkably improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beams with large ellipticity, indicating the significance of the presented closed loop in enhancing the performance of the VOF-Gen.
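
    The calibration loop described above reduces to a one-dimensional minimization: sweep a calibration parameter, measure total intensity behind the crossed analyzer, keep the minimizer. The toy model below stands in for the optical path and is purely illustrative (the sin² leakage law and `true_offset` are assumptions, not the paper's model):

```python
import numpy as np

def calibrate(measure_intensity, candidates):
    """Closed-loop sweep: evaluate each candidate calibration value,
    record total detected intensity, return the minimizer."""
    intensities = [measure_intensity(c) for c in candidates]
    return candidates[int(np.argmin(intensities))]

# Toy stand-in for the bench: leakage through a crossed analyzer grows
# as sin^2 of the residual phase error; true_offset is hypothetical.
true_offset = 0.30  # radians
leakage = lambda c: float(np.sin(true_offset - c) ** 2)

best = calibrate(leakage, np.linspace(0.0, 1.0, 101))
```

    In the real system, `measure_intensity` would program the spatial light modulators and integrate the CCD frame; the structure of the search is the same.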

  5. Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics

    NASA Astrophysics Data System (ADS)

    Furxhi, Orges; Frascati, Joe; Driggers, Ronald

    2018-04-01

    Panoramic imaging is inherently wide-field. High-sensitivity uncooled long-wave infrared (LWIR) imaging requires low F-number optics. These two requirements result in designs with short back working distances that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated commercially available cameras and lenses.

  6. Motionless active depth from defocus system using smart optics for camera autofocus applications

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active depth-from-defocus (DFD) system design suited for long-working-range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating, coherent, conditioned optical radiation pattern that maintains its sharpness over multiple axial distances, allowing an increased DFD working-distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable-focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module against a conventional incoherent active light source and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited to camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern-day digital cameras.
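
    A core ingredient of any defocus-based scheme is a per-frame sharpness metric evaluated as the ECVFL steps through focus settings. The sketch below uses a Laplacian-energy focus measure and a simple argmax over a focus sweep; this is a generic depth-from-focus-style illustration under assumed conventions, not the paper's DFD algorithm:

```python
import numpy as np

def focus_measure(img):
    """Sum of squared responses of a 4-neighbour Laplacian:
    larger values mean a sharper (less defocused) image."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float((lap ** 2).sum())

def best_focus_step(measures):
    """Index of the ECVFL step with maximum sharpness; a calibrated
    lens would map each step to an axial object distance."""
    return int(np.argmax(measures))

# Hypothetical two-step sweep: a checkerboard target, sharp vs. box-blurred
target = (np.indices((32, 32)).sum(axis=0) % 2) * 1.0
blurred = sum(np.roll(np.roll(target, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
```

    In a real autofocus loop the sweep would be electronic only (ECVFL drive current), preserving the no-moving-parts property described above.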

  7. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function.

    PubMed

    Li, Jin; Liu, Zilong

    2017-07-24

    Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e., image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself (its optical system, image sensor, and electronics) limits on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, depending only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial-frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, removing the imaging degradation imposed by the camera itself. The method is experimentally confirmed. Experiments with an on-orbit panchromatic camera indicate that the proposed method increases the average gradient 6.5-fold, the edge intensity 3.3-fold, and the MTF value 1.56-fold compared to the case when the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
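
    The constrained least-squares compensation step can be sketched in the frequency domain: invert the measured MTF where it is strong, while a Laplacian smoothness term tames noise amplification where it is weak. The Gaussian MTF, grid size, and regularization weight below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def cls_restore(image, mtf, gamma=1e-3):
    """Constrained least-squares deconvolution of `image` given the
    camera MTF (real-valued, in unshifted FFT layout)."""
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    P = np.fft.fft2(lap, s=image.shape)   # smoothness-constraint spectrum
    G = np.fft.fft2(image)
    F = np.conj(mtf) * G / (np.abs(mtf) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))

# Toy demo: blur a random scene with a Gaussian MTF, then compensate
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
fy = np.fft.fftfreq(64)[:, None]
fx = np.fft.fftfreq(64)[None, :]
mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.15 ** 2))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * mtf))
restored = cls_restore(blurred, mtf)
```

    The restored image should sit closer to the original scene than the blurred one; on orbit, `mtf` would be the IMTF extracted from the PFP targets.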

  8. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †

    PubMed Central

    Lee, Sukhan

    2018-01-01

    The quality of the captured point cloud and the scanning speed of a structured-light 3D camera system depend on its ability to handle object surfaces with large reflectance variation, traded off against the number of patterns that must be projected. In this paper, we propose and implement a flexible embedded framework that can trigger the camera one or multiple times to capture one or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even at mismatched frame rates, so the system can project different types of patterns for applications with different scan speeds. As a result, the system captures a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is generated adaptively: the position and number of triggers are determined automatically according to the camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it requires no external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
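
    The trigger-placement arithmetic behind such synchronization can be modeled in software, even though the paper implements it in FPGA logic. The sketch below is a hypothetical reconstruction of the idea only: place N projector-period-spaced triggers centred inside one camera exposure window.

```python
def trigger_times(exposure_ms, projector_period_ms, n_patterns):
    """Place n_patterns triggers, one projector period apart, centred
    inside a single camera exposure, so all patterns are captured
    under the same exposure setting. Raises if they do not fit."""
    span = projector_period_ms * (n_patterns - 1)
    if span >= exposure_ms:
        raise ValueError("patterns do not fit inside the exposure window")
    start = (exposure_ms - span) / 2.0
    return [start + i * projector_period_ms for i in range(n_patterns)]
```

    For example, three patterns at a 20 ms projector period inside a 100 ms exposure would be triggered at 30, 50, and 70 ms; in hardware this computation would be redone whenever the exposure setting changes.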

  9. Wafer defect detection by a polarization-insensitive external differential interference contrast module.

    PubMed

    Nativ, Amit; Feldman, Haim; Shaked, Natan T

    2018-05-01

    We present a system that is based on a new external, polarization-insensitive differential interference contrast (DIC) module specifically adapted for detecting defects in semiconductor wafers. We obtained defect signal enhancement relative to the surrounding wafer pattern when compared with bright-field imaging. The new DIC module proposed is based on a shearing interferometer that connects externally at the output port of an optical microscope and enables imaging thin samples, such as wafer defects. This module does not require polarization optics (such as Wollaston or Nomarski prisms) and is insensitive to polarization, unlike traditional DIC techniques. In addition, it provides full control of the DIC shear and orientation, which allows obtaining a differential phase image directly on the camera (with no further digital processing) while enhancing defect detection capabilities, even if the size of the defect is smaller than the resolution limit. Our technique has the potential of future integration into semiconductor production lines.

  10. The W. M. Keck Observatory Infrared Vortex Coronagraph and a First Image of HIP 79124 B

    NASA Astrophysics Data System (ADS)

    Serabyn, E.; Huby, E.; Matthews, K.; Mawet, D.; Absil, O.; Femenia, B.; Wizinowich, P.; Karlsson, M.; Bottom, M.; Campbell, R.; Carlomagno, B.; Defrère, D.; Delacroix, C.; Forsberg, P.; Gomez Gonzalez, C.; Habraken, S.; Jolivet, A.; Liewer, K.; Lilley, S.; Piron, P.; Reggiani, M.; Surdej, J.; Tran, H.; Vargas Catalán, E.; Wertz, O.

    2017-01-01

    An optical vortex coronagraph has been implemented within the NIRC2 camera on the Keck II telescope and used to carry out on-sky tests and observations. The development of this new L‧-band observational mode is described, and an initial demonstration of the new capability is presented: a resolved image of the low-mass companion to HIP 79124, which had previously been detected by means of interferometry. With HIP 79124 B at a projected separation of 186.5 mas, both the small inner working angle of the vortex coronagraph and the related imaging improvements were crucial in imaging this close companion directly. Due to higher Strehl ratios and more relaxed contrasts in L‧ band versus H band, this new coronagraphic capability will enable high-contrast, small-angle observations of nearby young exoplanets and disks on a par with those of shorter-wavelength extreme adaptive optics coronagraphs.

  12. Diffraction phase microscopy realized with an automatic digital pinhole

    NASA Astrophysics Data System (ADS)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

    We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made out of a liquid crystal display (LCD) device that allows electrical control. We have made DPM more accessible to users, while maintaining high phase-measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious pinhole alignment process with an automatic optical alignment procedure. Owing to its flexibility in size and shape, the LCD device serves as a universal filter, requiring no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results of height maps of a bead sample and the dynamics of live red blood cells (RBCs) are also presented, making this system ready for broad adoption in biological imaging and material metrology.

  13. Adaptive multiphoton endomicroscopy through a dynamically deformed multicore optical fiber using proximal detection.

    PubMed

    Warren, Sean C; Kim, Youngchan; Stone, James M; Mitchell, Claire; Knight, Jonathan C; Neil, Mark A A; Paterson, Carl; French, Paul M W; Dunsby, Chris

    2016-09-19

    This paper demonstrates multiphoton excited fluorescence imaging through a polarisation maintaining multicore fiber (PM-MCF) while the fiber is dynamically deformed using all-proximal detection. Single-shot proximal measurement of the relative optical path lengths of all the cores of the PM-MCF in double pass is achieved using a Mach-Zehnder interferometer read out by a scientific CMOS camera operating at 416 Hz. A non-linear least squares fitting procedure is then employed to determine the deformation-induced lateral shift of the excitation spot at the distal tip of the PM-MCF. An experimental validation of this approach is presented that compares the proximally measured deformation-induced lateral shift in focal spot position to an independent distally measured ground truth. The proximal measurement of deformation-induced shift in focal spot position is applied to correct for deformation-induced shifts in focal spot position during raster-scanning multiphoton excited fluorescence imaging.

  14. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding-window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on a 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
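
    The sliding-window DFT idea can be sketched as follows: slide a fixed-length DFT probe along the OPD scan and take the position where power in the fringe-frequency bin peaks as the fringe-packet (near-zero-OPD) estimate. The synthetic interferogram parameters below are illustrative assumptions, not IOTA values:

```python
import numpy as np

def fringe_center(scan, k, window):
    """Slide a length-`window` DFT along the scan; return the sample
    index where power in fringe-frequency bin k peaks, an estimate
    of the fringe-packet centre."""
    probe = np.exp(-2j * np.pi * k * np.arange(window) / window)
    power = [abs(np.dot(scan[i:i + window], probe)) ** 2
             for i in range(len(scan) - window)]
    return int(np.argmax(power)) + window // 2

# Synthetic scan: Gaussian fringe packet centred at sample 300,
# carrier at 0.125 cycles/sample (bin k=8 of a 64-sample window)
x = np.arange(512)
scan = np.exp(-((x - 300.0) / 30.0) ** 2) * np.cos(2 * np.pi * 0.125 * x)
```

    A tracker would feed the estimated packet position back to the piezo scanners each scan; a production version would reuse partial sums between adjacent windows rather than recompute each dot product.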

  15. Smart lens: tunable liquid lens for laser tracking

    NASA Astrophysics Data System (ADS)

    Lin, Fan-Yi; Chu, Li-Yu; Juan, Yu-Shan; Pan, Sih-Ting; Fan, Shih-Kang

    2007-05-01

    A tracking system utilizing a tunable liquid lens is proposed and demonstrated. Adopting the concept of EWOD (electrowetting-on-dielectric), the curvature of a droplet on a dielectric film can be controlled by varying the applied voltage. When the droplet is used as an optical lens, the focal length of this adaptive liquid lens can be adjusted as desired, and the light passing through it can therefore be focused to different positions in space. In this paper, the tuning ranges of the curvature and focal length of the tunable liquid lens are investigated. Droplet transformation is observed and analyzed with a CCD camera. A tracking system combining the tunable liquid lens with a laser detection system is also proposed. With a feedback circuit that maximizes the returned signal by controlling the tunable lens, the laser beam can remain locked on a distant reflective target while it moves.
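
    The voltage dependence of the droplet's contact angle in EWOD is commonly modeled by the Young-Lippmann equation (a standard relation in the electrowetting literature, not stated in the abstract):

```latex
\cos\theta(V) = \cos\theta_0 + \frac{\varepsilon_0\,\varepsilon_r}{2\,\gamma\,d}\,V^2
```

    where \(\theta_0\) is the zero-voltage contact angle, \(\varepsilon_r\) and \(d\) are the relative permittivity and thickness of the dielectric film, \(\gamma\) is the liquid-ambient surface tension, and \(V\) is the applied voltage. The droplet curvature, and hence the lens focal length, follows from the contact angle and the fixed droplet volume.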

  16. Combined hostile fire and optics detection

    NASA Astrophysics Data System (ADS)

    Brännlund, Carl; Tidström, Jonas; Henriksson, Markus; Sjöqvist, Lars

    2013-10-01

    Snipers and other optically guided weapon systems are serious threats in military operations. We have studied a SWIR (short-wave infrared) camera-based system with the capability to detect and locate snipers both before and after a shot over a large field of view. The high-frame-rate SWIR camera allows resolution of the temporal profile of muzzle flashes, the infrared signature associated with the ejection of the bullet from the rifle. The capability to detect and discriminate sniper muzzle flashes with this system has been verified by FOI in earlier studies. In this work we have extended the system by adding a laser channel for optics detection. A laser diode with a slit-shaped beam profile is scanned over the camera field of view to detect retro-reflections from optical sights. The optics detection system has been tested at various distances up to 1.15 km, showing the feasibility of detecting rifle scopes in full daylight. The high-speed camera makes it possible to discriminate false alarms by analyzing the temporal data. The intensity variation caused by atmospheric turbulence enables discrimination of small sights from larger reflectors due to aperture averaging, even though the targets cover only a single pixel. It is shown that optics detection can be integrated in combination with muzzle flash detection by adding a scanning rectangular laser slit. The overall optics detection capability provided by continuous surveillance of a relatively large field of view looks promising. This type of multifunctional system may become an important tool for detecting snipers before and after a shot.

  17. A rigid and thermally stable all ceramic optical support bench assembly for the LSST Camera

    NASA Astrophysics Data System (ADS)

    Kroedel, Matthias; Langton, J. Brian; Wahl, Bill

    2017-09-01

    This paper will present the ceramic design, fabrication and metrology results, and the assembly plan of the LSST camera optical bench structure, which uses the unique manufacturing features of the HB-Cesic technology. The optical bench assembly consists of a rigid "Grid" supporting individual raft plates, which mount sensor assemblies by way of a rigid kinematic support system, to meet extremely stringent requirements for focal-plane planarity and stability.

  18. New Optics See More With Less

    NASA Technical Reports Server (NTRS)

    Nabors, Sammy

    2015-01-01

    NASA offers companies an optical system that provides a unique panoramic perspective with a single camera. NASA's Marshall Space Flight Center has developed a technology that combines a panoramic refracting optic (PRO) lens with a unique detection system to acquire a true 360-degree field of view. Although current imaging systems can acquire panoramic images, they must use up to five cameras to obtain the full field of view. MSFC's technology obtains its panoramic images from one vantage point.

  19. Search for GRB related prompt optical emission and other fast varying objects with ``Pi of the Sky'' detector

    NASA Astrophysics Data System (ADS)

    Ćwiok, M.; Dominik, W.; Małek, K.; Mankiewicz, L.; Mrowca-Ciułacz, J.; Nawrocki, K.; Piotrowski, L. W.; Sitek, P.; Sokołowski, M.; Wrochna, G.; Żarnecki, A. F.

    2007-06-01

    The experiment "Pi of the Sky" is designed to search for prompt optical emission from GRB sources. Thirty-two CCD cameras covering 2 steradians will monitor the sky continuously. The data will be analysed on-line in search of optical flashes. The prototype with 2 cameras, operated at Las Campanas (Chile) since 2004, has recognised several outbursts of flaring stars and has given limits for a few GRBs.

  20. A randomized comparison of laparoscopic, flexible endoscopic, and wired and wireless magnetic cameras on ex vivo and in vivo NOTES surgical performance.

    PubMed

    Chang, Victoria C; Tang, Shou-Jiang; Swain, C Paul; Bergs, Richard; Paramo, Juan; Hogg, Deborah C; Fernandez, Raul; Cadeddu, Jeffrey A; Scott, Daniel J

    2013-08-01

    The influence of endoscopic video camera (VC) image quality on surgical performance has not been studied. Flexible endoscopes are used as substitutes for laparoscopes in natural orifice translumenal endoscopic surgery (NOTES), but their optics are originally designed for intralumenal use. Manipulable wired or wireless independent VCs might offer advantages for NOTES but are still under development. To measure the optical characteristics of 4 VC systems and to compare their impact on the performance of surgical suturing tasks. VC systems included a laparoscope (Storz 10 mm), a flexible endoscope (Olympus GIF 160), and 2 prototype deployable cameras (magnetic anchoring and guidance system [MAGS] Camera and PillCam). In a randomized fashion, the 4 systems were evaluated regarding standardized optical characteristics and surgical manipulations of previously validated ex vivo (fundamentals of laparoscopic surgery model) and in vivo (live porcine Nissen model) tasks; objective metrics (time and errors/precision) and combined surgeon (n = 2) performance were recorded. Subtle differences were detected for color tests, and field of view was variable (65°-115°). Suitable resolution was detected up to 10 cm for the laparoscope and MAGS camera but only at closer distances for the endoscope and PillCam. Compared with the laparoscope, surgical suturing performances were modestly lower for the MAGS camera and significantly lower for the endoscope (ex vivo) and PillCam (ex vivo and in vivo). This study documented distinct differences in VC systems that may be used for NOTES in terms of both optical characteristics and surgical performance. Additional work is warranted to optimize cameras for NOTES. Deployable systems may be especially well suited for this purpose.

  1. 3D papillary image capturing by the stereo fundus camera system for clinical diagnosis on retina and optic nerve

    NASA Astrophysics Data System (ADS)

    Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2014-03-01

    Glaucoma is the second leading cause of blindness in the world, and this number tends to rise with the increasing life expectancy of the population. Glaucoma is an eye condition that damages the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In the majority of cases the damage to the optic nerve is irreversible, and it occurs due to increased intraocular pressure. A main challenge for diagnosis is detecting the disease early, because no symptoms are present in the initial stage; when it is detected, it is already at an advanced stage. Currently, the evaluation of the optic disc is made with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to acquire 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and nonmydriatic modes.

  2. Sensitivity, accuracy, and precision issues in opto-electronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.

    2002-06-01

    Sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs in the study and development of emerging technologies.

  3. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    DOE PAGES

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...

    2016-11-28

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  4. Measuring the spatial resolution of an optical system in an undergraduate optics laboratory

    NASA Astrophysics Data System (ADS)

    Leung, Calvin; Donnelly, T. D.

    2017-06-01

    Two methods of quantifying the spatial resolution of a camera are described, performed, and compared, with the objective of designing an imaging-system experiment for students in an undergraduate optics laboratory. With the goal of characterizing the resolution of a typical digital single-lens reflex (DSLR) camera, we motivate, introduce, and show agreement between traditional test-target contrast measurements and the technique of using Fourier analysis to obtain the modulation transfer function (MTF). The advantages and drawbacks of each method are compared. Finally, we explore the rich optical physics at work in the camera system by calculating the MTF as a function of wavelength and f-number. For example, we find that the Canon 40D demonstrates better spatial resolution at short wavelengths, in accordance with scalar diffraction theory, but is not diffraction-limited, being significantly affected by spherical aberration. The experiment and data analysis routines described here can be built and written in an undergraduate optics lab setting.
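
    The Fourier route to the MTF described above can be sketched as the normalized magnitude of the Fourier transform of a measured line spread function (LSF). The Gaussian LSF and pixel pitch below are illustrative assumptions suitable for a lab exercise, not data from the Canon 40D measurements:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF as the normalized magnitude of the Fourier transform of a
    line spread function sampled at pixel pitch dx; returns the
    spatial-frequency axis and the MTF values."""
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, spectrum / spectrum[0]

# Hypothetical Gaussian LSF: sigma = 2 pixels, unit pixel pitch
x = np.arange(-32, 32)
lsf = np.exp(-x ** 2 / (2 * 2.0 ** 2))
freqs, mtf = mtf_from_lsf(lsf, dx=1.0)
```

    For a Gaussian LSF of width sigma the MTF is itself Gaussian, exp(-2 pi^2 sigma^2 f^2), which makes this a convenient self-check before analyzing real edge or slit images.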

  5. Design of a frequency domain instrument for simultaneous optical tomography and magnetic resonance imaging of small animals

    NASA Astrophysics Data System (ADS)

    Masciotti, James M.; Rahim, Shaheed; Grover, Jarrett; Hielscher, Andreas H.

    2007-02-01

    We present a design for frequency domain instrument that allows for simultaneous gathering of magnetic resonance and diffuse optical tomographic imaging data. This small animal imaging system combines the high anatomical resolution of magnetic resonance imaging (MRI) with the high temporal resolution and physiological information provided by diffuse optical tomography (DOT). The DOT hardware comprises laser diodes and an intensified CCD camera, which are modulated up to 1 GHz by radio frequency (RF) signal generators. An optical imaging head is designed to fit inside the 4 cm inner diameter of a 9.4 T MRI system. Graded index fibers are used to transfer light between the optical hardware and the imaging head within the RF coil. Fiducial markers are integrated into the imaging head to allow the determination of the positions of the source and detector fibers on the MR images and to permit co-registration of MR and optical tomographic images. Detector fibers are arranged compactly and focused through a camera lens onto the photocathode of the intensified CCD camera.

  6. A preliminary optical design for the JANUS camera of ESA's space mission JUICE

    NASA Astrophysics Data System (ADS)

    Greggio, D.; Magrin, D.; Ragazzoni, R.; Munari, M.; Cremonese, G.; Bergomi, M.; Dima, M.; Farinato, J.; Marafatto, L.; Viotto, V.; Debei, S.; Della Corte, V.; Palumbo, P.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Schmitz, N.; Schipani, P.; Lara, L.

    2014-08-01

    JANUS (Jovis, Amorum ac Natorum Undique Scrutator) will be the onboard camera of the ESA JUICE satellite, dedicated to the study of Jupiter and its moons, in particular Ganymede and Europa. This optical channel will provide surface maps with a plate scale of 15 microrad/pixel, with both narrow- and broad-band filters in the spectral range between 0.35 and 1.05 micrometers, over a field of view of 1.72 × 1.29 degrees². The current optical design is a three-mirror anastigmat (TMA), with an on-axis pupil and off-axis field of view. The optical stop is located at the secondary mirror, providing an effective collecting area of 7854 mm² (100 mm entrance pupil diameter) and allowing simple internal baffling for first-order straylight rejection. The nominal optical performance is nearly diffraction-limited and assures a nominal MTF better than 63% over the whole field of view. We describe here the optical design of the camera adopted as the baseline, together with the trade-offs that led us to this solution.

  7. Quantitative optical metrology with CMOS cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.

    2004-08-01

    Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full field of view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence selection and applicability of an optical technique include the required sensitivity, accuracy, and precision that are necessary for a particular application. In this paper, sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.

  8. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    NASA Astrophysics Data System (ADS)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a Prime focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61mmx61mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The Prime focus camera design was thus adapted for use on either telescope, increasing the detector size to 92mmx92mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74 inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600mm diameter x 250mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10mm thick window and a tip/tilt mechanism for the detector into 100mm depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20mm has been designed in-house, using a sliding curtain mechanism to cover an aperture of 125mmx125mm, while the filter wheel has been replaced with 2 peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend using through-vacuum wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring. 
This has advantages in terms of space saving and improved performance. Measures are being taken to minimise the risk of damage during an instrument change. The detector is cooled by a Stirling cooler, which can be disconnected from the cooler unit without risking damage. Each telescope has a dedicated cooler unit into which the coolant hoses of WiNCam will plug. To overcome an inherent drawback of Stirling coolers, an active vibration damper is incorporated. During an instrument change, the autoguider remains on the telescope, and the filter magazines, shutter and detector package are removed as a single unit. The new alt-az telescope, manufactured by APM-Telescopes, is a 1-metre f/8 Ritchey-Chrétien with optics by LOMO. The field flattening optics were designed by Darragh O'Donoghue to have high UV throughput and uniform encircled energy over the 100mm diameter field. WiNCam will be mounted on one Nasmyth port, with the second port available for SHOC (Sutherland High-speed Optical Camera) and guest instrumentation. The telescope will be located in Sutherland, where an existing dome is being extensively renovated to accommodate it. Commissioning is planned for the second half of 2016.

  9. Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.

    PubMed

    Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas

    2016-03-01

    Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.

  10. Virtual-stereo fringe reflection technique for specular free-form surface testing

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has long been a major barrier to the manufacture and application of these optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome the aforementioned drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to achieve absolute profiles with the help of only a single biprism and one camera, meanwhile avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  11. Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huen, T.

    1987-07-01

    A solid state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross timing. The latter is achieved with exposure modulation marking onto the time tick marks. The purpose of using two time scales will be discussed. The design is based on a microcomputer, resulting in a compact and easy to use instrument. The light source is a small red light emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided-down 10 MHz system frequency. The light is guided by two small 100 micron diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere onto the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our Laboratory. The microcomputer control section is also being used in providing optical fiducials to mechanical rotor cameras.
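    The timing arithmetic is straightforward: the 100 MHz crystal divided down to 10 MHz gives a 0.1 µs tick, and an 8-bit step count (255 steps) sets the mark spacing. A hypothetical sketch of such a tick-mark schedule (the function and its interface are illustrative, not the instrument's actual firmware):

```python
SYSTEM_CLOCK_HZ = 100e6 / 10     # 100 MHz crystal divided down to 10 MHz
TICK_US = 1e6 / SYSTEM_CLOCK_HZ  # -> 0.1 microsecond timing resolution

def mark_times_us(step, n_marks):
    """Times (in us) of fiducial marks spaced 'step' ticks apart (1..255)."""
    if not 1 <= step <= 255:
        raise ValueError("step is an 8-bit count: 1..255")
    return [i * step * TICK_US for i in range(n_marks)]

# step=5 -> 0.5 us spacing: marks near 0.0, 0.5, 1.0, 1.5 us
times = mark_times_us(5, 4)
```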

  12. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    NASA Astrophysics Data System (ADS)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.

  13. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2006-06-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  14. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2004-09-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') UB/VRI optimized mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench beam combiner with visible and near-infrared imagers utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC/NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  15. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2008-07-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.

  16. Curved CCD detector devices and arrays for multispectral astrophysical applications and terrestrial stereo panoramic cameras

    NASA Astrophysics Data System (ADS)

    Swain, Pradyumna; Mark, David

    2004-09-01

    The emergence of curved CCD detectors as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral ultra-sensitive imaging with much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors provides a simplification of the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised in the presence of curved CCD applications, in conjunction with large format cameras and curved mosaics, including three dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. 
Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associative wide field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.

  17. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circles above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
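    The headline percentages are ordinary RMS error reductions. A sketch of how such a comparison is computed (the error samples below are made-up placeholders, not data from the paper):

```python
import math

def rms(errors):
    """Root-mean-square of a sequence of tracking-error samples."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def reduction_pct(baseline_errors, improved_errors):
    """Percent RMS reduction of the improved controller vs. the baseline."""
    return 100.0 * (1.0 - rms(improved_errors) / rms(baseline_errors))

# Placeholder tracking errors (e.g. pixels in the image plane):
baseline = [4.0, 5.0, 3.0, 6.0]
improved = [1.0, 1.2, 0.8, 1.5]
pct = reduction_pct(baseline, improved)  # roughly a 75% reduction
```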

  18. Review of adaptive optics OCT (AO-OCT): principles and applications for retinal imaging [Invited

    PubMed Central

    Pircher, Michael; Zawadzki, Robert J

    2017-01-01

    In vivo imaging of the human retina with a resolution that allows visualization of cellular structures has proven to be essential to broaden our knowledge about the physiology of this precious and very complex neural tissue that enables the first steps in vision. Many pathologic changes originate from functional and structural alterations on a cellular scale, long before any degradation in vision can be noted. Therefore, it is important to investigate these tissues with a sufficient level of detail in order to better understand associated disease development or the effects of therapeutic intervention. Optical retinal imaging modalities rely on the optical elements of the eye itself (mainly the cornea and lens) to produce retinal images and are therefore affected by the specific arrangement of these elements and possible imperfections in curvature. Thus, aberrations are introduced to the imaging light and image quality is degraded. To compensate for these aberrations, adaptive optics (AO), a technology initially developed in astronomy, has been utilized. However, the axial sectioning provided by retinal AO-based fundus cameras and scanning laser ophthalmoscope instruments is limited to tens of micrometers because of the rather small available numerical aperture of the eye. To overcome this limitation and thus achieve much higher axial sectioning on the order of 2-5 µm, AO has been combined with optical coherence tomography (OCT) into AO-OCT. This enabled for the first time in vivo volumetric retinal imaging with high isotropic resolution. This article summarizes the technical aspects of AO-OCT and provides an overview on its various implementations and some of its clinical applications. In addition, latest developments in the field, such as computational AO-OCT and wavefront sensorless AO-OCT, are covered. PMID:28663890
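    The 2-5 µm axial sectioning quoted above is consistent with the standard coherence-limited axial-resolution expression for a source with a Gaussian spectrum. A sketch of that formula (the 840 nm / 50 nm source values below are illustrative choices, not figures from the article):

```python
import math

def oct_axial_resolution_um(center_wl_nm, bandwidth_nm, n=1.0):
    """Coherence-limited OCT axial resolution, Gaussian-spectrum assumption:
    dz = (2 ln 2 / pi) * lambda0^2 / (n * delta_lambda)."""
    dz_nm = (2.0 * math.log(2.0) / math.pi) * center_wl_nm ** 2 / (n * bandwidth_nm)
    return dz_nm * 1e-3  # nm -> um

dz_air = oct_axial_resolution_um(840.0, 50.0)          # ~6.2 um in air
dz_tissue = oct_axial_resolution_um(840.0, 50.0, 1.38) # ~4.5 um in tissue (n assumed)
```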

  19. Review of adaptive optics OCT (AO-OCT): principles and applications for retinal imaging [Invited].

    PubMed

    Pircher, Michael; Zawadzki, Robert J

    2017-05-01

    In vivo imaging of the human retina with a resolution that allows visualization of cellular structures has proven to be essential to broaden our knowledge about the physiology of this precious and very complex neural tissue that enables the first steps in vision. Many pathologic changes originate from functional and structural alterations on a cellular scale, long before any degradation in vision can be noted. Therefore, it is important to investigate these tissues with a sufficient level of detail in order to better understand associated disease development or the effects of therapeutic intervention. Optical retinal imaging modalities rely on the optical elements of the eye itself (mainly the cornea and lens) to produce retinal images and are therefore affected by the specific arrangement of these elements and possible imperfections in curvature. Thus, aberrations are introduced to the imaging light and image quality is degraded. To compensate for these aberrations, adaptive optics (AO), a technology initially developed in astronomy, has been utilized. However, the axial sectioning provided by retinal AO-based fundus cameras and scanning laser ophthalmoscope instruments is limited to tens of micrometers because of the rather small available numerical aperture of the eye. To overcome this limitation and thus achieve much higher axial sectioning on the order of 2-5 µm, AO has been combined with optical coherence tomography (OCT) into AO-OCT. This enabled for the first time in vivo volumetric retinal imaging with high isotropic resolution. This article summarizes the technical aspects of AO-OCT and provides an overview on its various implementations and some of its clinical applications. In addition, latest developments in the field, such as computational AO-OCT and wavefront sensorless AO-OCT, are covered.

  20. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype which is developed based on a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally we evaluate the spectral reconstruction performance of a spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  1. Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera

    NASA Technical Reports Server (NTRS)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.

  2. SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) on the South African Astronomical Observatory's 74-inch telescope

    NASA Astrophysics Data System (ADS)

    Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.

    2016-08-01

    SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNIC.

  3. Focus adjustment method for CBERS 3 and 4 satellites Mux camera to be performed in air condition and its experimental verification for best performance in orbital vacuum condition

    NASA Astrophysics Data System (ADS)

    Scaduto, Lucimara C. N.; Malavolta, Alexandre T.; Modugno, Rodrigo G.; Vales, Luiz F.; Carvalho, Erica G.; Evangelista, Sérgio; Stefani, Mario A.; de Castro Neto, Jarbas C.

    2017-11-01

    The first Brazilian remote sensing multispectral camera (MUX) is currently under development at Opto Eletronica S.A. It consists of a four-spectral-band sensor covering a 450 nm to 890 nm wavelength range. This camera will provide images with a 20 m ground resolution at nadir. The MUX camera is part of the payload of the upcoming Sino-Brazilian satellites CBERS 3&4 (China-Brazil Earth Resource Satellite). The preliminary alignment between the optical system and the CCD sensor, which is located at the focal plane assembly, was obtained in air, in a clean room environment. A collimator was used for the performance evaluation of the camera. The preliminary performance evaluation of the optical channel was carried out by compensating the collimator focus position for changes in the test environment, since an air-to-vacuum transition defocuses this camera. It is therefore necessary to confirm that the camera alignment delivers its best performance under orbital vacuum conditions. For this reason, and as a further step in the development process, the MUX camera Qualification Model was tested and evaluated inside a thermo-vacuum chamber, submitted to an as-in-orbit vacuum environment. In this study, the influence of temperature fields was neglected. This paper reports on the performance evaluation and discusses the results for this camera when operating under the mentioned test conditions. The overall optical tests and results show that the "in air" adjustment method was suitable as a critical activity to guarantee that the equipment meets its design requirements.

  4. A surgical navigation system for non-contact diffuse optical tomography and intraoperative cone-beam CT

    NASA Astrophysics Data System (ADS)

    Daly, Michael J.; Muhanna, Nidal; Chan, Harley; Wilson, Brian C.; Irish, Jonathan C.; Jaffray, David A.

    2014-02-01

    A freehand, non-contact diffuse optical tomography (DOT) system has been developed for multimodal imaging with intraoperative cone-beam CT (CBCT) during minimally-invasive cancer surgery. The DOT system is configured for near-infrared fluorescence imaging with indocyanine green (ICG) using a collimated 780 nm laser diode and a near-infrared CCD camera (PCO Pixelfly USB). Depending on the intended surgical application, the camera is coupled to either a rigid 10 mm diameter endoscope (Karl Storz) or a 25 mm focal length lens (Edmund Optics). A prototype flat-panel CBCT C-Arm (Siemens Healthcare) acquires low-dose 3D images with sub-mm spatial resolution. A 3D mesh is extracted from CBCT for finite-element DOT implementation in NIRFAST (Dartmouth College), with the capability for soft/hard imaging priors (e.g., segmented lymph nodes). A stereoscopic optical camera (NDI Polaris) provides real-time 6D localization of reflective spheres mounted to the laser and camera. Camera calibration combined with tracking data is used to estimate intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) lens parameters. Source/detector boundary data is computed from the tracked laser/camera positions using radiometry models. Target registration errors (TRE) between real and projected boundary points are ~1-2 mm for typical acquisition geometries. Pre-clinical studies using tissue phantoms are presented to characterize 3D imaging performance. This translational research system is under investigation for clinical applications in head-and-neck surgery including oral cavity tumour resection, lymph node mapping, and free-flap perforator assessment.
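    The intrinsic/extrinsic calibration and TRE figures above follow the usual pinhole-camera bookkeeping: project a tracked 3D point through the estimated pose and intrinsics, then score the RMS distance to the measured point. A minimal sketch under that standard model (distortion omitted; all numbers below are illustrative):

```python
import math

def project(K, R, t, P):
    """Pinhole projection of 3D point P with intrinsics K and pose (R, t)."""
    X = [sum(R[i][j] * P[j] for j in range(3)) + t[i] for i in range(3)]
    return (K[0][0] * X[0] / X[2] + K[0][2],
            K[1][1] * X[1] / X[2] + K[1][2])

def target_registration_error(point_pairs):
    """RMS distance between measured and projected boundary points."""
    return math.sqrt(sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                         for a, b in point_pairs) / len(point_pairs))

# Illustrative intrinsics: 1000 px focal length, principal point (320, 240)
K = [[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]]
I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
uv = project(K, I, [0.0, 0.0, 0.0], [0.1, 0.0, 2.0])  # -> (370.0, 240.0)
```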

  5. Digital holographic interferometry applied to the investigation of ignition process.

    PubMed

    Pérez-Huerta, J S; Saucedo-Anaya, Tonatiuh; Moreno, I; Ariza-Flores, D; Saucedo-Orozco, B

    2017-06-12

    We use the digital holographic interferometry (DHI) technique to display the early ignition process of a butane-air mixture flame. Because such an event occurs in a short time (a few milliseconds), a fast CCD camera is used to study it. As more detail is required for monitoring the temporal evolution of the process, less light coming from the combustion is captured by the CCD camera, resulting in a deficient, underexposed image. The CCD's direct observation of the combustion process is therefore limited (down to 1000 frames per second). To overcome this drawback, we propose the use of DHI along with a high-power laser to supply enough light at higher capture speeds, thus improving the visualization of the phenomenon in its initial moments. An experimental optical setup based on DHI is used to obtain a large sequence of phase maps that allows us to observe two transitory stages in the ignition process: a first explosion which emits only faint visible light, and a second stage induced by variations in temperature as the flame emerges. While the last stage can be directly monitored by the CCD camera, the first stage is hardly detected by direct observation, and DHI clearly evidences this process. Furthermore, our method can be easily adapted for visualizing other types of fast processes.
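
    One common way to recover a phase map from an off-axis digital hologram is the Fourier-transform method: isolate the +1 diffraction order in the spectrum, recentre it, and take the argument of the inverse transform. A minimal sketch (illustrative window size and carrier location; not necessarily the authors' exact processing chain):

    ```python
    import numpy as np

    def phase_from_hologram(holo, carrier, half_width=16):
        # carrier: (row, col) index of the +1 order in the shifted 2D spectrum
        F = np.fft.fftshift(np.fft.fft2(holo))
        cy, cx = holo.shape[0] // 2, holo.shape[1] // 2
        fy, fx = carrier
        w = half_width
        side = np.zeros_like(F)
        # copy the sideband window to the spectrum centre, suppressing DC and -1 order
        side[cy - w:cy + w, cx - w:cx + w] = F[fy - w:fy + w, fx - w:fx + w]
        return np.angle(np.fft.ifft2(np.fft.ifftshift(side)))
    ```

    Subtracting two such phase maps (reference minus deformed state) gives the interferometric phase difference that encodes the refractive-index, and hence temperature, changes in the flame.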

  6. Recent technology and usage of plastic lenses in image taking objectives

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko

    2005-09-01

    Recently, plastic lenses produced by injection molding are widely used in image taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of exploiting the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there be no image point variation with temperature change, in spite of employing several plastic lenses. At the same time, the shrinking pixel size of solid-state image sensors now requires that lenses be assembled with high accuracy. To satisfy these requirements, we have developed a compact 16x zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially, so for mobile phone cameras the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, exploiting the fact that a plastic lens can be molded with mechanically functional features on its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. This camera module is therefore manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and the technical topics of the image taking objectives described above.

  7. Advanced imaging research and development at DARPA

    NASA Astrophysics Data System (ADS)

    Dhar, Nibir K.; Dat, Ravi

    2012-06-01

    Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPAs), microelectronics, and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology; CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slower due to market volume, many technological barriers in detector materials and optics, and fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology in both the visible and the infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, broadband and multiband detectors, and focal plane arrays.

  8. Optical touch sensing: practical bounds for design and performance

    NASA Astrophysics Data System (ADS)

    Bläßle, Alexander; Janbek, Bebart; Liu, Lifeng; Nakamura, Kanna; Nolan, Kimberly; Paraschiv, Victor

    2013-02-01

    Touch sensitive screens are used in many applications ranging in size from smartphones and tablets to display walls and collaborative surfaces. In this study, we consider optical touch sensing, a technology best suited for large-scale touch surfaces. Optical touch sensing utilizes cameras and light sources placed along the edge of the display. Within this framework, we first find a number of cameras sufficient to identify a convex polygon touching the screen, using a continuous light source on the boundary of a circular domain. We then find the number of cameras necessary to distinguish between two circular objects in a circular or rectangular domain. Finally, we use Matlab to simulate the polygonal mesh formed by distributing cameras and light sources on a circular domain. From this mesh, we compute the number of polygons and the maximum polygon area, which characterize the accuracy of the configuration. We close with a summary, conclusions, and pointers to possible future research directions.

  9. UTILIZATION OF THE WAVEFRONT SENSOR AND SHORT-EXPOSURE IMAGES FOR SIMULTANEOUS ESTIMATION OF QUASI-STATIC ABERRATION AND EXOPLANET INTENSITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frazin, Richard A., E-mail: rfrazin@umich.edu

    2013-04-10

    Heretofore, the literature on exoplanet detection with coronagraphic telescope systems has paid little attention to the information content of short exposures and methods of utilizing the measurements of adaptive optics wavefront sensors. This paper provides a framework for the incorporation of the wavefront sensor measurements in the context of observing modes in which the science camera takes millisecond exposures. In this formulation, the wavefront sensor measurements provide a means to jointly estimate the static speckle and the planetary signal. The ability to estimate planetary intensities in as little as a few seconds has the potential to greatly improve the efficiency of exoplanet search surveys. For simplicity, the mathematical development assumes a simple optical system with an idealized Lyot coronagraph. Unlike currently used methods, in which increasing the observation time beyond a certain threshold is useless, this method produces estimates whose error covariances decrease more quickly than inversely proportional to the observation time. This is due to the fact that the estimates of the quasi-static aberrations are informed by a new random (but approximately known) wavefront every millisecond. The method can be extended to include angular (due to diurnal field rotation) and spectral diversity. Numerical experiments are performed with wavefront data from the AEOS Adaptive Optics System sensing at 850 nm. These experiments assume a science camera wavelength λ of 1.1 μm, that the measured wavefronts are exact, and a Gaussian approximation of shot noise. The effects of detector read-out noise and other issues are left to future investigations. A number of static aberrations are introduced, including one with a spatial frequency corresponding exactly to the planet location, which was at a distance of ≈3λ/D from the star. Using only 4 s of simulated observation time, a planetary intensity of ≈1 photon ms⁻¹, and a stellar intensity of ≈10⁵ photons ms⁻¹ (contrast ratio 10⁵), the short-exposure estimation method recovers the amplitudes of the static aberrations with 1% accuracy and the planet brightness with 20% accuracy.
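
    The joint-estimation idea can be illustrated with a deliberately simplified linear toy model (illustrative only; the paper's formulation is a physical speckle model, not this linear one). Each millisecond frame constrains a static unknown and the planet intensity through per-frame coefficients that stand in for the known, randomly varying wavefront:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: frame t measures y_t = a_t * s + b_t * p + noise, where s is a
    # static-speckle unknown, p the planet intensity, and the known coefficients
    # (a_t, b_t) change every frame because the wavefront does.
    T = 4000                                   # 4 s of 1 ms exposures
    s_true, p_true = 5.0, 1.0
    A = rng.normal(1.0, 0.3, size=(T, 2))      # known per-frame coefficients
    y = A @ np.array([s_true, p_true]) + rng.normal(0.0, 0.5, size=T)

    # Joint least-squares estimate of (s, p) from all frames at once
    est, *_ = np.linalg.lstsq(A, y, rcond=None)
    ```

    Because every frame brings fresh, independent coefficients, the two unknowns remain jointly identifiable and the estimate keeps improving with observation time, which is the qualitative point of the short-exposure formulation.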

  10. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 s vs. 38 s per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
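
    For pure forward translation, every flow vector points away from the focus of expansion, so the FOE can be recovered as the least-squares intersection of the flow directions. A minimal sketch of this sub-step (an illustration of the geometric constraint, not the paper's full constrained egomotion solver):

    ```python
    import numpy as np

    def estimate_foe(points, flow):
        # For pure camera translation, the flow vector at point p_i lies along
        # (p_i - FOE), i.e. cross(flow_i, p_i - FOE) = 0 -- linear in FOE = (ex, ey)
        A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
        b = flow[:, 1] * points[:, 0] - flow[:, 0] * points[:, 1]
        foe, *_ = np.linalg.lstsq(A, b, rcond=None)
        return foe
    ```

    On noise-free radial flow the constraint is satisfied exactly; with real flow fields the least-squares residual absorbs measurement noise and small rotational components.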

  11. On-sky performance of the tip-tilt correction system for GLAS using an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Skvarč, Jure; Tulloch, Simon

    2008-07-01

    Adaptive optics systems based on laser guide stars still need a natural guide star (NGS) to correct for the image motion caused by the atmosphere and by imperfect telescope tracking. The ability to properly compensate for this motion using a faint NGS is critical to achieving large sky coverage. For the laser guide system (GLAS) on the 4.2 m William Herschel Telescope, we designed, and tested both in the laboratory and on-sky, a tip-tilt correction system based on a PC running Linux and an EMCCD camera. The control software allows selection of different centroiding algorithms and loop control methods, as well as of the control parameters. Parameter analysis was performed using tip-tilt-only correction before the laser commissioning, and the selected sets of parameters were then used during commissioning of the laser guide star system. We established the SNR of the guide star as a function of magnitude, depending on the image sampling frequency and on the dichroic used in the optical system, achieving a measurable improvement using full AO correction with NGSs down to the magnitude range R=16.5 to R=18. A minimum SNR of about 10 was established to be necessary for a useful correction. The system was used to produce 0.16-arcsecond images in the H band using a bright NGS and laser correction during GLAS commissioning runs.
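
    The simplest of the selectable centroiding options mentioned above is a background-thresholded centre of gravity, which can be sketched as follows (the threshold and spot parameters are illustrative, not the GLAS settings):

    ```python
    import numpy as np

    def centroid(img, threshold):
        # Background-thresholded centre of gravity; returns (x, y) in pixels.
        # Clipping at the threshold suppresses sky/readout background that
        # would otherwise bias the centroid toward the frame centre.
        w = np.clip(img - threshold, 0.0, None)
        ys, xs = np.indices(img.shape)
        total = w.sum()
        return (xs * w).sum() / total, (ys * w).sum() / total
    ```

    The tip-tilt loop would convert the centroid offset from a reference position into mirror demands each frame; weighted and correlation-based centroiders trade bias against noise differently at low SNR.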

  12. Ultimate turbulence experiment: simultaneous measurements of Cn2 near the ground using six devices and eight methods

    NASA Astrophysics Data System (ADS)

    Yatcheva, Lydia; Barros, Rui; Segel, Max; Sprung, Detlev; Sucher, Erik; Eisele, Christian; Gladysz, Szymon

    2015-10-01

    We have performed a series of experiments in order to simultaneously validate several devices and methods for measurement of the path-averaged refractive index structure constant (Cn2). The experiments were carried out along a horizontal urban path near the ground. Measuring turbulence in this layer is particularly important because of the prospect of using adaptive optics for free-space optical communications in an urban environment. On one hand, several commercial sensors were used: SLS20, a laser scintillometer from Scintec AG, BLS900, a large-aperture scintillometer, also from Scintec, and a 3D sonic anemometer from Thies GmbH. On the other hand, we measured turbulence strength with new approaches and devices developed in-house. Firstly, an LED array combined with a high-speed camera allowed for measurement of Cn2 from raw and differential image motion; secondly, a two-part system comprising a laser source, a Shack-Hartmann sensor, and a PSF camera recorded turbulent modulation transfer functions, Zernike variances, and angle-of-arrival structure functions, yielding three independent estimates of Cn2. We compare the values yielded simultaneously by the commercial and in-house developed devices and show very good agreement between the Cn2 values for all the methods. Limitations of each experimental method are also discussed.
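
    As an illustration of how differential image motion maps to Cn2 (a sketch using the standard Sarazin & Roddier longitudinal-variance formula and a plane-wave r0 integral with constant Cn2 along the path; the constants and geometry are textbook values, not this experiment's calibration):

    ```python
    import numpy as np

    def cn2_from_dimm(var_long, wavelength, D, d, path_length):
        # var_long: variance (rad^2) of the longitudinal differential angle of
        # arrival for two apertures of diameter D separated by baseline d.
        k = 2.0 * np.pi / wavelength
        # Invert: var = 2 lam^2 r0^(-5/3) (0.179 D^(-1/3) - 0.0968 d^(-1/3))
        r0_m53 = var_long / (2.0 * wavelength ** 2
                             * (0.179 * D ** (-1 / 3) - 0.0968 * d ** (-1 / 3)))
        # Plane wave, constant turbulence: r0^(-5/3) = 0.423 k^2 Cn2 L
        return r0_m53 / (0.423 * k ** 2 * path_length)
    ```

    In practice the raw-motion and differential-motion estimates differ in their sensitivity to telescope vibration, which is one reason the paper cross-validates several independent methods.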

  13. Holographic motion picture camera with Doppler shift compensation

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera is reported for producing three-dimensional images by employing an elliptical optical system. A motion compensator provided in one of the beam paths (the object or reference beam path) enables the camera to photograph faster-moving objects.

  14. Standard design for National Ignition Facility x-ray streak and framing cameras.

    PubMed

    Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S

    2010-10-01

    The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.

  15. Bifocal liquid lens zoom objective for mobile phone applications

    NASA Astrophysics Data System (ADS)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Craen, P.

    2007-02-01

    Miniaturized camera systems are an integral part of today's mobile phones, which now possess autofocus functionality. Commercially available solutions without moving parts have been developed using the electrowetting technology, in which the contact angle of a drop of a conductive or polar liquid placed on an insulating substrate can be influenced by an electric field. Besides the compensation of the axial image shift due to different object distances, mobile phones with zoom functionality are desired as a next evolutionary step. In classical mechanically compensated zoom lenses, two independently driven actuators combined with precision guides are needed, leading to a delicate, space-consuming, and expensive opto-mechanical setup. Liquid lens technology based on the electrowetting effect makes it possible to build adaptive lenses without moving parts, thus simplifying the mechanical setup. However, with the recent commercially available liquid lens products, a completely motionless and continuously adaptive zoom system with market-relevant optical performance is not feasible. This is due to the limited change in optical power the liquid lenses can provide and the dispersion of the materials used. As an intermediate step towards a continuously adjustable and motionless zoom lens, we propose a bifocal system sufficient for toggling between two effective focal lengths without any moving parts. The system has its mechanical counterpart in a bifocal zoom lens where only one lens group has to be moved. In a liquid lens bifocal zoom, two groups of adaptable liquid lenses are required for adjusting the effective focal length and keeping the image location constant. In order to overcome the difficulties in achromatizing the lens, we propose a sequential image acquisition algorithm in which the full color image is obtained from a sequence of monochrome images (red, green, blue), leading to a simplified optical setup.

  16. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
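
    The post-capture refocusing described in the tutorial is often implemented as shift-and-add over the sub-aperture views: each view is translated in proportion to its offset within the aperture and the results are averaged. A minimal integer-shift sketch (illustrative axis convention and parameterization; practical implementations interpolate fractional shifts):

    ```python
    import numpy as np

    def refocus(lightfield, alpha):
        # lightfield[u, v, y, x]: grid of sub-aperture views. Each view is
        # shifted in proportion to its offset from the central aperture
        # position; alpha selects the synthetic focal plane.
        U, V, H, W = lightfield.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - U // 2) * alpha))
                dx = int(round((v - V // 2) * alpha))
                out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)
    ```

    Scene points at the depth matched by alpha align across all views and come out sharp, while points at other depths are averaged over displaced copies and blur, which is exactly the refocusing trade-off the tutorial formalizes with ray- and Fourier-optics tools.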

  17. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions, calibrated at different depths, are used to restore images in a short time; this strategy can be applied generally to non-blind deconvolution methods to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is used to examine the reliability of the proposed method in simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained from CODE V can be used for processing real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtaining the point spread functions of single-lens cameras.
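
    For context, the non-blind deconvolution step that the depth-calibrated PSFs feed into can be sketched with a plain Wiener filter, a much simpler baseline than the paper's sparse-representation prior (the PSF and noise-to-signal ratio here are illustrative):

    ```python
    import numpy as np

    def wiener_deblur(image, psf, nsr=0.01):
        # Frequency-domain Wiener filter with a scalar noise-to-signal ratio;
        # psf is assumed centred and zero-padded to the image size.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(image)
        F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
        return np.real(np.fft.ifft2(F))
    ```

    In a depth-aware pipeline one would select (or blend) the calibrated PSF for each region's depth before deconvolving; sparse-representation priors replace the scalar regularizer here with a learned image model.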

  18. Preliminary Design of a Lightning Optical Camera and ThundEr (LOCATE) Sensor

    NASA Technical Reports Server (NTRS)

    Phanord, Dieudonne D.; Koshak, William J.; Rybski, Paul M.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The preliminary design of an optical/acoustical instrument is described for making highly accurate real-time determinations of the location of cloud-to-ground (CG) lightning. The instrument, named the Lightning Optical Camera And ThundEr (LOCATE) sensor, will also image the clear and cloud-obscured lightning channels produced by CGs and cloud flashes, and will record the transient optical waveforms produced by these discharges. The LOCATE sensor will consist of a full (360 degree) field-of-view optical camera for obtaining the CG channel image and azimuth, a sensitive thunder microphone for obtaining the CG range, and a fast photodiode system for time-resolving the lightning optical waveform. The optical waveform data will be used to discriminate CGs from cloud flashes. Together, the optical azimuth and thunder range are used to locate CGs, and it is anticipated that a network of LOCATE sensors would determine CG source locations to well within 100 meters. All of this would be accomplished at a relatively low cost compared to present RF lightning location technologies, though of course the range detection is limited and will be quantified in the future. The LOCATE sensor technology would have practical applications for electric power utility companies, government (e.g., NASA Kennedy Space Center lightning safety and warning), golf resort lightning safety, telecommunications, and other industries.
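
    The azimuth-plus-range localization can be sketched in a few lines (an illustration of the geometry only: a fixed speed of sound is assumed, whereas a deployed sensor would correct for temperature and humidity):

    ```python
    import math

    def locate_cg(azimuth_deg, thunder_delay_s, speed_of_sound=343.0):
        # Light arrival is effectively instantaneous, so the optical-to-thunder
        # delay gives range; the camera gives bearing. Returns (east, north)
        # offsets in metres from the sensor. 343 m/s is dry air at 20 C.
        r = speed_of_sound * thunder_delay_s
        az = math.radians(azimuth_deg)
        return r * math.sin(az), r * math.cos(az)
    ```

    A 2 s delay due east, for example, places the strike 686 m from the sensor; combining bearings and ranges from several networked sensors is what tightens the fix to well within 100 m.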

  19. Mechanical Design of the LSST Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordby, Martin; Bowden, Gordon; Foss, Mike

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  20. Optical design of the SuMIRe/PFS spectrograph

    NASA Astrophysics Data System (ADS)

    Pascal, Sandrine; Vives, Sébastien; Barkhouser, Robert; Gunn, James E.

    2014-07-01

    The SuMIRe Prime Focus Spectrograph (PFS), developed for the 8-m class SUBARU telescope, will consist of four identical spectrographs, each receiving 600 fibers from a 2394-fiber robotic positioner at the telescope prime focus. Each spectrograph includes three spectral channels to cover the wavelength range 0.38-1.26 μm with a resolving power between 2000 and 4000. A medium resolution mode is also implemented to reach a resolving power of 5000 at 0.8 μm. Each spectrograph is made of four optical units: the entrance unit, which produces three corrected collimated beams, and three camera units (one per spectral channel: "blue", "red", and "NIR"). The beam is split using two large dichroics, and in each arm the light is dispersed by large VPH gratings (about 280x280 mm). The proposed optical design was optimized to achieve the requested image quality while simplifying the manufacturing of the whole optical system. The camera design consists of an innovative Schmidt camera observing a large field of view (10 degrees) with a very fast beam (F/1.09). To achieve such performance, the classical spherical mirror is replaced by a catadioptric mirror (i.e., a meniscus lens with a reflective surface on the rear side of the glass, like a Mangin mirror). This article focuses on the optical architecture of the PFS spectrograph and the performance achieved. We will first describe the global optical design of the spectrograph and then focus on the Mangin-Schmidt camera design. The analysis of the optical performance and the results obtained are presented in the last section.

  1. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    NASA Astrophysics Data System (ADS)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  2. A near-Infrared SETI Experiment: Alignment and Astrometric precision

    NASA Astrophysics Data System (ADS)

    Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan

    2016-06-01

    Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument, aiming to search for fast nanosecond laser pulses, has been commissioned on the Nickel 1-m telescope at Lick Observatory. The NIROSETI instrument uses an optical guide camera, a SONY ICX694 CCD from PointGrey, to align selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APDs), each with a field of view of 2.5"x2.5". These APD detectors operate at very high bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, the static optical distortion solution, and the relative orientation with respect to the APD detectors. We determined the guide camera plate scale to be 55.9 ± 2.7 milliarcseconds/pixel and the magnitude limit to be 18.15 mag (+1.07/-0.58) in V band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.

  3. Optical vortices with starlight

    NASA Astrophysics Data System (ADS)

    Anzolin, G.; Tamburini, F.; Bianchini, A.; Umbriaco, G.; Barbieri, C.

    2008-09-01

    Aims: In this paper we present our first observations at the Asiago 122 cm telescope of ℓ = 1 optical vortices generated with starlight beams. Methods: We used a fork-hologram blazed at the first diffraction order as a phase-modifying device. The multiple system Rasalgethi (α Herculis) in white light and the single star Arcturus (α Bootis) through a 300 Å bandpass were observed using a fast CCD camera. In the first case we could adopt the Lucky Imaging approach to partially correct for seeing effects. Results: For both stars, the optical vortices could be clearly detected above the smearing caused by the mediocre seeing conditions. The profiles of the optical vortices produced by the beams of the two main components of the α Her system are consistent with numerically simulated on-axis and off-axis optical vortices. The optical vortices produced by α Boo can also be reproduced by numerical simulations. Our experiments confirm that the ratio between the intensity peaks of an optical vortex can be extremely sensitive to off-axis displacements of the beam. Conclusions: Our results give insights for future astronomical applications of optical vortices both for space telescopes and for ground-based telescopes with good seeing conditions and adaptive optics devices. The properties of optical vortices can be used to perform high-precision astrometry and tip/tilt correction of the isoplanatic field. We are now designing an ℓ = 2 optical vortex coronagraph around a continuous spiral phase plate. We also point out that optical vortices could find extremely interesting applications at infrared and radio wavelengths.
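
    The fork hologram used as the phase-modifying device can be sketched as a binary grating with an ℓ-fold phase dislocation at its centre (an unblazed illustration; the instrument used a hologram blazed for the first order, and the grating period here is arbitrary):

    ```python
    import numpy as np

    def fork_hologram(n, ell, period):
        # Binary fork grating: a linear carrier plus an ell-fold azimuthal
        # phase term; it diffracts an l = ell vortex into the first order.
        y, x = np.indices((n, n)) - n // 2
        theta = np.arctan2(y, x)
        phase = 2.0 * np.pi * x / period + ell * theta
        return (np.cos(phase) > 0).astype(float)
    ```

    For ell = 0 the pattern reduces to a plain linear grating; the extra "fork" fringe that appears for ell = 1 is what imprints the helical phase on the diffracted starlight.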

  4. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from their cases, so the visualizing equipment must be protected from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken by the camera with the lens directly coupled to the camera head. This confirms that the system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  5. Fiber optic TV direct

    NASA Technical Reports Server (NTRS)

    Kassak, John E.

    1991-01-01

    The objective of the operational television (OTV) technology was to develop a multiple-camera system (up to 256 cameras) for NASA Kennedy installations in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that the benefits (such as improved video performance, immunity from electromagnetic interference and radio frequency interference, elimination of repeater stations, and more system configuration flexibility) can be realized by applying the proven fiber-optic transmission concept. The control system will marry the lens, pan and tilt, and camera control functions into a modular, Local Area Network (LAN) based control network. Such a system does not exist commercially at present, since the Television Broadcast Industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.

  6. Metrological analysis of the human foot: 3D multisensor exploration

    NASA Astrophysics Data System (ADS)

    Muñoz Potosi, A.; Meneses Fonseca, J.; León Téllez, J.

    2011-08-01

    In the podiatry field, many foot dysfunctions are generated mainly by congenital malformations, accidents, or misuse of footwear. For the treatment or prevention of foot disorders, the podiatrist prescribes a prosthesis or specifically adapted footwear according to the real dimensions of the foot. It is therefore necessary to acquire 3D information of the foot over 360 degrees of observation. As a solution, an optical system for three-dimensional reconstruction based on the principle of laser triangulation was developed and implemented. The system consists of an illumination unit that projects a laser plane onto the foot surface, an acquisition unit with 4 CCD cameras placed around the axial foot axis, an axial moving unit that displaces the illumination and acquisition units along the axial axis, and a processing and exploration unit. The exploration software allows distances to be measured on the three-dimensional image, taking the topography of the foot into account. The optical system was tested and its metrological performance was evaluated under experimental conditions. The system was developed to acquire 3D information in order to design and make more appropriate footwear.

  7. The Raptor Real-Time Processing Architecture

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Starr, D.; Wozniak, P.; Borozdin, K.

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.

  8. Raptor -- Mining the Sky in Real Time

    NASA Astrophysics Data System (ADS)

    Galassi, M.; Borozdin, K.; Casperson, D.; McGowan, K.; Starr, D.; White, R.; Wozniak, P.; Wren, J.

    2004-06-01

    The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor ``fovea'' cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback...) is implemented with a ``component'' approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as ``open source'' software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
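    The component approach described above can be pictured as a chain of stages, each consuming and producing memory-resident images and source lists. The sketch below is purely illustrative (the stage names, the `Frame` structure, and the thresholds are invented for this example, not taken from the Raptor code):

```python
"""Illustrative sketch of a component-style pipeline operating on
memory-resident images; all names and thresholds are hypothetical."""
from dataclasses import dataclass, field

@dataclass
class Frame:
    pixels: list                                  # memory-resident image data
    sources: list = field(default_factory=list)   # extracted source list
    meta: dict = field(default_factory=dict)

def calibrate(frame):
    frame.meta["calibrated"] = True               # basic calibration step
    return frame

def extract_sources(frame):
    # stand-in for SExtractor-style source extraction
    frame.sources = [(x, v) for x, v in enumerate(frame.pixels) if v > 10]
    return frame

def flag_transients(frame):
    frame.meta["transients"] = [s for s in frame.sources if s[1] > 100]
    return frame

def run_pipeline(frame, stages):
    # because components compose freely, the same stages can run
    # monolithically or be distributed across processes
    for stage in stages:
        frame = stage(frame)
    return frame

frame = run_pipeline(Frame(pixels=[0, 5, 42, 300]),
                     [calibrate, extract_sources, flag_transients])
print(frame.meta["transients"])   # -> [(3, 300)]
```

    The design point is that each stage has the same in/out contract, which is what lets a pipeline like this run either in one process or spread across machines.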

  9. Large-format platinum silicide microwave kinetic inductance detectors for optical to near-IR astronomy.

    PubMed

    Szypryt, P; Meeker, S R; Coiffard, G; Fruitwala, N; Bumble, B; Ulbricht, G; Walter, A B; Daal, M; Bockstiegel, C; Collura, G; Zobrist, N; Lipartito, I; Mazin, B A

    2017-10-16

    We have fabricated and characterized 10,000 and 20,440 pixel Microwave Kinetic Inductance Detector (MKID) arrays for the Dark-speckle Near-IR Energy-resolved Superconducting Spectrophotometer (DARKNESS) and the MKID Exoplanet Camera (MEC). These instruments are designed to sit behind adaptive optics systems with the goal of directly imaging exoplanets in a 800-1400 nm band. Previous large optical and near-IR MKID arrays were fabricated using substoichiometric titanium nitride (TiN) on a silicon substrate. These arrays, however, suffered from severe non-uniformities in the TiN critical temperature, causing resonances to shift away from their designed values and lowering usable detector yield. We have begun fabricating DARKNESS and MEC arrays using platinum silicide (PtSi) on sapphire instead of TiN. Not only do these arrays have much higher uniformity than the TiN arrays, resulting in higher pixel yields, they have demonstrated better spectral resolution than TiN MKIDs of similar design. PtSi MKIDs also do not display the hot pixel effects seen when illuminating TiN on silicon MKIDs with photons with wavelengths shorter than 1 µm.

  10. Optical stereo video signal processor

    NASA Technical Reports Server (NTRS)

    Craig, G. D. (Inventor)

    1985-01-01

    An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
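    A digital analogue of the optical correlator may clarify the principle: the offset of the cross-correlation peak between the two camera images is the stereo disparity, and range follows from the usual relation Z = fB/d. The focal length and baseline below are assumed values for illustration, not from the patent:

```python
# Digital sketch of the 2D cross-correlation ranging principle.
import numpy as np

rng = np.random.default_rng(0)
left = rng.random((64, 64))
shift = 7                                  # true horizontal disparity (pixels)
right = np.roll(left, shift, axis=1)       # simulated second-camera view

# circular cross-correlation via FFT; the peak location is the disparity
corr = np.fft.ifft2(np.fft.fft2(left).conj() * np.fft.fft2(right)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print(dx)   # -> 7

f_px, baseline_m = 800.0, 0.12             # assumed focal length and baseline
range_m = f_px * baseline_m / dx           # Z = f * B / d
```

    In the patented device this correlation is formed optically by the light valves and lens, at the speed of light, rather than computed.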

  11. Adaptive DOF for plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Oberdörster, Alexander; Lensch, Hendrik P. A.

    2013-03-01

    Plenoptic cameras promise to provide arbitrary re-focusing through a scene after the capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique of recording light fields with an adaptive depth of focus. Between multiple exposures (or multiple recordings of the light field) the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus is chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.

  12. A randomized comparison of laparoscopic, magnetically anchored, and flexible endoscopic cameras in performance and workload between laparoscopic and single-incision surgery.

    PubMed

    Arain, Nabeel A; Cadeddu, Jeffrey A; Best, Sara L; Roshek, Thomas; Chang, Victoria; Hogg, Deborah C; Bergs, Richard; Fernandez, Raul; Webb, Erin M; Scott, Daniel J

    2012-04-01

    This study aimed to evaluate the surgeon performance and workload of a next-generation magnetically anchored camera compared with laparoscopic and flexible endoscopic imaging systems for laparoscopic and single-site laparoscopy (SSL) settings. The cameras included a 5-mm 30° laparoscope (LAP), a magnetically anchored (MAGS) camera, and a flexible endoscope (ENDO). The three camera systems were evaluated using standardized optical characteristic tests. Each system was used in random order for visualization during performance of a standardized suturing task by four surgeons. Each participant performed three to five consecutive repetitions as a surgeon and also served as a camera driver for other surgeons. Ex vivo testing was conducted in a laparoscopic multiport and SSL layout using a box trainer. In vivo testing was performed only in the multiport configuration and used a previously validated live porcine Nissen model. Optical testing showed superior resolution for MAGS at 5 and 10 cm compared with LAP or ENDO. The field of view ranged from 39 to 99°. The depth of focus was almost three times greater for MAGS (6-270 mm) than for LAP (2-88 mm) or ENDO (1-93 mm). Both ex vivo and in vivo multiport combined surgeon performance was significantly better for LAP than for ENDO, but no significant differences were detected for MAGS. For multiport testing, workload ratings were significantly less ex vivo for LAP and MAGS than for ENDO and less in vivo for LAP than for MAGS or ENDO. For ex vivo SSL, no significant performance differences were detected, but camera drivers rated the workload significantly less for MAGS than for LAP or ENDO. The data suggest that the improved imaging element of the next-generation MAGS camera has optical and performance characteristics that meet or exceed those of the LAP or ENDO systems and that the MAGS camera may be especially useful for SSL. Further refinements of the MAGS camera are encouraged.

  13. The GCT camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  14. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
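    The refraction effect that breaks the pinhole model can be seen with one application of Snell's law per interface. The sketch below (refractive indices are textbook values assumed for an acrylic port, not from the paper) shows how a flat port narrows the in-water field of view, whereas rays through the center of a concentric dome port hit the glass at normal incidence and are not bent:

```python
# Snell's-law sketch of flat-port FOV narrowing underwater (assumed indices).
import math

N_AIR, N_GLASS, N_WATER = 1.000, 1.490, 1.333   # acrylic port assumed

def refract(theta_deg, n1, n2):
    """Refracted angle (degrees) across one interface, via Snell's law."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta_deg)) / n2))

# Flat port: a ray leaving the lens at 40 deg in air crosses air->glass->water
theta_glass = refract(40.0, N_AIR, N_GLASS)
theta_water = refract(theta_glass, N_GLASS, N_WATER)
print(round(theta_water, 1))   # -> 28.8

# An in-air half-FOV of 40 deg thus shrinks to ~28.8 deg behind a flat port,
# while a dome port concentric with the pupil leaves all ray angles unchanged.
```

    Ray tracing a full housing, as the paper's FOV simulator does, amounts to applying this interface computation along each individual ray for the actual port geometry.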

  15. Concave Surround Optics for Rapid Multi-View Imaging

    DTIC Science & Technology

    2006-11-01

    In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Despite their flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically ... hard to assemble and calibrate ... Our system ... thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...

  16. Time-resolved optical measurements of the post-detonation combustion of aluminized explosives

    NASA Astrophysics Data System (ADS)

    Carney, Joel R.; Miller, J. Scott; Gump, Jared C.; Pangilinan, G. I.

    2006-06-01

    The dynamic observation and characterization of light emission following the detonation and subsequent combustion of an aluminized explosive is described. The temporal, spatial, and spectral specificity of the light emission are achieved using a combination of optical diagnostics. Aluminum and aluminum monoxide emission peaks are monitored as a function of time and space using streak-camera-based spectroscopy in a number of light collection configurations. Peak areas of selected aluminum-containing species are tracked as a function of time to ascertain the relative kinetics (growth and decay of emitting species) during the energetic event. At the chosen streak camera sensitivity, aluminum emission is observed for 10 μs following the detonation of a confined 20 g charge of PBXN-113, while aluminum monoxide emission persists longer than 20 μs. A broadband optical emission gauge, shock velocity gauge, and fast digital framing camera are used as supplemental optical diagnostics. In-line, collimated detection is determined to be the optimum light collection geometry because it is independent of distance between the optics and the explosive charge. The chosen optical configuration also promotes a constant cylindrical collection volume that should facilitate future modeling efforts.
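    The peak-area-versus-time tracking described above reduces, numerically, to integrating a baseline-subtracted emission band in each time slice of the streak record. The sketch below uses entirely synthetic data (a Gaussian band with an assumed decay constant; not the paper's measurements) to show the bookkeeping:

```python
# Synthetic sketch of tracking an emission band's integrated area over time.
import numpy as np

wavelength = np.linspace(480.0, 500.0, 200)   # nm, around an assumed AlO band
times_us = np.array([1.0, 5.0, 10.0, 20.0])   # microseconds after detonation

def spectrum(t):
    # synthetic Gaussian emission peak whose amplitude decays with time
    return np.exp(-t / 8.0) * np.exp(-((wavelength - 489.0) / 1.5) ** 2)

# peak area = integral of the band over wavelength, per time slice
dlam = wavelength[1] - wavelength[0]
areas = [spectrum(t).sum() * dlam for t in times_us]
assert all(a > b for a, b in zip(areas, areas[1:]))   # emission decays in time
```

    Plotting such areas against time is what yields the growth/decay kinetics of the emitting species in the actual experiment.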

  17. Center for Adaptive Optics | Home

    Science.gov Websites

    A University of California Science and Technology Center. The mission of the UC Center for Adaptive Optics is to develop, apply, and disseminate adaptive optics science, which corrects distortions in optical systems. Announcements: the CfAO Summer School on Adaptive Optics 2018 will be held ...

  18. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. 
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
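    The near-field distances quoted above follow from the standard Fraunhofer-distance formula d_F = 2D²/λ; the wavelength below (550 nm) and the 3.6 m AEOS aperture are assumptions used to check consistency with the quoted figures:

```python
# Fraunhofer (near-field) distance check for the apertures quoted above,
# using d_F = 2 * D**2 / lambda with an assumed 550 nm visible wavelength.
def fraunhofer_km(aperture_m, wavelength_m=550e-9):
    return 2 * aperture_m**2 / wavelength_m / 1000.0

print(round(fraunhofer_km(1.0)))   # -> 3636   (~3,500 km, as quoted)
print(round(fraunhofer_km(3.6)))   # -> 47127  (>46,000 km, as quoted)
```

    Both values agree with the abstract's figures for a 1-m telescope and for AEOS, which is why a 1-m-class aperture is the stated threshold for 3D imaging of LEO objects.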

  19. PRISM Spectrograph Optical Design

    NASA Technical Reports Server (NTRS)

    Chipman, Russell A.

    1995-01-01

    The objective of this contract is to explore optical design concepts for the PRISM spectrograph and produce a preliminary optical design. An exciting optical configuration has been developed which will allow both wavelength bands to be imaged onto the same detector array. At present the optical design is only partially complete because PRISM will require a fairly elaborate optical system to meet its specification for throughput (area*solid angle). The most complex part of the design, the spectrograph camera, is complete, providing proof of principle that a feasible design is attainable. This camera requires 3 aspheric mirrors to fit inside the 20x60 cm cross-section package. A complete design with reduced throughput (1/9th) has been prepared. The design documents the optical configuration concept. A suitable dispersing prism material, CdTe, has been identified for the prism spectrograph, after a comparison of many materials.

  20. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge coupled device. The camera consists of an X-ray-sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  1. Optical design and development of a snapshot light-field laryngoscope

    NASA Astrophysics Data System (ADS)

    Zhu, Shuaishuai; Jin, Peng; Liang, Rongguang; Gao, Liang

    2018-02-01

    The convergence of recent advances in optical fabrication and digital processing yields a new generation of imaging technology: light-field (LF) cameras, which bridge the realms of applied mathematics, optics, and high-performance computing. Herein, for the first time, we introduce the paradigm of LF imaging into laryngoscopy. The resultant probe can image the three-dimensional shape of vocal folds within a single camera exposure. Furthermore, to improve the spatial resolution, we developed an image fusion algorithm, providing a simple solution to a long-standing problem in LF imaging.

  2. A novel optical system design of light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-01-01

    The structure of main lens - Micro Lens Array (MLA) - imaging sensor is usually adopted in the optical system of a light field camera, and the MLA is the most important part of the optical system, collecting and recording the amplitude and phase information of the field light. In this paper, a novel optical system structure is proposed. The novel optical system is based on the 4f optical structure, and a micro-aperture array (MAA) is used instead of the MLA to acquire the 4D light-field information. We analyze the principle by which the novel optical system acquires the light-field information. A simple MAA optical system based on a line grating is also designed with ZEMAX software. The novel optical system is simulated with this line grating system, and multiple images are obtained in the image plane. The imaging quality of the novel optical system is analyzed.

  3. Cheap streak camera based on the LD-S-10 intensifier tube

    NASA Astrophysics Data System (ADS)

    Dashevsky, Boris E.; Krutik, Mikhail I.; Surovegin, Alexander L.

    1992-01-01

    Basic properties of a new streak camera and its test results are reported. To intensify images on its screen, we employed modular G1 tubes, the LD-A-1.0 and LD-A-0.33, enabling magnification of 1.0 and 0.33, respectively. If necessary, the LD-A-0.33 tube may be substituted by any other image intensifier of the LDA series, the choice to be determined by the size of the CCD matrix with fiber-optical windows. The reported camera employs a 12.5-mm-long CCD strip consisting of 1024 pixels, each 12 X 500 micrometers in size. Registered radiation was imaged on a 5 X 0.04 mm slit diaphragm tightly connected with the LD-S-10 fiber-optical input window. Electrons escaping the cathode are accelerated in a 5 kV electric field and focused onto a phosphor screen covering a fiber-optical plate as they travel between deflection plates. Sensitivity of the latter was 18 V/mm, which implies that the total deflecting voltage was 720 V per 40 mm of the screen surface, since reversed-polarity scan pulses +360 V and -360 V were applied across the deflection plates. The streak camera provides full scan times over the screen of 15, 30, 50, 100, 250, and 500 ns. Timing of the electrically or optically driven camera was done using a 10 ns step-controlled-delay (0-500 ns) circuit.
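    The numbers in the abstract fix the sweep geometry, and a short calculation shows what they imply for time resolution (the per-pixel figure below ignores the intensifier magnification and slit width, so it is an idealized lower bound, not a quoted specification):

```python
# Sweep arithmetic implied by the abstract's figures.
sensitivity_v_per_mm = 18.0
total_deflection_v = 360.0 + 360.0           # reversed-polarity scan pulses
screen_mm = total_deflection_v / sensitivity_v_per_mm
print(screen_mm)                             # -> 40.0 (matches the 40 mm quoted)

scan_ns = 15.0                               # fastest full-screen scan time
pixel_mm = 0.012                             # 12 um CCD pixel pitch, assumed 1:1
writing_speed = screen_mm / scan_ns          # mm of screen swept per ns
time_per_pixel_ps = pixel_mm / writing_speed * 1000.0
print(round(time_per_pixel_ps, 1))           # -> 4.5
```

    So on the fastest 15 ns sweep, each 12 μm pixel spans roughly 4.5 ps of the event, before accounting for the slit image width and tube magnification.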

  4. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559

  5. Center for Adaptive Optics | Publications

    Science.gov Websites


  6. Computational imaging through a fiber-optic bundle

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Dumas, John Paul; Pierce, Mark C.; Bajwa, Waheed U.

    2017-05-01

    Compressive sensing (CS) has proven to be a viable method for reconstructing high-resolution signals using low-resolution measurements. Integrating CS principles into an optical system allows for higher-resolution imaging using lower-resolution sensor arrays. In contrast to prior works on CS-based imaging, our focus in this paper is on imaging through fiber-optic bundles, in which manufacturing constraints limit individual fiber spacing to around 2 μm. This limitation essentially renders fiber-optic bundles as low-resolution sensors with relatively few resolvable points per unit area. These fiber bundles are often used in minimally invasive medical instruments for viewing tissue at macro and microscopic levels. While the compact nature and flexibility of fiber bundles allow for excellent tissue access in vivo, imaging through fiber bundles does not provide the fine details of tissue features that are demanded in some medical situations. Our hypothesis is that adapting existing CS principles to fiber bundle-based optical systems will overcome the resolution limitation inherent in fiber-bundle imaging. In a previous paper we examined the practical challenges involved in implementing a highly parallel version of the single-pixel camera while focusing on synthetic objects. This paper extends the same architecture to fiber-bundle imaging under incoherent illumination and addresses some practical issues associated with imaging physical objects. Additionally, we model the optical non-idealities in the system to reduce modeling errors.
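    The CS principle at work here can be shown in miniature: a sparse object sensed through far fewer measurements than unknowns can still be recovered by a sparsity-promoting solver. The sketch below uses a generic random sensing matrix and ISTA (iterative soft-thresholding); the problem sizes and the solver choice are illustrative assumptions, not the authors' reconstruction pipeline:

```python
# Minimal CS recovery sketch: sparse signal, few measurements, ISTA solver.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 50, 4                      # unknowns, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true                            # "low-resolution" measurements

# ISTA: gradient step on ||y - Ax||^2, then soft-thresholding for sparsity
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.01                                # sparsity weight (assumed)
x = np.zeros(n)
for _ in range(500):
    x = x + A.T @ (y - A @ x) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

assert np.linalg.norm(y - A @ x) < 0.1 * np.linalg.norm(y)
```

    In the fiber-bundle setting, each row of the sensing matrix corresponds to one coded measurement collected through the bundle's limited set of resolvable points.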

  7. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images: generally speaking, they are points of high curvature that lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN algorithm, we propose an approach that selects the gray-difference threshold adaptively, which makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results show the method to be feasible.
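    A much-simplified sketch of the idea follows: instead of a fixed SUSAN gray-difference threshold, derive it from the image's own gray range, so corners are found regardless of contrast. The half-range rule and the 3x3 neighborhood are assumptions for this toy example, not the paper's actual scheme:

```python
# Simplified SUSAN-style corner test with an adaptively chosen threshold.
import numpy as np

def susan_corners(img, radius=1):
    # adaptive threshold: half the gray range actually present in the image,
    # so the same code works for low- and high-contrast chessboards
    t = (img.max() - img.min()) / 2.0
    h, w = img.shape
    corners = []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y-radius:y+radius+1, x-radius:x+radius+1]
            usan = np.sum(np.abs(patch - img[y, x]) < t)   # similar pixels
            if usan <= patch.size // 2:   # small USAN area => corner-like
                corners.append((y, x))
    return corners

# chessboard-style test image: one bright quadrant on a dark background
img = np.zeros((8, 8))
img[4:, 4:] = 200.0
print((4, 4) in susan_corners(img))   # -> True
```

    The full SUSAN detector uses a circular mask and a smooth similarity function, but the adaptive-threshold step slots in at the same place `t` is computed here.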

  8. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are broadly used in electro-optical tracking, electro-optical measurement, fire control, and electro-optical countermeasures, but the output timing sequence of most IR cameras applied in practice is complex, and the timing documents supplied by the manufacturer are not detailed. Because downstream continuous image transmission and image processing systems need the detailed timing of the IR camera, a sequence measurement system for IR cameras was designed, and a detailed timing measurement of the applied IR camera was carried out. FPGA programming combined with online observation using the SignalTap tool is applied in the measurement system; the precise timing of the IR camera's output signal is obtained, and detailed documentation is supplied to the continuous image transmission system, the image processing system, and so on. The sequence measurement system consists of a CameraLink input interface, an LVDS input interface, an FPGA, and a CameraLink output interface, of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted, and because image processing and image memory cards usually use CameraLink as their input interface, the output of the measurement system is likewise designed as a CameraLink interface; the system thus performs interface conversion for some cameras in addition to sequence measurement. Inside the FPGA, the sequence measurement program, the pixel clock modification, the SignalTap file configuration, and the SignalTap online observation are integrated to realize precise measurement of the IR camera.
The sequence measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and the number of pixels in one line, and also measures the line offset and row offset of the image. Aimed at the complex timing of the IR camera's output signal, the system accurately measures the timing of the applied camera, supplies a detailed timing document to downstream systems such as the image processing and image transmission systems, and gives concrete parameters for fval, lval, pixclk, line offset, and row offset. Experiments show that the sequence measurement system obtains precise results and works stably, laying a foundation for the downstream systems.
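    What the Verilog counters measure can be illustrated in miniature: count lval rising edges within one fval period to get lines per frame, and count pixel clocks while lval is high to get pixels per line. The frame geometry below is a hypothetical toy, not a real camera's timing:

```python
# Toy simulation of counting frame/line structure from sync signals.
LINES, PIXELS, LINE_BLANK = 4, 6, 2       # hypothetical tiny frame

def frame_signals():
    """Yield (fval, lval) once per pixel clock for one frame."""
    for _ in range(LINES):
        for _ in range(PIXELS):
            yield 1, 1                    # active line: fval and lval high
        for _ in range(LINE_BLANK):
            yield 1, 0                    # horizontal blanking
    yield 0, 0                            # end of frame

lines = pixels_per_line = count = 0
prev_lval = 0
for fval, lval in frame_signals():
    if lval and not prev_lval:
        lines += 1                        # rising edge of lval starts a line
    if lval:
        count += 1                        # pixclk ticks during the active line
    elif prev_lval:
        pixels_per_line, count = count, 0 # falling edge closes the line
    prev_lval = lval

print(lines, pixels_per_line)   # -> 4 6
```

    The hardware version does exactly this edge counting in Verilog, with SignalTap capturing the counters for online inspection.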

  9. Smartphone Fundus Photography.

    PubMed

    Nazari Khanamiri, Hossein; Nakatsuka, Austin; El-Annan, Jaafar

    2017-07-06

    Smartphone fundus photography is a simple technique for obtaining ocular fundus pictures using a smartphone camera and a conventional handheld indirect ophthalmoscopy lens. This technique is indispensable when picture documentation of the optic nerve, retina, and retinal vessels is necessary but a fundus camera is not available. The main advantage of this technique is the widespread availability of smartphones, which allows documentation of macula and optic nerve changes in many settings where it was not previously possible. Following the well-defined steps detailed here, such as proper alignment of the phone camera, the handheld lens, and the patient's pupil, is the key to obtaining a clear retina picture with no interfering light reflections or aberrations. In this paper, the optical principles of indirect ophthalmoscopy and fundus photography are reviewed first. Then, the step-by-step method to record a good quality retinal image using a smartphone is explained.

  10. Geometric and Optic Characterization of a Hemispherical Dome Port for Underwater Photogrammetry

    PubMed Central

    Menna, Fabio; Nocerino, Erica; Fassi, Francesco; Remondino, Fabio

    2016-01-01

    The popularity of automatic photogrammetric techniques has promoted many experiments in underwater scenarios leading to quite impressive visual results, even by non-experts. Despite these achievements, a deep understanding of camera and lens behaviors as well as optical phenomena involved in underwater operations is fundamental to better plan field campaigns and anticipate the achievable results. The paper presents a geometric investigation of a consumer grade underwater camera housing, manufactured by NiMAR and equipped with a 7′′ dome port. After a review of flat and dome ports, the work analyzes, using simulations and real experiments, the main optical phenomena involved when operating a camera underwater. Specific aspects which deal with photogrammetric acquisitions are considered with some tests in laboratory and in a swimming pool. Results and considerations are shown and commented. PMID:26729133
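    One of the optical phenomena the paper analyzes can be illustrated in a few lines: a flat port refracts rays at the water/glass/air interfaces, compressing the in-water field of view by roughly the ratio of refractive indices, whereas a dome port concentric with the entrance pupil avoids that refraction. A minimal sketch of the flat-port case via Snell's law, assuming textbook refractive indices (not values from the paper):

```python
import math

N_AIR, N_WATER = 1.000, 1.339   # assumed refractive indices

def in_water_half_fov(half_fov_air_deg):
    """Half field of view in water for a camera behind a flat port:
    Snell's law maps the in-air ray angle to a smaller in-water
    angle, effectively magnifying the image by ~1.33x."""
    theta_air = math.radians(half_fov_air_deg)
    sin_water = math.sin(theta_air) * N_AIR / N_WATER
    return math.degrees(math.asin(sin_water))
```

    A 40° in-air half field shrinks to roughly 28.7° behind a flat port, which is why dome ports are attractive for wide-angle underwater photogrammetry.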

  11. Designing the optimal semi-warm NIR spectrograph for SALT via detailed thermal analysis

    NASA Astrophysics Data System (ADS)

    Wolf, Marsha J.; Sheinis, Andrew I.; Mulligan, Mark P.; Wong, Jeffrey P.; Rogers, Allen

    2008-07-01

    The near infrared (NIR) upgrade to the Robert Stobie Spectrograph (RSS) on the Southern African Large Telescope (SALT), RSS/NIR, extends the spectral coverage of all modes of the optical spectrograph. The RSS/NIR is a low to medium resolution spectrograph with broadband, spectropolarimetric, and Fabry-Perot imaging capabilities. The optical and NIR arms can be used simultaneously to extend spectral coverage from 3200 Å to approximately 1.6 μm. Both arms utilize high-efficiency volume phase holographic gratings via articulating gratings and cameras. The NIR camera incorporates a HAWAII-2RG detector with an Epps optical design consisting of 6 spherical elements and providing subpixel rms image sizes of 7.5 ± 1.0 μm over all wavelengths and field angles. The NIR spectrograph is semi-warm, sharing a common slit plane and partial collimator with the optical arm. A pre-dewar, cooled to below ambient temperature, houses the final NIR collimator optic, the grating/Fabry-Perot etalon, the polarizing beam splitter, and the first three camera optics. The last three camera elements, blocking filters, and detector are housed in a cryogenically cooled dewar. The semi-warm design concept has long been proposed as an economical way to extend optical instruments into the NIR; however, success has been very limited. A major portion of our design effort entails a detailed thermal analysis using non-sequential ray tracing to interactively guide the mechanical design and determine a truly realizable long wavelength cutoff over which astronomical observations will be sky-limited. In this paper we describe our thermal analysis, design concepts for the staged cooling scheme, and results to be incorporated into the overall mechanical design and baffling.

  12. Optomechanical stability design of space optical mapping camera

    NASA Astrophysics Data System (ADS)

    Li, Fuqiang; Cai, Weijun; Zhang, Fengqin; Li, Na; Fan, Junjie

    2018-01-01

    According to the interior orientation element and imaging quality requirements that mapping applications place on a mapping camera, and based on an off-axis three-mirror anastigmat (TMA) system, a high-stability optomechanical design of a space optical mapping camera is introduced in this paper. The configuration is a coaxial TMA system used in an off-axis situation. First, the overall optical arrangement is described and an overview of the optomechanical packaging is provided. Zerodur glass, carbon fiber composite, and carbon-fiber-reinforced silicon carbide (C/SiC) are widely used in the optomechanical structure, because their low coefficients of thermal expansion (CTE) reduce the thermal sensitivity of the mirrors and focal plane. Flexible and unloading supports are used in the reflector and camera supporting structures. The use of epoxy structural adhesive for bonding the optics to the metal structure is also introduced. The primary mirror is mounted by means of a three-point ball-joint flexure system attached to the back of the mirror. Then, in order to predict flexural displacements due to gravity, static finite element analysis (FEA) is performed on the primary mirror. The peak-to-valley (PV) and root-mean-square (RMS) wavefront errors are measured before and after assembly. A dynamic FEA of the whole optical arrangement is also carried out to investigate the optomechanical performance. Finally, in order to evaluate the stability of the design, thermal vacuum and vibration tests are carried out, with the modulation transfer function (MTF) and the elements of interior orientation as the evaluation indices. Before and after the thermal vacuum and vibration tests, the MTF, focal distance, and position of the principal point of the optical system are measured, and the results are as expected.

  13. A near-infrared tip-tilt sensor for the Keck I laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Wizinowich, Peter; Smith, Roger; Biasi, Roberto; Cetre, Sylvain; Dekany, Richard; Femenia-Castella, Bruno; Fucik, Jason; Hale, David; Neyman, Chris; Pescoller, Dietrich; Ragland, Sam; Stomski, Paul; Andrighettoni, Mario; Bartos, Randy; Bui, Khanh; Cooper, Andrew; Cromer, John; van Dam, Marcos; Hess, Michael; James, Ean; Lyke, Jim; Rodriguez, Hector; Stalcup, Thomas

    2014-07-01

    The sky coverage and performance of laser guide star (LGS) adaptive optics (AO) systems is limited by the natural guide star (NGS) used for low order correction. This limitation can be dramatically reduced by measuring the tip and tilt of the NGS in the near-infrared where the NGS is partially corrected by the LGS AO system and where stars are generally several magnitudes brighter than at visible wavelengths. We present the design of a near-infrared tip-tilt sensor that has recently been integrated with the Keck I telescope's LGS AO system along with some initial on-sky results. The implementation involved modifications to the AO bench, real-time control system, and higher level controls and operations software that will also be discussed. The tip-tilt sensor is a H2RG-based near-infrared camera with 0.05 arc second pixels. Low noise at high sample rates is achieved by only reading a small region of interest, from 2×2 to 16×16 pixels, centered on an NGS anywhere in the 100 arc second diameter field. The sensor operates at either Ks or H-band using light reflected by a choice of dichroic beamsplitters located in front of the OSIRIS integral field spectrograph.
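    For the smallest 2×2 region of interest, tip-tilt estimation reduces to a classic quad-cell error signal. A hypothetical sketch of that computation (an illustration of the principle, not the Keck real-time code):

```python
def quad_cell_tilt(roi):
    """Tip/tilt error signal from a 2x2 region of interest,
    ((a, b), (c, d)) read top-left to bottom-right, where the
    values are background-subtracted pixel intensities."""
    (a, b), (c, d) = roi
    s = a + b + c + d
    tx = ((b + d) - (a + c)) / s   # horizontal imbalance (right - left)
    ty = ((a + b) - (c + d)) / s   # vertical imbalance (top - bottom)
    return tx, ty
```

    Larger regions of interest (up to the 16×16 mentioned above) would instead use a full centroid, trading read noise and speed for capture range.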

  14. The opto-cryo-mechanical design of the short wavelength camera for the CCAT Observatory

    NASA Astrophysics Data System (ADS)

    Parshley, Stephen C.; Adams, Joseph; Nikola, Thomas; Stacey, Gordon J.

    2014-07-01

    The CCAT observatory is a 25-m class Gregorian telescope designed for submillimeter observations that will be deployed at Cerro Chajnantor (~5600 m) in the high Atacama Desert region of Chile. The Short Wavelength Camera (SWCam) for CCAT is an integral part of the observatory, enabling the study of star formation at high and low redshifts. SWCam will be a facility instrument, available at first light and operating in the telluric windows at wavelengths of 350, 450, and 850 μm. In order to trace the large curvature of the CCAT focal plane, and to suit the available instrument space, SWCam is divided into seven sub-cameras, each configured to a particular telluric window. A fully refractive optical design in each sub-camera will produce diffraction-limited images. The material of choice for the optical elements is silicon, due to its excellent transmission in the submillimeter and its high index of refraction, enabling thin lenses of a given power. The cryostat's vacuum windows double as the sub-cameras' field lenses and are ~30 cm in diameter. The other lenses are mounted at 4 K. The sub-cameras will share a single cryostat providing thermal intercepts at 80, 15, 4, 1 and 0.1 K, with cooling provided by pulse tube cryocoolers and a dilution refrigerator. The use of the intermediate temperature stage at 15 K minimizes the load at 4 K and reduces operating costs. We discuss our design requirements, specifications, key elements and expected performance of the optical, thermal and mechanical design for the short wavelength camera for CCAT.

  15. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are among the most advanced technologies for preventing road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field of view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity than CCD/CMOS rangefinders, inherently better time resolution, higher accuracy, and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens with a 40° × 20° field of view. The whole system is rugged and compact, well suited to a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W power consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, each carrying three 808 nm lasers. We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in real traffic scenarios. The achieved long range (up to 45 m), high dynamic range (118 dB), high-speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
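    The iTOF principle recovers depth from the phase shift of the modulated illumination; with the standard four phase-stepped correlation samples, depth = c·φ/(4π·f_mod). A minimal sketch, with the sample names being assumptions rather than the paper's notation:

```python
import math

C = 299_792_458.0   # speed of light, m/s

def itof_depth(a0, a90, a180, a270, f_mod):
    """Depth from four phase-stepped correlation samples of a
    continuous-wave iTOF measurement (textbook formulation;
    sample names are assumed, not from the paper)."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

    At the 25 MHz modulation quoted above, the unambiguous range is c/(2·f_mod) ≈ 6 m per phase wrap, so longer ranges require multi-frequency disambiguation.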

  16. The darkest EMCCD ever

    NASA Astrophysics Data System (ADS)

    Daigle, Olivier; Quirion, Pierre-Olivier; Lessard, Simon

    2010-07-01

    EMCCDs are devices capable of sub-electron read-out noise at high pixel rates, together with a high quantum efficiency (QE). However, they are plagued by an excess noise factor (ENF) which has the same effect on photometric measurements as if the QE were halved. In order to get rid of the ENF, photon counting (PC) operation is mandatory, with the drawback of counting at most one photon per pixel per frame. The high frame-rate capability of EMCCDs comes to the rescue, at the price of increased clock-induced charges (CIC), which dominate the noise budget of the EMCCD. The CIC can be greatly reduced with appropriate clocking, which makes PC operation of the EMCCD very efficient for faint-flux photometry and spectroscopy, adaptive optics, ultrafast imaging, and lucky imaging. This clocking is achievable with a new EMCCD controller: CCCP, the CCD Controller for Counting Photons. This new controller, now commercialized by Nüvü Cameras Inc., was integrated into an EMCCD camera and tested at the Observatoire du Mont-Mégantic. The results are presented in this paper.
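    The ENF penalty can be made concrete: in analog EM mode the shot-noise-limited SNR is divided by the ENF (≈√2 for high gain), which is numerically identical to halving the QE, while photon counting removes the penalty. A small illustrative helper (an assumed textbook formulation, not the paper's analysis):

```python
import math

def effective_snr(n_photons, enf=math.sqrt(2.0), photon_counting=False):
    """Shot-noise-limited SNR of an EMCCD measurement.
    The excess noise factor divides the SNR; photon counting
    avoids it, at the cost of <= 1 photon per pixel per frame."""
    if photon_counting:
        return math.sqrt(n_photons)
    return math.sqrt(n_photons) / enf
```

    With 100 detected photons, analog mode gives the same SNR as 50 photons without ENF, which is the "QE halved" statement above in numbers.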

  17. Automatic detection and recognition of signs from natural scenes.

    PubMed

    Chen, Xilin; Yang, Jie; Zhang, Jing; Waibel, Alex

    2004-01-01

    In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.
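    The local intensity normalization step described above can be sketched as subtracting a local mean and dividing by a local standard deviation over a sliding window. A minimal, unoptimized version, with the window size an assumption (the paper does not specify one here):

```python
import numpy as np

def local_normalize(img, win=15):
    """Lighting-robust preprocessing: subtract a local mean and
    divide by a local standard deviation over a box window
    (window size assumed; epsilon avoids division by zero)."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            out[i, j] = (img[i, j] - w.mean()) / (w.std() + 1e-6)
    return out
```

    A production version would replace the double loop with box filters or integral images; the output would then feed the Gabor transform and LDA stages mentioned above.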

  18. Robotic Vehicle Communications Interoperability

    DTIC Science & Technology

    1988-08-01

    [Flattened table excerpt listing vehicle controls (starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning) and electro-optic sensors (sensor switch; video, radar, IR thermal imaging system, image intensifier, laser ranger; video camera selector: forward, stereo, rear; sensor control).]

  19. Single-Fiber Optical Link For Video And Control

    NASA Technical Reports Server (NTRS)

    Galloway, F. Houston

    1993-01-01

    Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode- or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.

  20. Preliminary optical design of PANIC, a wide-field infrared camera for CAHA

    NASA Astrophysics Data System (ADS)

    Cárdenas, M. C.; Rodríguez Gómez, J.; Lenzen, R.; Sánchez-Blanco, E.

    2008-07-01

    In this paper, we present the preliminary optical design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Calar Alto 2.2 m telescope. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. A mosaic of four Teledyne Hawaii-2RG 2k × 2k arrays is used as the detector, giving a field of view of 31.9 arcmin × 31.9 arcmin. This cryogenic instrument has been optimized for the Y, J, H and K bands. Special care has been taken in the selection of the standard IR materials used for the optics in order to maximize the instrument throughput and to include the z band. The main challenges of this design are: producing a well-defined internal pupil that allows the thermal background to be reduced by a cryogenic pupil stop; correcting the off-axis aberrations due to the large available field; correcting the chromatic aberration arising from the wide spectral coverage; and allowing narrow-band filters (~1%) to be introduced into the system while minimizing the degradation of the filter passband, without a collimated stage in the camera. We show the optomechanical error budget and the compensation strategy that allows the as-built design to meet the optical performance requirements. Finally, we demonstrate the flexibility of the design by showing the performance of PANIC at the CAHA 3.5 m telescope.
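    As a quick consistency check on the numbers above, the quoted plate scale of 0.45 arcsec per 18 μm pixel fixes the effective focal length of the telescope-plus-camera train, since the plate scale in arcsec/mm is 206265/f:

```python
ARCSEC_PER_RAD = 206264.806   # arcseconds in one radian

def focal_length_mm(pixel_um, arcsec_per_pixel):
    """Effective focal length implied by a plate scale:
    f [mm] = pixel size [mm] * 206265 / (arcsec per pixel)."""
    return (pixel_um / 1000.0) * ARCSEC_PER_RAD / arcsec_per_pixel
```

    For 18 μm pixels at 0.45 arcsec/pixel this gives an effective focal length of about 8.25 m.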

  1. Multiple-aperture optical design for micro-level cameras using 3D-printing method

    NASA Astrophysics Data System (ADS)

    Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung

    2018-02-01

    The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics are manufactured using femtosecond two-photon direct laser writing, and the figure error, which can reach submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and strongly limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOVs can then achieve high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of one with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limit of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
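    The resolution argument can be put in numbers: the sensor's Nyquist limit is set by the pixel pitch (1/(2p)), and for a fixed pixel count the angular sampling scales inversely with the FOV, so a narrow-FOV aperture oversamples the scene center relative to a wide-FOV one. A small illustrative sketch (the values in the usage note are examples, not from the paper):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Sensor Nyquist frequency in line pairs per mm: 1/(2p)."""
    return 1000.0 / (2.0 * pixel_pitch_um)

def angular_resolution_gain(fov_wide_deg, fov_narrow_deg, n_pixels):
    """Ratio of angular sampling (pixels per degree) between a
    narrow-FOV and a wide-FOV aperture sharing the same sensor
    format; for equal pixel counts this reduces to fov_wide/fov_narrow."""
    return (n_pixels / fov_narrow_deg) / (n_pixels / fov_wide_deg)
```

    For example, pairing a 20° aperture with a 60° one triples the angular sampling of the central region after stitching, regardless of the shared pixel count.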

  2. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  3. Electronographic cameras for space astronomy.

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically-focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We also are developing electronographic image tubes of the conventional end-window-photo-cathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  4. A compact high-speed pnCCD camera for optical and x-ray applications

    NASA Astrophysics Data System (ADS)

    Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo

    2012-07-01

    We developed a camera with a 264 × 264 pixel pnCCD with 48 μm pixels (thickness 450 μm) for X-ray and optical applications. It has a high quantum efficiency and can be operated at frame rates of up to 400 / 1000 Hz (read noise ≈ 2.5 e− ENC / ≈ 4.0 e− ENC, respectively). High-speed astronomical observations can be performed at low light levels. Results of test measurements are presented. The camera is well suited for ground-based preparation measurements for future X-ray missions. For single X-ray photons, the spatial position can be determined with significant sub-pixel resolution.
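    Sub-pixel positioning of single X-ray photons typically relies on the charge-weighted centroid of the event charge split over neighbouring pixels. A minimal sketch of that idea (an illustration of the principle, not the authors' event-reconstruction code):

```python
def event_centroid(patch):
    """Charge-weighted centroid of a single-photon event split
    over neighbouring pixels; returns sub-pixel (x, y) in pixel
    units relative to the patch origin."""
    total = sum(sum(row) for row in patch)
    y = sum(i * sum(row) for i, row in enumerate(patch)) / total
    x = sum(j * v for row in patch for j, v in enumerate(row)) / total
    return x, y
```

    A symmetric charge cloud centred on the middle pixel returns exactly the pixel centre, while asymmetric charge sharing shifts the estimate by a fraction of a pixel.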

  5. An overview of instrumentation for the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Wagner, R. Mark

    2010-07-01

    An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27′ × 27′) mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus, incorporating multiple slit masks for multi-object spectroscopy over a 6′ field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4′ × 4′) imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks, as well as diffraction-limited (FOV: 0.5′ × 0.5′) imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra-high-resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support. Over the past two years the LBC and the first LUCIFER instrument have been brought into routine scientific operation, and MODS1 commissioning is set to begin in the fall of 2010.

  6. Observation of development of breast cancer cell lines in real time by fluorescence microscopy under simulated microgravity

    NASA Astrophysics Data System (ADS)

    Lavan, David; Valdivia-Silva, Julio E.; Sanabria, Gabriela; Orihuela, Diego; Suarez, Juan; Quispe, Marco; Chuchon, Mariano; Martin, David; Maroto, Marcos; Egea, Javier

    2016-07-01

    This project consists of the implementation of a fluorescence microscope for real-time monitoring of biological samples labeled with several fluorophores under microgravity conditions, with temperature, humidity, and CO2 controlled by an electronic platform. The system (fluorescence microscope and incubator) is integrated into a microgravity simulator machine which was presented at the "30th Annual American Society for Gravitation and Space Research Meeting", October 2014, in Pasadena, CA, USA. The microgravity machine has been biologically validated by gene expression studies in the pupal stage of Drosophila melanogaster. The fluorescence microscope has a platform designed to hold a culture flask and a fluorescence camera (Leica DFC3000 G) connected to an optical system (Leica EL6000 fluorescence light source, optical fiber, fiber adapter, and fluorescence filter) in order to take images in real time. The mechanical system of the fluorescence microscope is designed to allow displacement of the fluorescence camera in a plane parallel to the culture flask, as well as movement of the platform along an axis perpendicular to the culture flask, in order to focus the samples onto the optical system. The mechanical system is driven by four DC gearmotors with encoders (A-max 26 Maxon motor, GP 32S screw and MR encoder) that generate displacements on the order of micrometers. Angular position control of the gearmotor shafts is performed with PWM signals, based on interpretation of the signals provided by the encoders during movement. The system is remotely operated through a graphical interface installed on a personal computer or any mobile device (smartphone, laptop, or tablet) over the internet. Acknowledgments: Grant of INNOVATE PERU (formerly FINCYT).

  7. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera

    NASA Technical Reports Server (NTRS)

    Grosso, R. P.; Mccarthy, D. J.

    1976-01-01

    The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on axis field of view of 3 arc-minutes by 3 arc-minutes.

  8. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2015-10-01

    the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external

  9. Wavefront measurement of plastic lenses for mobile-phone applications

    NASA Astrophysics Data System (ADS)

    Huang, Li-Ting; Cheng, Yuan-Chieh; Wang, Chung-Yen; Wang, Pei-Jen

    2016-08-01

    In camera lenses for mobile-phone applications, all lens elements are designed with aspheric surfaces because of the requirement for minimal total track length. Owing to the diffraction-limited optical design and precision assembly procedures, element inspection and lens performance measurement have become cumbersome in the production of mobile-phone cameras. Recently, wavefront measurements based on Shack-Hartmann sensors have been successfully implemented on injection-molded plastic lenses with aspheric surfaces. However, the application of wavefront measurement to small plastic lenses has yet to be studied both theoretically and experimentally. In this paper, an in-house-built and a commercial wavefront measurement system, configured on two optical structures, are used to measure the wavefront aberrations of two lens elements from a mobile-phone camera. First, the wet-cell method is employed to verify aberrations due to residual birefringence in an injection-molded lens. Then, two lens elements of a mobile-phone camera, one with large positive and one with large negative power, are measured, with aberrations expressed in Zernike polynomials, to illustrate the effectiveness of wavefront measurement for troubleshooting defects in optical performance.
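    Expressing measured aberrations in Zernike polynomials amounts to a least-squares fit of Zernike terms to the reconstructed wavefront samples. A minimal sketch for the lowest-order terms only, with unit-disk coordinates assumed (an illustration of the idea, not the authors' software):

```python
import numpy as np

def fit_zernike_low_order(x, y, w):
    """Least-squares fit of piston, tip, tilt and defocus
    (a small subset of the Zernike expansion) to wavefront
    samples w at unit-disk coordinates (x, y)."""
    basis = np.column_stack([
        np.ones_like(x),           # piston
        x,                         # tilt x
        y,                         # tilt y
        2 * (x**2 + y**2) - 1,     # defocus
    ])
    coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
    return coeffs
```

    A full analysis would extend the basis to astigmatism, coma, and spherical aberration, which are the terms most relevant for diagnosing molded-lens defects.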

  10. Design of microcontroller based system for automation of streak camera.

    PubMed

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomenon. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  11. Design of microcontroller based system for automation of streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.

    2010-08-15

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomenon. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  12. Design framework for a spectral mask for a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Berkner, Kathrin; Shroff, Sapna A.

    2012-01-01

    Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, capturing directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Only a few address capturing spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, sampling of the spectral dimension of the plenoptic function is performed. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.

  13. Completely optical orientation determination for an unstabilized aerial three-line camera

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2010-10-01

    Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision requires considerable effort unless extensive camera stabilization is used, and stabilization in turn implies high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach builds on previous work on determining the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be reliably determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points were used.

  14. [A Method for Selecting Self-Adaptive Chromaticity of the Projected Markers].

    PubMed

    Zhao, Shou-bo; Zhang, Fu-min; Qu, Xing-hua; Zheng, Shi-wei; Chen, Zhe

    2015-04-01

    The authors designed a self-adaptive projection system composed of a color camera, a projector, and a PC. In detail, a digital micro-mirror device (DMD) serving as the projector's spatial light modulator was introduced into the optical path to modulate the illuminant spectrum based on red, green, and blue light-emitting diodes (LEDs). However, the color visibility of the active markers is also affected by the screen, which has an unknown reflective spectrum. Here the active markers are a projected spot array, and the chromaticity feature of the markers is sometimes submerged when the screen's spectrum is similar. In order to enhance the color visibility of the active markers relative to the screen, a method for selecting self-adaptive chromaticity of the projected markers in 3D scanning metrology is described. A color camera with only 3 channels limits the accuracy of device characterization, so to achieve interconversion between device-independent and device-dependent color spaces, a high-dimensional linear model of the reflective spectrum was built. Prior training samples provide additional constraints to yield a high-dimensional linear model with more than three degrees of freedom. Meanwhile, the spectral power distribution of the ambient light was estimated. Subsequently, the markers' chromaticity in CIE color space was selected by maximizing the Euclidean distance, and the RGB setting values were then estimated via the inverse transform. Finally, a typical experiment was implemented to show the performance of the proposed approach, using a 24-patch Munsell color checker as the projection screen. The color difference in chromaticity coordinates between the active marker and the color patch was used to evaluate the color visibility of the active markers relative to the screen. A comparison between the self-adaptive projection system and a traditional diode-laser projector is presented and discussed to highlight the advantage of the proposed method.

  15. High-Resolution Imaging of Parafoveal Cones in Different Stages of Diabetic Retinopathy Using Adaptive Optics Fundus Camera.

    PubMed

    Soliman, Mohamed Kamel; Sadiq, Mohammad Ali; Agarwal, Aniruddha; Sarwar, Salman; Hassan, Muhammad; Hanout, Mostafa; Graf, Frank; High, Robin; Do, Diana V; Nguyen, Quan Dong; Sepah, Yasir J

    2016-01-01

    To assess cone density as a marker of early signs of retinopathy in patients with type II diabetes mellitus. An adaptive optics (AO) retinal camera (rtx1™; Imagine Eyes, Orsay, France) was used to acquire images of parafoveal cones from patients with type II diabetes mellitus with or without retinopathy and from healthy controls with no known systemic or ocular disease. The cone mosaic was captured at 0° and 2° eccentricities along the horizontal and vertical meridians. The density of the parafoveal cones was calculated within 100×100-μm squares located 500 μm from the foveal center along the orthogonal meridians. Manual corrections of the automated counting were then performed by 2 masked graders. Cone density measurements were evaluated with an ANOVA that consisted of one between-subjects factor (stage of retinopathy) and within-subject factors; the ANOVA model included a complex covariance structure to account for correlations between the levels of the within-subject factors. Ten healthy participants (20 eyes) and 25 patients (29 eyes) with type II diabetes mellitus were enrolled in the study. The mean (± standard deviation [SD]) age of the healthy participants (Control group), patients with diabetes without retinopathy (No DR group), and patients with diabetic retinopathy (DR group) was 55 ± 8, 53 ± 8, and 52 ± 9 years, respectively. The cone density was significantly lower in the moderate nonproliferative diabetic retinopathy (NPDR) and severe NPDR/proliferative DR groups compared to the Control, No DR, and mild NPDR groups (P < 0.05). No correlation was found between cone density and the level of hemoglobin A1c (HbA1c) or the duration of diabetes. The extent of photoreceptor loss on AO imaging may correlate positively with severity of DR in patients with type II diabetes mellitus. Photoreceptor loss may be more pronounced among patients with advanced stages of DR due to higher risk of macular edema and its sequelae.
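The density calculation described above, counts within a 100×100-μm sampling window converted to cones per square millimetre, can be sketched as follows. The window size is the one quoted in the abstract; the example count is purely illustrative:

```python
def cone_density_per_mm2(count: int, window_um: float = 100.0) -> float:
    """Convert a cone count in a square sampling window of side
    window_um (micrometres) to a density in cones per mm^2."""
    window_mm2 = (window_um ** 2) / 1e6  # 100 um square -> 0.01 mm^2
    return count / window_mm2

# Illustrative: 250 cones counted in one 100x100-um square
density = cone_density_per_mm2(250)  # ~25,000 cones/mm^2
```

A graders' manual correction step would simply adjust `count` before the conversion.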

  16. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20-MHz-switching high-speed image shutter device for 3D image capturing and its application in a system prototype are presented. For 3D image capturing, the system uses the time-of-flight (TOF) principle by means of a 20-MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of the optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14-Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means for 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype, and image test results.
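The TOF principle behind the shutter can be illustrated with the standard phase-shift depth formula for amplitude-modulated light. The 20-MHz modulation frequency comes from the abstract; the function names and phase value are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_phase(phase_rad: float, f_mod: float = 20e6) -> float:
    """Depth from the measured phase shift of the modulated return:
    depth = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod)

def unambiguous_range(f_mod: float = 20e6) -> float:
    """Maximum depth before the phase wraps: c / (2 * f_mod)."""
    return C / (2.0 * f_mod)
```

At 20 MHz the unambiguous range is about 7.5 m, which is why mm-scale accuracy at room scale is feasible with this modulation frequency.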

  17. Interplanetary approach optical navigation with applications

    NASA Technical Reports Server (NTRS)

    Jerath, N.

    1978-01-01

    The use of optical data from onboard television cameras for the navigation of interplanetary spacecraft during the planet-approach phase is investigated. Three optical data types were studied: the planet limb with auxiliary celestial references, the satellite-star method, and the planet-star two-camera method. Analysis and modelling issues related to the nature and information content of the optical methods were examined. Dynamic and measurement-system modelling, data sequence design, measurement extraction, model estimation, and orbit determination, as they relate to optical navigation, are discussed, and the various error sources are analyzed. The methodology developed was applied to the Mariner 9 and Viking Mars missions. Navigation accuracies were evaluated at the control and knowledge points, with particular emphasis on the combined use of radio and optical data. A parametric probability analysis technique was developed to evaluate navigation performance as a function of system reliabilities.

  18. Imaging of optically diffusive media by use of opto-elastography

    NASA Astrophysics Data System (ADS)

    Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude

    2007-02-01

    We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue-mimicking phantom was illuminated with coherent laser light, and a high-speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the result of localised motion induced in the medium by the radiation force and the subsequently propagating shear waves. As opposed to classical acousto-optic techniques, which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low-frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both the optical and the shear mechanical properties.

  19. Qualification Tests of Micro-camera Modules for Space Applications

    NASA Astrophysics Data System (ADS)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  20. Structural Dynamics Analysis and Research for FEA Modeling Method of a Light High Resolution CCD Camera

    NASA Astrophysics Data System (ADS)

    Sun, Jiwen; Wei, Ling; Fu, Danying

    2002-01-01

    The camera features high resolution and a wide swath. To ensure that its high optical precision survives the rigorous dynamic loads of launch, the camera must have high structural rigidity; therefore, a careful study of the dynamic characteristics of the camera structure was performed. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on it to refine the structural design. The structural dynamic analysis of the camera was accomplished using the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analysis of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, to confirm the most reasonable general model; 2) modal analysis of the camera for several cases, from which the natural frequencies and mode shapes are obtained and the rationality of the structural design of the camera is confirmed; 3) static analysis of the camera under self-gravity and overloads, yielding the corresponding deformation and stress distributions; 4) response calculation of sine vibration of the camera, giving the corresponding response curves and the maximum acceleration responses with their corresponding frequencies. Finally, based on sensitivity considerations, the dynamic design and engineering optimization of the critical structure of the camera are discussed as fundamental technology for the design of forthcoming space optical instruments.

  1. NASA Tech Briefs, March 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    The topics include: 1) Spectral Profiler Probe for In Situ Snow Grain Size and Composition Stratigraphy; 2) Portable Fourier Transform Spectroscopy for Analysis of Surface Contamination and Quality Control; 3) In Situ Geochemical Analysis and Age Dating of Rocks Using Laser Ablation-Miniature Mass Spectrometer; 4) Physics Mining of Multi-Source Data Sets; 5) Photogrammetry Tool for Forensic Analysis; 6) Connect Global Positioning System RF Module; 7) Simple Cell Balance Circuit; 8) Miniature EVA Software Defined Radio; 9) Remotely Accessible Testbed for Software Defined Radio Development; 10) System-of-Systems Technology-Portfolio-Analysis Tool; 11) VESGEN Software for Mapping and Quantification of Vascular Regulators; 12) Constructing a Database From Multiple 2D Images for Camera Pose Estimation and Robot Localization; 13) Adaption of G-TAG Software for Validating Touch and Go Asteroid Sample Return Design Methodology; 14) 3D Visualization for Phoenix Mars Lander Science Operations; 15) RxGen General Optical Model Prescription Generator; 16) Carbon Nanotube Bonding Strength Enhancement Using Metal Wicking Process; 17) Multi-Layer Far-Infrared Component Technology; 18) Germanium Lift-Off Masks for Thin Metal Film Patterning; 19) Sealing Materials for Use in Vacuum at High Temperatures; 20) Radiation Shielding System Using a Composite of Carbon Nanotubes Loaded With Electropolymers; 21) Nano Sponges for Drug Delivery and Medicinal Applications; 22) Molecular Technique to Understand Deep Microbial Diversity; 23) Methods and Compositions Based on Culturing Microorganisms in Low Sedimental Fluid Shear Conditions; 24) Secure Peer-to-Peer Networks for Scientific Information Sharing; 25) Multiplexer/Demultiplexer Loading Tool (MDMLT); 26) High-Rate Data-Capture for an Airborne Lidar System; 27) Wavefront Sensing Analysis of Grazing Incidence Optical Systems; 28) Foam-on-Tile Damage Model; 29) Instrument Package Manipulation Through the Generation and Use of an Attenuated-Fluent Gas Fold; 30) Multicolor Detectors for Ultrasensitive Long-Wave Imaging Cameras; 31) Lunar Reconnaissance Orbiter (LRO) Command and Data Handling Flight Electronics Subsystem; and 32) Electro-Optic Segment-Segment Sensors for Radio and Optical Telescopes.

  2. The TESS camera: modeling and measurements with deep depletion devices

    NASA Astrophysics Data System (ADS)

    Woods, Deborah F.; Vanderspek, Roland; MacDonald, Robert; Morgan, Edward; Villasenor, Joel; Thayer, Carolyn; Burke, Barry; Chesbrough, Christian; Chrisp, Michael; Clark, Kristin; Furesz, Gabor; Gonzales, Alexandria; Nguyen, Tam; Prigozhin, Gregory; Primeau, Brian; Ricker, George; Sauerwein, Timothy; Suntharalingam, Vyshnavi

    2016-07-01

    The Transiting Exoplanet Survey Satellite, a NASA Explorer-class mission in development, will discover planets around nearby stars, most notably Earth-like planets with potential for follow-up characterization. The all-sky survey requires a suite of four wide field-of-view cameras with sensitivity across a broad spectrum. Deep-depletion CCDs with a silicon layer of 100 μm thickness serve as the camera detectors, providing enhanced performance in the red wavelengths for sensitivity to cooler stars. The performance of the camera is critical for the mission objectives, with both the optical system and the CCD detectors contributing to the realized image quality. Expectations for image quality are studied using a combination of optical ray tracing in Zemax and simulations in Matlab to account for the interaction of the incoming photons with the 100 μm silicon layer. The simulations include a probabilistic model to determine the depth of travel in the silicon before the photons are converted to photo-electrons, and a Monte Carlo approach to charge diffusion. The charge diffusion model varies with the remaining depth the photo-electron must traverse and the strength of the intermediate electric field. The simulations are compared with laboratory measurements acquired by an engineering-unit camera with the TESS optical design and deep-depletion CCDs. In this paper we describe the performance simulations and the corresponding measurements taken with the engineering-unit camera, and discuss where the models agree well with the observed trends and where there are differences from observations.
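The probabilistic conversion-depth and charge-diffusion model described above can be sketched as a toy Monte Carlo: sample each photon's conversion depth from an exponential law, then let the lateral charge-cloud width grow with the remaining drift depth. The absorption length and diffusion coefficient below are illustrative placeholders, not TESS calibration values:

```python
import math
import random

def simulate_charge_cloud_sigmas(n_photons: int,
                                 thickness_um: float = 100.0,
                                 absorption_len_um: float = 30.0,
                                 k_um: float = 0.5,
                                 seed: int = 0) -> list:
    """Toy Monte Carlo of photon conversion and charge diffusion.

    Each photon's conversion depth is drawn from an exponential
    distribution; photons that would convert deeper than the silicon
    thickness are treated as lost. The lateral Gaussian sigma of the
    charge cloud is modeled as k_um * sqrt(remaining drift depth).
    absorption_len_um and k_um are assumed, illustrative values.
    """
    rng = random.Random(seed)
    sigmas = []
    for _ in range(n_photons):
        depth = rng.expovariate(1.0 / absorption_len_um)
        if depth > thickness_um:
            continue  # photon traverses the silicon without converting
        remaining = thickness_um - depth
        sigmas.append(k_um * math.sqrt(remaining))
    return sigmas
```

Real charge-diffusion models also depend on the local electric field strength, as the abstract notes; this sketch keeps only the depth dependence.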

  3. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In the on-board imaging process of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of the camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibration, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows that the M2052 manganese-copper alloy is good enough to suppress image motion below 125 Hz, the vibration frequency range of satellite platforms. The camera optical system has a higher MTF with the M2052 vibration suppression than without it.
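The multiplicative MTF budget the abstract relies on (vibration degrading the total) can be sketched with a standard linear-smear sinc model. The sinc form and all numeric values here are textbook illustrations, not the paper's VMTF derivation:

```python
import math

def smear_mtf(spatial_freq_cyc_per_px: float, smear_px: float) -> float:
    """MTF of a linear image smear of extent smear_px pixels
    (classical |sinc| model)."""
    x = math.pi * spatial_freq_cyc_per_px * smear_px
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

def total_mtf(optics: float, detector: float, vibration: float) -> float:
    """Component MTFs multiply, so any vibration MTF below 1.0
    lowers the system total."""
    return optics * detector * vibration
```

For example, a one-pixel smear at the Nyquist-quarter frequency already drops the vibration MTF below 1.0, and the product model shows how that loss propagates to the system.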

  4. Sublimation of icy aggregates in the coma of comet 67P/Churyumov-Gerasimenko detected with the OSIRIS cameras on board Rosetta

    NASA Astrophysics Data System (ADS)

    Gicquel, A.; Vincent, J.-B.; Agarwal, J.; A'Hearn, M. F.; Bertini, I.; Bodewits, D.; Sierks, H.; Lin, Z.-Y.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; Deller, J.; De Cecco, M.; Frattin, E.; El-Maarry, M. R.; Fornasier, S.; Fulle, M.; Groussin, O.; Gutiérrez, P. J.; Gutiérrez-Marquez, P.; Güttler, C.; Höfner, S.; Hofmann, M.; Hu, X.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kovacs, G.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Moreno, J. J. Lopez; Lowry, S.; Marzari, F.; Masoumzadeh, N.; Massironi, M.; Moreno, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Pommerol, A.; Preusker, F.; Scholten, F.; Shi, X.; Thomas, N.; Toth, I.; Tubiana, C.

    2016-11-01

    Beginning in 2014 March, the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras began capturing images of the nucleus and coma (gas and dust) of comet 67P/Churyumov-Gerasimenko using both the wide angle camera (WAC) and the narrow angle camera (NAC). The many observations taken since July of 2014 have been used to study the morphology, location, and temporal variation of the comet's dust jets. We analysed the dust monitoring observations shortly after the southern vernal equinox on 2015 May 30 and 31 with the WAC at the heliocentric distance Rh = 1.53 AU, where it is possible to observe that the jet rotates with the nucleus. We found that the decline of brightness as a function of the distance of the jet is much steeper than the background coma, which is a first indication of sublimation. We adapted a model of sublimation of icy aggregates and studied the effect as a function of the physical properties of the aggregates (composition and size). The major finding of this paper was that through the sublimation of the aggregates of dirty grains (radius a between 5 and 50 μm) we were able to completely reproduce the radial brightness profile of a jet beyond 4 km from the nucleus. To reproduce the data, we needed to inject a number of aggregates between 8.5 × 10^13 and 8.5 × 10^10 for a = 5 and 50 μm, respectively, or an initial mass of H2O ice around 22 kg.
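The quoted aggregate numbers follow from dividing the total ice mass by the mass of one spherical aggregate, which scales as the cube of the radius. A sketch, assuming a bulk density of 500 kg/m³ for the porous "dirty" aggregates (our assumption, not a value stated in the abstract):

```python
import math

def aggregate_count(total_ice_mass_kg: float,
                    radius_m: float,
                    density_kg_m3: float = 500.0) -> float:
    """Number of spherical aggregates holding a given total ice mass.
    density_kg_m3 is an assumed bulk density for porous aggregates,
    chosen only to illustrate the N ~ a^-3 scaling."""
    mass_of_one = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return total_ice_mass_kg / mass_of_one
```

With 22 kg of ice, 5-μm aggregates give a count of order 10^13 and 50-μm aggregates of order 10^10, matching the thousand-fold spread between the two quoted numbers.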

  5. Smartphone Based Platform for Colorimetric Sensing of Dyes

    NASA Astrophysics Data System (ADS)

    Dutta, Sibasish; Nath, Pabitra

    We demonstrate the working of a smartphone-based optical sensor for measuring the absorption band of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we have converted it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is transmitted through a specific dye solution, and the transmitted light signal is captured by the camera of the smartphone. The present sensor is inexpensive, portable, and lightweight, making it an ideal handheld sensor suitable for a range of on-field sensing tasks.
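Such a spectrometer reduces to two conversions: a linear pixel-to-wavelength calibration at the quoted 0.345 nm/pixel, and a Beer-Lambert absorbance from the transmitted versus reference intensity. A minimal sketch; the reference wavelength at pixel 0 is an assumed placeholder, not a value from the abstract:

```python
import math

def pixel_to_wavelength(pixel: int,
                        lambda0_nm: float = 400.0,
                        nm_per_pixel: float = 0.345) -> float:
    """Linear wavelength calibration of the camera's pixel axis.
    lambda0_nm (wavelength at pixel 0) is an assumed placeholder."""
    return lambda0_nm + nm_per_pixel * pixel

def absorbance(intensity_sample: float, intensity_reference: float) -> float:
    """Beer-Lambert absorbance A = -log10(I / I0) from the transmitted
    and reference intensities at one wavelength."""
    return -math.log10(intensity_sample / intensity_reference)
```

Scanning `absorbance` across the pixel axis and locating its maximum yields the dye's absorption band.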

  6. Imaging using a supercontinuum laser to assess tumors in patients with breast carcinoma

    NASA Astrophysics Data System (ADS)

    Sordillo, Laura A.; Sordillo, Peter P.; Alfano, R. R.

    2016-03-01

    The supercontinuum laser light source has many advantages over other light sources, including its broad spectral range. Transmission images of paired normal and malignant breast tissue samples from two patients were obtained using a Leukos supercontinuum (SC) laser light source with wavelengths in the second and third NIR optical windows and an IR-CCD InGaAs camera detector (Goodrich Sensors Inc. high-response camera SU320KTSW-1.7RT, with spectral response between 900 nm and 1,700 nm). Optical attenuation measurements at the four NIR optical windows were obtained from the samples.

  7. Conference Proceedings of the America Institute for Aeronautics and Astronautics Missile Sciences Held in Monterey, California on 29 November - 1 December 1988. Volume 6. Navy Ballistic Missile Technology

    DTIC Science & Technology

    1988-11-01

    ...point the sensor line of sight to a target. Both optical systems look out through windows... The optical layout for the UV camera is as shown in Figure 1. A ... camera was used on the rear platform for the visible observation ... oxidizers.) The stability of the booster plume ... unless it is tracking the UV plume outside the atmosphere. Thus, other ... plume handoff to the missile in the atmosphere with high resolution optics.

  8. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  9. Exoplanetary Science: Instrumentation, Observations, and Expectations

    NASA Technical Reports Server (NTRS)

    McElwain, Michael

    2011-01-01

    More than 700 exoplanets have been discovered and studied using indirect techniques, leading our field into the exciting new era of comparative exoplanetology. However, the direct detection of exoplanetary systems still remains at the sensitivity limits of both ground- and space-based observatories. The development of new technologies for adaptive optics systems and high contrast instruments continues to increase the ability to directly study exoplanets. The scientific impact of these developments has promising prospects for both short and long timescales. In my talk, I will discuss recent highlights from the SEEDS survey and the current instrumentation in use at the Subaru telescope. SEEDS is a high contrast imaging strategic observing program with 120 nights of time allocated at the NAOJ's flagship optical and infrared telescope. I will also describe new instrumentation I designed to improve the SEEDS capabilities and efficiency. Finally, I will briefly discuss the conceptual design of a transiting planet camera to fly as a potential second generation instrument on-board NASA's SOFIA observatory.

  10. The Brazilian wide field imaging camera (WFI) for the China/Brazil earth resources satellite: CBERS 3 and 4

    NASA Astrophysics Data System (ADS)

    Scaduto, L. C. N.; Carvalho, E. G.; Modugno, R. G.; Cartolano, R.; Evangelista, S. H.; Segoria, D.; Santos, A. G.; Stefani, M. A.; Castro Neto, J. C.

    2017-11-01

    The purpose of this paper is to present the optical system developed for the Wide Field Imaging Camera (WFI) that will be integrated into the CBERS 3 and 4 satellites (China-Brazil Earth Resources Satellite). This camera will be used for remote sensing of the Earth and is designed to operate at an altitude of 778 km. The optical system is designed for four spectral bands covering the range of wavelengths from blue to near infrared; its field of view is +/-28.63°, which covers 866 km on the ground, with a ground resolution of 64 m at nadir. WFI has been developed by a consortium formed by Opto Electrônica S. A. and Equatorial Sistemas. In particular, we present the optical analysis based on the modulation transfer function (MTF) obtained during the Engineering Model (EM) phase and the optical tests performed to evaluate the requirements. Measurements of the optical system MTF were performed using an interferometer at a wavelength of 632.8 nm, and global MTF tests (including the CCD and the signal-processing electronics) were performed using a collimator with a slit target. The obtained results show that the performance of the optical system meets the project requirements.

  11. Cryogenic optical systems for the rapid infrared imager/spectrometer (RIMAS)

    NASA Astrophysics Data System (ADS)

    Capone, John I.; Content, David A.; Kutyrev, Alexander S.; Robinson, Frederick D.; Lotkin, Gennadiy N.; Toy, Vicki L.; Veilleux, Sylvain; Moseley, Samuel H.; Gehrels, Neil A.; Vogel, Stuart N.

    2014-07-01

    The Rapid Infrared Imager/Spectrometer (RIMAS) is designed to perform follow-up observations of transient astronomical sources at near-infrared (NIR) wavelengths (0.9-2.4 microns). In particular, RIMAS will be used to perform photometric and spectroscopic observations of gamma-ray burst (GRB) afterglows to complement the Swift satellite's science goals. Upon completion, RIMAS will be installed on Lowell Observatory's 4.3-meter Discovery Channel Telescope (DCT) located in Happy Jack, Arizona. The instrument's optical design includes a collimator lens assembly, a dichroic to divide the wavelength coverage into two optical arms (0.9-1.4 microns and 1.4-2.4 microns, respectively), and a camera lens assembly for each optical arm. Because the wavelength coverage extends out to 2.4 microns, all optical elements are cooled to ~70 K. Filters and transmission gratings are located on wheels ahead of each camera, allowing the instrument to be quickly configured for photometry or spectroscopy. An athermal optomechanical design is being implemented to prevent the lenses from losing their room-temperature alignment as the system is cooled; the thermal expansion of the materials used in this design has been measured in the lab. Additionally, RIMAS has a guide camera consisting of four lenses to aid observers in passing light from target sources through the spectroscopic slits. Efforts to align these optics are ongoing.

  12. Center for Adaptive Optics | Home

    Science.gov Websites

    The Center for Adaptive Optics is a University of California Science and Technology Center; its home page provides directions to the Center for Adaptive Optics building.

  13. Center for Adaptive Optics | Software

    Science.gov Websites

    The Center for Adaptive Optics, a University of California Science and Technology Center, acts as a clearing house for distributing adaptive optics software to institutes, giving specialists in adaptive optics a place to distribute their software.

  14. Optical Arc-Length Sensor For TIG Welding

    NASA Technical Reports Server (NTRS)

    Smith, Matthew A.

    1990-01-01

    A proposed subsystem of a tungsten/inert-gas (TIG) welding system measures the length of the welding arc optically. The arc is viewed by a video camera in one of three alternative optical configurations, and its length is measured directly instead of being inferred from voltage.

  15. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

    Photographs show the fiber-optic light source, the microscope and charge-coupled device (CCD) camera head connected to the camera body, the CCD camera body feeding data to an image acquisition board in a PC, and a Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. The system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  16. Physical and engineering aspect of carbon beam therapy

    NASA Astrophysics Data System (ADS)

    Kanai, Tatsuaki; Kanematsu, Nobuyuki; Minohara, Shinichi; Yusa, Ken; Urakabe, Eriko; Mizuno, Hideyuki; Iseki, Yasushi; Kanazawa, Mitsutaka; Kitagawa, Atsushi; Tomitani, Takehiro

    2003-08-01

    The conformal irradiation system of HIMAC has been upgraded for a clinical trial using a layer-stacking technique. The system has been developed to localize the irradiation dose to the target volume more effectively than the present system. With dynamic control of the beam-modifying devices (a pair of wobbler magnets, a multileaf collimator, and a range shifter) during irradiation, more conformal radiotherapy can be achieved. The system, which has to be adequately safe for patient irradiation, was constructed and tested from the viewpoints of safety and the quality of the dose localization achieved. A secondary beam line has been constructed for the use of radioactive beams in heavy-ion radiotherapy. A spot-scanning method has been adopted for the beam delivery system of the radioactive beam. Dose distributions of the spot beam were measured and analyzed taking into account the aberration of the beam optics. Distributions of the stopped positron-emitter beam can be observed by PET. A pencil beam of the positron emitter, about 1 mm in size, can also be used to measure the range of the test beam in patients using a positron camera. The positron camera, consisting of a pair of Anger-type scintillation detectors, has been developed for this verification before treatment. The wash-out effect of the positron emitter was examined using the installed positron camera. In this report, the present status of the HIMAC irradiation system is described in detail.

  17. Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics.

    PubMed

    Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei

    2017-02-03

    Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging, such as the Compton Camera (CC) and the coded mask. Pseudo imaging does not preserve physical information (intensity, or brightness in optics) along a ray, and is thus capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, a camera that realizes geometrical optics is essential, which for nuclear MeV gammas is only possible via complete reconstruction of the Compton process. Recently we have revealed that the "Electron Tracking Compton Camera" (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF, which is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty that CCs have in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically, by ~3 orders of magnitude, without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on genuine geometrical optics.
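    The complete reconstruction the abstract refers to builds on textbook Compton kinematics. As a minimal illustrative sketch (standard physics, not the ETCC analysis pipeline; the energy values are example figures of ours), the scattering angle can be computed from the measured scattered-gamma and recoil-electron energies:

    ```python
    import math

    # Textbook Compton kinematics: given the scattered-gamma energy E1 and the
    # recoil-electron kinetic energy Ke, the incident energy is E0 = E1 + Ke
    # and the scattering angle satisfies
    #   cos(theta) = 1 - me*c^2 * (1/E1 - 1/E0).
    ME_C2 = 511.0  # electron rest energy, keV

    def compton_angle_deg(e_scattered_keV, ke_electron_keV):
        e0 = e_scattered_keV + ke_electron_keV   # incident gamma energy
        cos_t = 1.0 - ME_C2 * (1.0 / e_scattered_keV - 1.0 / e0)
        return math.degrees(math.acos(cos_t))

    # Example: a 662 keV gamma (Cs-137 line) depositing 200 keV on the electron
    print(round(compton_angle_deg(462.0, 200.0), 1))
    ```

    Measuring the electron track direction in addition to these two energies is what lets the ETCC fix the incident direction to a single ray rather than a cone.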

  18. Diffraction-based optical sensor detection system for capture-restricted environments

    NASA Astrophysics Data System (ADS)

    Khandekar, Rahul M.; Nikulin, Vladimir V.

    2008-04-01

    The use of digital cameras and camcorders in prohibited areas presents a growing problem. Piracy in movie theaters results in huge revenue losses to the motion picture industry every year, but still-image and video capture may present an even bigger threat if performed in high-security locations. While several attempts are being made to address this issue, an effective solution is yet to be found. We propose to approach this problem using a very commonly observed optical phenomenon. Cameras and camcorders use CCD and CMOS sensors, which include a number of photosensitive elements/pixels arranged in a certain fashion. These are photosites in CCD sensors and semiconductor elements in CMOS sensors. They are known to reflect a small fraction of incident light, but they can also act as a diffraction grating, resulting in an optical response that can be utilized to identify the presence of such a sensor. A laser-based detection system is proposed that accounts for the elements in the optical train of the camera, as well as the eye safety of the people who could be exposed to the optical beam. This paper presents preliminary experimental data, as well as proof-of-concept simulation results.

  19. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    ERIC Educational Resources Information Center

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
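    The wavelength measurement in such a DVD spectrophotometer follows from the grating equation. A minimal sketch, assuming the nominal 0.74 um DVD track pitch (the function name and example angle are ours, for illustration):

    ```python
    import math

    # Illustrative sketch (not from the paper): a DVD's track pitch is nominally
    # 0.74 um, i.e. ~1351 grooves/mm, so it behaves as a diffraction grating
    # obeying d * sin(theta) = m * lambda.
    D_UM = 0.74  # groove spacing d in micrometres (assumed nominal value)

    def wavelength_nm(theta_deg, order=1, d_um=D_UM):
        """Solve the grating equation for lambda, returned in nanometres."""
        return d_um * 1e3 * math.sin(math.radians(theta_deg)) / order

    # A green line diffracted to ~46 degrees in first order:
    print(round(wavelength_nm(46.0)))  # 532
    ```

    In the actual instrument the angle would be inferred from the pixel position of a spectral line on the camera sensor, fixed by the calibration with a reference light.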

  20. Surveillance Cameras and Their Use as a Dissecting Microscope in the Teaching of Biological Sciences

    ERIC Educational Resources Information Center

    Vale, Marcus R.

    2016-01-01

    Surveillance cameras are prevalent in various public and private areas, and they can also be coupled to optical microscopes and telescopes with excellent results. They are relatively simple cameras without sophisticated technological features and are much less expensive and more accessible to many people. These features enable them to be used in…

  1. Creating and Using a Camera Obscura

    ERIC Educational Resources Information Center

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  2. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively.
    The CACTI camera's ability to embed video volumes into images leads to exploration of other information within that video; namely, focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully show the technique's extension of temporal coding to improve sensing performance in these other dimensions. Geometrical-optics-related tradeoffs, such as the classic challenges of wide-field-of-view and high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke. Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding in the form of physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.
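    The coded-exposure measurement at the heart of CACTI can be sketched as a toy forward model (our notation and sizes, not the dissertation's code): each of T video frames is modulated by a shifted copy of one physical binary mask, and the products are summed into a single coded snapshot:

    ```python
    import numpy as np

    # Toy forward model of coded-aperture temporal compression: the per-frame
    # code C_t is a one-pixel vertical shift of a single physical mask (mimicking
    # the mask's mechanical translation), and the sensor records
    #   y = sum_t C_t * x_t  -- one 2D image encoding T frames.
    rng = np.random.default_rng(0)
    T, H, W = 8, 32, 32
    video = rng.random((T, H, W))                    # toy video volume x_t
    mask = rng.integers(0, 2, (H, W)).astype(float)  # one physical binary mask
    codes = np.stack([np.roll(mask, t, axis=0) for t in range(T)])
    snapshot = (codes * video).sum(axis=0)           # single coded measurement
    print(snapshot.shape)  # (32, 32)
    ```

    Recovering the T frames from the single snapshot is then an underdetermined inverse problem solved with sparsity-based reconstruction, which this sketch does not attempt.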

  3. The ICE spectrograph for PEPSI at the LBT: preliminary optical design

    NASA Astrophysics Data System (ADS)

    Pallavicini, Roberto; Zerbi, Filippo M.; Spano, Paolo; Conconi, Paolo; Mazzoleni, Ruben; Molinari, Emilio; Strassmeier, Klaus G.

    2003-03-01

    We present a preliminary design study for a high-resolution echelle spectrograph (ICE) to be used with the spectropolarimeter PEPSI under development at the LBT. In order to meet the scientific requirements and take full advantage of the peculiarities of the LBT (i.e. its binocular nature and its adaptive optics capabilities), we have designed a fiber-fed, bench-mounted instrument for both high resolution (R ≈ 100,000; non-AO polarimetric and integral light modes) and ultra-high resolution (R ≈ 300,000; AO integral light mode). In both cases, 4 spectra per order (two for each primary mirror) shall be accommodated in a 2-dimensional cross-dispersed echelle format. In order to obtain a resolution-slit product of ≈ 100,000 as required by the science case, we have considered two alternative designs, one with two R4 echelles in series and the other with a single R4 echelle and fiber slicing. A white-pupil design, VPH cross-dispersers, and two cameras of different focal length for the AO and non-AO modes are adopted in both cases. It is concluded that the single-echelle fiber-slicer solution is to be preferred in terms of performance, complexity, and cost. It can be implemented at the LBT in two phases, with the long-camera AO mode added in a second phase depending on the availability of funds and the time-scale for implementation of the AO system.

  4. 3-D endoscopic imaging using plenoptic camera.

    PubMed

    Le, Hanh N D; Decker, Ryan; Opferman, Justin; Kim, Peter; Krieger, Axel; Kang, Jin U

    2016-06-01

    Three-dimensional endoscopic imaging using a plenoptic technique combined with an F-matching algorithm has been pursued in this study. Custom relay optics were designed to integrate a commercial surgical straight endoscope with a plenoptic camera.

  5. Real-time laser cladding control with variable spot size

    NASA Astrophysics Data System (ADS)

    Arias, J. L.; Montealegre, M. A.; Vidal, F.; Rodríguez, J.; Mann, S.; Abels, P.; Motmans, F.

    2014-03-01

    Laser cladding has been used in different industries to improve surface properties or to reconstruct damaged parts. In order to cover areas considerably larger than the diameter of the laser beam, successive partially overlapping tracks are deposited. With no control over the process variables, this leads to an increase in temperature, which can degrade the mechanical properties of the laser-cladded material. Commonly, the process is monitored and controlled by a PC using cameras, but this control suffers from a lack of speed caused by the image processing step. The aim of this work is to design and develop an FPGA-based laser cladding control system. This system is intended to modify the laser beam power according to the melt pool width, which is measured using a CMOS camera. All the control and monitoring tasks are carried out by an FPGA, taking advantage of its abundance of resources and speed of operation. The robustness of the image processing algorithm is assessed, as well as the control system performance. Laser power is decreased as substrate temperature increases, thus maintaining a constant clad width. This FPGA-based control system is integrated in an adaptive laser cladding system, which also includes an adaptive optical system that controls the laser focus distance on the fly. The whole system constitutes an efficient instrument for repairing parts with complex geometries and for coating selective surfaces. This is a significant step toward the full industrial implementation of an automated laser cladding process.

  6. Optical performance of a PDMS tunable lens with automatically controlled applied stress

    NASA Astrophysics Data System (ADS)

    Cruz-Felix, Angel S.; Santiago-Alvarado, Agustín.; Hernández-Méndez, Arturo; Reyes-Pérez, Emilio R.; Tepichín-Rodriguez, Eduardo

    2016-09-01

    Advances in the field of adaptive optics and in the fabrication of tunable optical components capable of automatically modifying their physical features are of great interest in areas like machine vision, imaging systems, ophthalmology, etc. Components such as tunable lenses are used to reduce the overall size of optical setups, as in small camera systems, and even to imitate biological functions of the human eye. Along these lines, in recent years we have been working on the development and fabrication of PDMS-made tunable lenses and on the design of special mechanical mounting systems to manipulate them. A PDMS-made tunable lens was previously designed by us, following the scheme reported by Navarro et al. in 1985, in order to mimic the accommodation process of the crystalline lens of the human eye. The design included a simulation of the application of radial stress to the lens, and it was shown that the effective focal length was indeed changed. In this work we show the fabrication process of this particular tunable lens and an optimized mechanism that is able to automatically change the curvature of both surfaces of the lens through the application of controlled stress. We also show results of a study and analysis of aberrations performed on the Solid Elastic Lens (SEL).

  7. The Busot Observatory: towards a robotic autonomous telescope

    NASA Astrophysics Data System (ADS)

    García-Lozano, R.; Rodes, J. J.; Torrejón, J. M.; Bernabéu, G.; Berná, J. Á.

    2016-12-01

    We describe the Busot observatory, our project for a fully robotic autonomous telescope. This astronomical observatory, which obtained the Minor Planet Center code MPC-J02 in 2009, includes a 14 inch MEADE LX200GPS telescope, a 2 m dome, an ST8-XME CCD camera from SBIG with an AO-8 adaptive optics system, and a filter wheel equipped with a UBVRI filter set. We are also implementing an SGS ST-8 spectrograph for the telescope. Currently, we are involved in long-term studies of variable sources such as X-ray binary systems and variable stars. In this work we also present the discovery of W UMa systems and their orbital periods derived from the photometric light curves obtained at the Busot Observatory.

  8. Atmospheric tomography using a fringe pattern in the sodium layer.

    PubMed

    Baharav, Y; Ribak, E N; Shamir, J

    1994-02-15

    We wish to measure and separate the contribution of atmospheric turbulent layers for multiconjugate adaptive optics. To this end, we propose to create a periodic fringe pattern in the sodium layer and image it with a modified Hartmann sensor. Overlapping sections of the fringes are imaged by a lenslet array onto contiguous areas in a large-format camera. Low-layer turbulence causes an overall shift of the fringe pattern in each lenslet, and high-altitude turbulence results in internal deformations in the pattern. Parallel Fourier analysis permits separation of the atmospheric layers. Two mirrors, one conjugate to a ground layer and the other conjugate to a single high-altitude layer, are shown to widen the field of view significantly compared with existing methods.

  9. Microlens array processor with programmable weight mask and direct optical input

    NASA Astrophysics Data System (ADS)

    Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen

    1999-03-01

    We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on acrylic substrate and a spatial light modulator as transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time allowing adaptive optical signal processing.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H. D.; Fiorito, R. B.; Corbett, J.

    The 3 GeV SPEAR3 synchrotron light source operates in top-up injection mode with up to 500 mA circulating in the storage ring (equivalently 392 nC). Each injection pulse contains 40–80 pC, producing a contrast ratio between total stored charge and injected charge of about 6500:1. In order to study transient injected-beam dynamics during user operations, it is desirable to optically image the injected pulse in the presence of the bright stored beam. In the present work this is done by imaging the visible component of the synchrotron radiation onto a digital micro-mirror-array device (DMD), which is then used as an optical mask to block out light from the bright central core of the stored beam. The physical masking, together with an asynchronously gated ICCD imaging camera, makes it possible to observe the weak injected beam component on a turn-by-turn basis. The DMD optical masking system works similarly to a classical solar coronagraph but has some distinct practical advantages: rapid adaptation to changes in the shape of the stored beam, a high extinction ratio for unwanted light, and minimum scattering from the primary beam into the secondary optics. In this paper we describe the DMD masking method, features of the high-dynamic-range point spread function for the SPEAR3 optical beam line, and measurements of the injected beam in the presence of the stored beam.
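    The quoted charge figures can be checked with a quick back-of-the-envelope script (illustrative; the ring circumference is our assumed value, not stated in the abstract):

    ```python
    # The stored charge follows from the 500 mA current and the revolution
    # period (Q = I * T_rev), and the contrast ratio from a mid-range injected
    # charge of ~60 pC.
    C_RING_M = 234.0   # SPEAR3 circumference in metres (assumed value)
    C_LIGHT = 2.998e8  # speed of light, m/s
    T_rev = C_RING_M / C_LIGHT          # revolution period, ~0.78 us
    stored_nC = 0.5 * T_rev * 1e9       # Q = I * T for 500 mA, in nC
    ratio = (stored_nC * 1e-9) / 60e-12 # vs ~60 pC injected
    print(f"stored ~{stored_nC:.0f} nC, contrast ~{ratio:.0f}:1")
    ```

    This reproduces the order of magnitude of both quoted numbers (~390 nC stored and a contrast ratio of roughly 6500:1).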

  12. The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second

    NASA Technical Reports Server (NTRS)

    Miller, Cearcy D

    1946-01-01

    The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.

  13. MTF measurements in real time for performance analysis of electro-optical systems

    NASA Astrophysics Data System (ADS)

    Stuchi, Jose Augusto; Signoreto Barbarini, Elisa; Vieira, Flavio Pascoal; dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fatima Maria Mitsue; Castro Neto, Jarbas C.; Linhari Rodrigues, Evandro Luis

    2012-06-01

    The need for methods and tools that assist in determining the performance of optical systems is currently increasing. One of the most widely used methods for analyzing optical systems is to measure the Modulation Transfer Function (MTF). The MTF represents a direct and quantitative verification of image quality. This paper presents the implementation of software to calculate the MTF of electro-optical systems. The software was used for calculating the MTF of a digital fundus camera, a thermal imager, and an ophthalmologic surgery microscope. The MTF information aids the analysis of alignment and the measurement of optical quality, and also defines the limiting resolution of optical systems. The results obtained with the fundus camera and thermal imager were compared with theoretical values. For the microscope, the results were compared with the MTF measured on a Zeiss microscope, which is the quality standard for ophthalmological microscopes.
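    A common way to compute the MTF in software is as the normalized magnitude of the Fourier transform of a measured line spread function (LSF). A minimal sketch with a synthetic Gaussian LSF (all names and values are ours, not the paper's):

    ```python
    import numpy as np

    # The MTF is |FFT(LSF)| normalised to 1 at zero spatial frequency; the
    # frequency axis is fixed by the sampling pitch of the detector.
    def mtf_from_lsf(lsf, pixel_pitch_mm):
        lsf = lsf / lsf.sum()                     # normalise LSF area to 1
        mtf = np.abs(np.fft.rfft(lsf))            # magnitude spectrum
        freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm
        return freqs, mtf / mtf[0]

    x = np.arange(-32, 32)
    lsf = np.exp(-0.5 * (x / 2.0) ** 2)           # synthetic Gaussian LSF
    freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.01)
    print(f"MTF at {freqs[1]:.1f} cycles/mm: {mtf[1]:.3f}")
    ```

    In practice the LSF would be derived from an imaged slit or the derivative of an edge profile rather than synthesized, but the transform step is the same.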

  14. Procedure Enabling Simulation and In-Depth Analysis of Optical Effects in Camera-Based Time-of-Flight Sensors

    NASA Astrophysics Data System (ADS)

    Baumgart, M.; Druml, N.; Consani, M.

    2018-05-01

    This paper presents a simulation approach for Time-of-Flight cameras to estimate sensor performance and accuracy, as well as to help in understanding experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach with the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
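    The role of the optical path length as the master parameter for depth can be illustrated with a toy continuous-wave ToF model (our own sketch, not the paper's implementation; the modulation frequency is an assumed example):

    ```python
    import math

    # In a continuous-wave ToF camera, the ray-traced optical path length (OPL)
    # from emitter to scene to sensor sets the modulation phase shift; the
    # camera measures that phase and converts it back to a radial distance,
    # halving for the out-and-back path.
    C = 2.998e8     # speed of light, m/s
    F_MOD = 20e6    # modulation frequency, Hz (assumed example)

    def tof_distance_from_opl(opl_m, f_mod=F_MOD):
        phase = 2 * math.pi * f_mod * opl_m / C    # phase shift, radians
        return C * phase / (4 * math.pi * f_mod)   # distance = OPL / 2

    # A target 3 m away: emitter -> target -> sensor OPL is ~6 m
    print(round(tof_distance_from_opl(6.0), 6))  # 3.0
    ```

    Multi-path effects then appear naturally in such a model as rays whose OPL exceeds twice the geometric target distance, biasing the recovered depth.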

  15. High-speed spectral domain polarization- sensitive optical coherence tomography using a single camera and an optical switch at 1.3 microm.

    PubMed

    Lee, Sang-Won; Jeong, Hyun-Woo; Kim, Beop-Min

    2010-01-01

    We propose high-speed spectral domain polarization-sensitive optical coherence tomography (SD-PS-OCT) using a single camera and a 1x2 optical switch in the 1.3-microm region. The PS low-coherence interferometer used in the system is constructed using free-space optics. The reflected horizontal and vertical polarization light rays are delivered via the optical switch to a single spectrometer in turn. Therefore, our system costs less to build than those that use dual spectrometers, and the processes of timing and triggering are simpler from the viewpoints of both hardware and software. Our SD-PS-OCT has a sensitivity of 101.5 dB, an axial resolution of 8.2 microm, and an acquisition speed of 23,496 A-scans per second. We obtain the intensity, phase retardation, and fast-axis orientation images of a rat tail tendon ex vivo.

  16. Quasi-microscope concept for planetary missions.

    PubMed

    Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D

    1977-09-01

    Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution that approaches 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.

  17. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.
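    The block-adaptive idea can be illustrated with a toy predictor selector (our own sketch, not the paper's actual coder): for each block, choose the same-colour-plane neighbour predictor that leaves the smaller residual, and compare the zeroth-order entropy of the residuals against the raw samples:

    ```python
    import numpy as np

    # In a Bayer CFA the same-colour neighbour is 2 pixels away, so the
    # predictors difference against a 2-pixel shift; the block adaptively picks
    # whichever (left or top) leaves the smaller total residual magnitude.
    def entropy_bits(block):
        _, counts = np.unique(block, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum()) * block.size

    def adaptive_residual(block):
        left = block - np.roll(block, 2, axis=1)  # same-colour left neighbour
        top = block - np.roll(block, 2, axis=0)   # same-colour top neighbour
        return left if np.abs(left).sum() <= np.abs(top).sum() else top

    # A smooth synthetic 16x16 "raw" gradient block
    img = np.add.outer(3 * np.arange(16), 2 * np.arange(16)).astype(np.int16)
    res = adaptive_residual(img)
    print(entropy_bits(res) < entropy_bits(img))  # True: residuals code cheaper
    ```

    A real coder would follow the prediction step with an entropy coder such as Golomb-Rice coding of the residuals; the entropy comparison here only estimates the achievable gain.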

  18. FOREX-A Fiber Optics Diagnostic System For Study Of Materials At High Temperatures And Pressures

    NASA Astrophysics Data System (ADS)

    Smith, D. E.; Roeske, F.

    1983-03-01

    We have successfully fielded a Fiber Optics Radiation EXperiment system (FOREX) designed for measuring material properties at high temperatures and pressures on an underground nuclear test. The system collects light from radiating materials and transmits it through several hundred meters of optical fibers to a recording station consisting of a streak camera with film readout. The use of fiber optics provides a faster time response than can presently be obtained with equalized coaxial cables over comparable distances. Fibers also have significant cost and physical size advantages over coax cables. The streak camera achieves a much higher information density than an equivalent oscilloscope system, and it also serves as the light detector. The result is a wide bandwidth high capacity system that can be fielded at a relatively low cost in manpower, space, and materials. For this experiment, the streak camera had a 120 ns time window with a 1.2 ns time resolution. Dynamic range for the system was about 1000. Beam current statistical limitations were approximately 8% for a 0.3 ns wide data point at one decade above the threshold recording intensity.

  19. A versatile photogrammetric camera automatic calibration suite for multispectral fusion and optical helmet tracking

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason; Jermy, Robert; Nicolls, Fred

    2014-06-01

    This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length, and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb-line method, allows many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted-to-undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined so as to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
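    The radial-plus-tangential parameterization referred to here is Brown's classic distortion model. A minimal sketch of the forward (undistorted-to-distorted) mapping in normalised image coordinates (the coefficient values are made up for illustration):

    ```python
    # Brown's distortion model: radial terms k1..kn scale the point by
    # (1 + k1*r^2 + k2*r^4 + ...), and tangential terms p1, p2 account for
    # decentering of the lens elements.
    def brown_distort(x, y, k=(1e-1, -2e-2), p=(1e-3, -5e-4)):
        r2 = x * x + y * y
        radial, r_pow = 1.0, r2
        for ki in k:                  # 1 + k1*r^2 + k2*r^4 + ...
            radial += ki * r_pow
            r_pow *= r2
        xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
        yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
        return xd, yd

    print(brown_distort(0.5, -0.3))
    ```

    The inverse (distorted-to-undistorted) mapping has no closed form and is typically obtained by fitting a second coefficient set, which is why the system above produces coefficients for both directions.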

  20. Hypothesis on human eye perceiving optical spectrum rather than an image

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Szu, Harold

    2015-05-01

    It is common knowledge that we see the world because our eyes can perceive an optical image. A digital camera seems a good example of simulating the eye's imaging system. However, signal sensing and imaging on the human retina are very complicated. There are at least five layers (of neurons) along the signal pathway: photoreceptors (cones and rods), bipolar, horizontal, amacrine, and ganglion cells. To sense an optical image, it seems that photoreceptors (as sensors) plus ganglion cells (converting to electrical signals for transmission) would be good enough. Image sensing does not require a nonuniform distribution of photoreceptors like the fovea. There are some challenging questions; for example, why don't we notice the "blind spots" (where nerve fibers exit the eyes)? A similar situation occurs in glaucoma patients, who do not notice their vision loss until 50% or more of the nerves have died. Our hypothesis is that the human retina initially senses the optical (i.e., Fourier) spectrum rather than the optical image. Due to the symmetry property of the Fourier spectrum, the signal lost from a blind spot or from dead nerves (in glaucoma patients) can be recovered. The eye's logarithmic response to input light intensity is much like displaying Fourier magnitude. The optics and structures of human eyes satisfy the needs of optical Fourier spectrum sampling. It remains unclear where and how an inverse Fourier transform is performed in the human vision system to obtain an optical image. Phase retrieval techniques in the compressive sensing domain enable image reconstruction even without phase inputs. A spectrum-based imaging system can potentially tolerate up to 50% bad sensors (pixels), adapt to a large dynamic range (with logarithmic response), etc.
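    The symmetry argument can be demonstrated directly: the Fourier spectrum of a real-valued signal is Hermitian, X[-f] = conj(X[f]), so a lost half of the spectrum can be rebuilt from the surviving half. A small sketch (signal and sizes are arbitrary):

    ```python
    import numpy as np

    # For a real signal of length N, X[N-k] = conj(X[k]); if the negative-
    # frequency half of the spectrum is lost (a spectral "blind spot"), it can
    # be reconstructed exactly from the positive-frequency half.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(64)            # real-valued "image" signal
    X = np.fft.fft(x)

    X_rebuilt = X.copy()
    X_rebuilt[33:] = np.conj(X[1:32][::-1])  # rebuild lost half by symmetry
    x_rec = np.fft.ifft(X_rebuilt).real

    print(np.allclose(x, x_rec))  # True
    ```

    This only covers the magnitude/phase redundancy of real signals; recovering an image from magnitude alone, as the abstract suggests, additionally requires phase retrieval.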

  1. Optical design of MEMS-based infrared multi-object spectrograph concept for the Gemini South Telescope

    NASA Astrophysics Data System (ADS)

    Chen, Shaojie; Sivanandam, Suresh; Moon, Dae-Sik

    2016-08-01

    We discuss the optical design of an infrared multi-object spectrograph (MOS) concept that is designed to take advantage of the multi-conjugate adaptive optics (MCAO) corrected field at the Gemini South telescope. This design employs a unique, cryogenic MEMS-based focal plane mask to select target objects for spectroscopy by utilizing the Micro-Shutter Array (MSA) technology originally developed for the Near Infrared Spectrometer (NIRSpec) of the James Webb Space Telescope (JWST). The optical design is based on all-spherical refractive optics, which serves both imaging and spectroscopic modes across the wavelength range of 0.9-2.5 μm. The optical system consists of a reimaging system, the MSA, a collimator, volume phase holographic (VPH) grisms, and spectrograph camera optics. The VPH grisms, which are VPH gratings sandwiched between two prisms, provide high dispersing efficiencies, and a set of several VPH grisms provides broad spectral coverage at high throughput. The imaging mode is implemented by moving the MSA and the dispersing unit out of the beam. We optimize both the imaging and spectroscopic modes simultaneously, while paying special attention to the performance of the pupil imaging at the cold stop. Our current design provides a 1' × 1' field of view for the imaging mode and a 0.5' × 1' field of view for the spectroscopic mode on a 2048 × 2048 pixel HAWAII-2RG detector array. The spectrograph's slit width and spectral resolving power are 0.18'' and 3,000, respectively, and spectra of up to 100 objects can be obtained simultaneously. We present the overall results of simulated performance using the optical model we designed.

  2. Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras.

    PubMed

    Brauers, Johannes; Aach, Til

    2011-02-01

    High-fidelity color image acquisition with a multispectral camera utilizes optical filters to separate the visible electromagnetic spectrum into several passbands. This is often realized with a computer-controlled filter wheel, where each position is equipped with an optical bandpass filter. For each filter wheel position, a grayscale image is acquired, and the passbands are finally combined into a multispectral image. However, the different optical properties and non-coplanar alignment of the filters cause image aberrations, since the optical path is slightly different for each filter wheel position. As in a normal camera system, the lens causes additional wavelength-dependent image distortions called chromatic aberrations. When transforming the multispectral image with these aberrations into an RGB image, color fringes appear, and the image exhibits a pincushion or barrel distortion. In this paper, we address both the distortions caused by the lens and those caused by the filters. Based on a physical model of the bandpass filters, we show that the aberrations caused by the filters can be modeled by displaced image planes. The lens distortions are modeled by an extended pinhole camera model, which results in a remaining mean calibration error of only 0.07 pixels. Using an absolute calibration target, we then geometrically calibrate each passband and compensate for both lens and filter distortions simultaneously. We show that both types of aberrations can be compensated and present detailed results on the remaining calibration errors.
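
    The extended pinhole model referred to above belongs to the family of standard Brown-Conrady distortion models; the sketch below, with made-up coefficients rather than the paper's calibrated values, shows how such a model is applied and inverted:

```python
def distort(xn, yn, k1, k2, p1, p2):
    """Brown-Conrady model: map ideal normalized coords to distorted ones.
    k1, k2 are radial coefficients; p1, p2 are tangential coefficients."""
    r2 = xn**2 + yn**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn**2)
    yd = yn * radial + p1 * (r2 + 2 * yn**2) + 2 * p2 * xn * yn
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=20):
    """Invert the distortion by fixed-point iteration (converges for the
    small distortions typical of photographic lenses)."""
    xn, yn = xd, yd
    for _ in range(iters):
        x2, y2 = distort(xn, yn, k1, k2, p1, p2)
        xn, yn = xn - (x2 - xd), yn - (y2 - yd)
    return xn, yn
```

    Round-tripping a point through `distort` and `undistort` returns the original coordinates to numerical precision.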

  3. Center for Adaptive Optics | Search

    Science.gov Websites

    Center for Adaptive Optics, a University of California Science and Technology Center. Last Modified: Sep 21, 2010.

  4. Aerial Photography

    NASA Technical Reports Server (NTRS)

    1985-01-01

    John Hill, a pilot and commercial aerial photographer, needed an information base. He consulted NERAC and requested a search of the latest developments in camera optics. NERAC provided information; Hill contacted the manufacturers of camera equipment and reduced his photographic costs significantly.

  5. 3-D endoscopic imaging using plenoptic camera

    PubMed Central

    Le, Hanh N. D.; Decker, Ryan; Opferman, Justin; Kim, Peter; Krieger, Axel

    2017-01-01

    Three-dimensional endoscopic imaging using the plenoptic technique combined with an F-matching algorithm is pursued in this study. Custom relay optics were designed to integrate a commercial surgical straight endoscope with a plenoptic camera. PMID:29276806

  6. VizieR Online Data Catalog: Antennae galaxies (NGC 4038/4039) revisited (Whitmore+, 2010)

    NASA Astrophysics Data System (ADS)

    Whitmore, B. C.; Chandar, R.; Schweizer, F.; Rothberg, B.; Leitherer, C.; Rieke, M.; Rieke, G.; Blair, W. P.; Mengel, S.; Alonso-Herrero, A.

    2012-06-01

    Observations of the main bodies of NGC 4038/39 were made with the Hubble Space Telescope (HST), using the ACS, as part of Program GO-10188. Multi-band photometry was obtained in the following optical broadband filters: F435W (~B), F550M (~V), and F814W (~I). Archival F336W photometry of the Antennae (Program GO-5962) was used to supplement our optical ACS/WFC observations. Infrared observations were made using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) camera on HST as part of Program GO-10188. Observations were made using the NIC2 camera with the F160W, F187N, and F237M filters, and the NIC3 camera with the F110W, F160W, F164W, F187N, and F222M filters. (10 data files).

  7. Miniaturisation of Pressure-Sensitive Paint Measurement Systems Using Low-Cost, Miniaturised Machine Vision Cameras.

    PubMed

    Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A

    2017-07-25

    Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research characterizes the relevant imaging properties of such cameras and demonstrates their applicability to PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on-board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.

  9. Sodium Laser Guide Star Technique, Spectroscopy and Imaging with Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Ge, Jian

    A sodium laser guide star (LGS) adaptive optics (AO) system developed at Steward Observatory is to be used at the 6.5m MMT. Annual measurements at Kitt Peak show that the mean mesospheric sodium column density varies from ~2×10^9 cm^-2 (summer) to ~5×10^9 cm^-2 (winter). The sodium column density also varies by a factor of two during a one-hour period. The first simultaneous measurements of sodium LGS brightness, sodium column density, and laser power were obtained. The absolute sodium return for a continuous-wave circularly polarized beam is 1.2(±0.3)×10^6 photons s^-1 m^-2 W^-1 at a sodium column density of 3.7×10^9 cm^-2. Theoretical studies demonstrate that the 6.5m MMT LGS AO can provide Strehl ratios better than 0.15 and about 50% flux concentration within a 0.2'' aperture for 1-5.5 μm under median seeing. This correction will be available over the full sky. Better Strehl ratios and higher flux concentration can be achieved with natural guide stars, but with limited sky coverage. The AO-corrected field of view is about 60''. The Arizona IR Imager and Echelle Spectrograph (ARIES) was designed to match the 6.5m MMT AO. Detection limits more than 2 magnitudes fainter can be reached with the AO than without it. A pre-ARIES wide-field near-IR camera was designed, built, and tested. The camera provides 1'' images in the near-IR over an 8.5 × 8.5 arcmin^2 field. The 10-σ detection limit with one-minute exposures is 17.9 mag in the K band. A prototype very-high-resolution cross-dispersed optical echelle spectrograph was designed and built to match the Starfire Optical Range 1.5m AO images. Interstellar K I 7698 Å absorption lines have been detected in the spectra of αCyg and ζPer. The spectral resolution is 250,000, and about 300 Å of wavelength coverage was obtained in a single exposure. A total detection efficiency of 1% has been achieved. For the first time, a near-single-mode fiber with a 10 μm core size was used to transmit the Mt. Wilson 100-inch AO-corrected beams to a spectrograph. 
The coupling efficiency of the fiber reached up to 70%. Spectra of αOri were recorded at a spectral resolution of 200,000, with total wavelength coverage of about 650 Å per exposure.
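
    Assuming the return scales linearly with laser power and sodium column density, the measured specific return gives a quick estimate of photon flux at the telescope (a back-of-envelope sketch, not a calculation from the thesis):

```python
# Expected sodium LGS photon return, assuming the measured specific return
# of 1.2e6 photons s^-1 m^-2 W^-1 (at a column density of 3.7e9 cm^-2)
# and simple linear scaling with laser power and column density.
SPECIFIC_RETURN = 1.2e6      # photons s^-1 m^-2 W^-1
REF_COLUMN = 3.7e9           # cm^-2

def photon_return(power_w, column_cm2):
    return SPECIFIC_RETURN * power_w * (column_cm2 / REF_COLUMN)

# Hypothetical 10 W laser at the winter column density of ~5e9 cm^-2:
flux = photon_return(10.0, 5e9)   # photons s^-1 m^-2 at the telescope
```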

  10. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close observation of space targets. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path so that it is imaged on the same focal plane as the target, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane while the physical position of the standard reference object does not change; the camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
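
    Re-calibrating the camera's external parameters from the imaged standard reference amounts to re-estimating a projection matrix from known reference points. A minimal direct-linear-transform (DLT) sketch with synthetic values (not the paper's actual method or data):

```python
import numpy as np

def dlt_projection_matrix(Xw, uv):
    """Direct linear transform: estimate a 3x4 projection matrix (up to
    scale) from >= 6 world-to-image point correspondences via SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(Xw, uv):
        p = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([p, np.zeros(4), -u * p]))
        rows.append(np.concatenate([np.zeros(4), p, -v * p]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

def project(P, Xw):
    h = P @ np.append(Xw, 1.0)
    return h[:2] / h[2]

# Synthetic check: a camera with made-up intrinsics views 8 reference points.
rng = np.random.default_rng(1)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [-0.2], [5.0]])])
pts = rng.uniform(-1.0, 1.0, (8, 3))
obs = np.array([project(P_true, X) for X in pts])
P_est = dlt_projection_matrix(pts, obs)
```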

  11. ARNICA, the Arcetri Near-Infrared Camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.

    1996-04-01

    ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December 1992 to December 1993) and an observing run at the William Herschel Telescope, Canary Islands (December 1993). System performance is defined in terms of the efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources. (SECTION: Astronomical Instrumentation)

  12. General Model of Photon-Pair Detection with an Image Sensor

    NASA Astrophysics Data System (ADS)

    Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.

    2018-05-01

    We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
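
    The intensity-correlation measurement underlying such a model can be sketched as a pixel-pair covariance estimated over many frames; the simulation below is a toy stand-in for real photon-pair data, not the authors' analytic model:

```python
import numpy as np

def intensity_correlations(frames):
    """Pixel-pair intensity covariance <I_i I_j> - <I_i><I_j> estimated
    over many camera frames; correlated photon pairs show up as positive
    off-diagonal entries. frames: (n_frames, n_pixels) array."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    return frames.T @ frames / len(frames) - np.outer(mean, mean)

# Toy simulation: pixels 0 and 3 receive correlated pair events on top
# of an independent Poisson background.
rng = np.random.default_rng(2)
frames = rng.poisson(0.1, size=(20000, 4)).astype(float)
pairs = rng.poisson(0.5, size=20000)
frames[:, 0] += pairs
frames[:, 3] += pairs

G = intensity_correlations(frames)   # G[0, 3] stands out above background
```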

  13. ATTICA family of thermal cameras in submarine applications

    NASA Astrophysics Data System (ADS)

    Kuerbitz, Gunther; Fritze, Joerg; Hoefft, Jens-Rainer; Ruf, Berthold

    2001-10-01

    Optronics Mast Systems (US: Photonics Mast Systems) are electro-optical devices which enable a submarine crew to observe the scenery above water while submerged. Unlike classical submarine periscopes they are non-hull-penetrating and therefore have no direct viewing capability. Typically they have electro-optical cameras both for the visual and for an IR spectral band, with panoramic view and a stabilized line of sight. They can optionally be equipped with laser range-finders, antennas, etc. The brand name ATTICA (Advanced Two-dimensional Thermal Imager with CMOS-Array) characterizes a family of thermal cameras using focal-plane-array (FPA) detectors which can be tailored to a variety of requirements. The modular design of the ATTICA components allows the use of various detectors (InSb, CMT 3-5 μm, CMT 7-11 μm) for specific applications. By means of a microscanner, ATTICA cameras achieve full standard TV resolution using detectors with only 288 × 384 (US: 240 × 320) detector elements. A typical requirement for Optronics Mast Systems is a Quick-Look-Around capability. For FPA cameras this implies the need for a 'descan' module, which can be incorporated in the ATTICA cameras without complications.

  14. FOAM: the modular adaptive optics framework

    NASA Astrophysics Data System (ADS)

    van Werkhoven, T. I. M.; Homs, L.; Sliepen, G.; Rodenhuis, M.; Keller, C. U.

    2012-07-01

    Control software for adaptive optics systems is mostly custom built and very specific in nature. We have developed FOAM, a modular adaptive optics framework for controlling and simulating adaptive optics systems in various environments. Portability is provided both for different control hardware and for different adaptive optics setups. To achieve this, FOAM is written in C++ and runs on standard CPUs. Furthermore, we use standard Unix libraries and compilation procedures and implemented a hardware abstraction layer in FOAM. We have successfully implemented FOAM in the lab on the adaptive optics system of ExPo - a high-contrast imaging polarimeter developed at our institute - and will test it on-sky in late June 2012. We also plan to implement FOAM on adaptive optics systems for microscopy and solar adaptive optics. FOAM is available under the GNU GPL license and is free to be used by anyone.

  15. Camera System MTF: combining optic with detector

    NASA Astrophysics Data System (ADS)

    Andersen, Torben B.; Granger, Zachary A.

    2017-08-01

    MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for the combination of an optic with a detector. The formulations pertaining to a combined electro-optical system MTF are mostly based on theory and on the assumption that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
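
    The simplification can be sketched directly: if the detector MTF depends only on pixel pitch, it factors out of the spectral average over wavelengths. A numpy sketch with hypothetical optics MTFs and spectral weights (illustrative values, not the paper's data):

```python
import numpy as np

def detector_mtf(f, pitch):
    """Pixel-aperture MTF |sinc(f * pitch)|; np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(f * pitch))

def polychromatic_system_mtf(f, optics_mtf_by_wl, weights, pitch):
    """With a wavelength-independent detector MTF, the detector term
    factors out of the spectral average -- the simplification above.
    optics_mtf_by_wl maps wavelength -> optics MTF sampled on grid f;
    weights are normalized spectral weights."""
    poly_optics = sum(w * optics_mtf_by_wl[lam] for lam, w in weights.items())
    return poly_optics * detector_mtf(f, pitch)

# Illustrative values: 10 um pixels, two wavelengths with toy linear MTFs.
f = np.linspace(0.0, 50.0, 101)                  # cycles/mm
pitch = 0.010                                    # mm
optics = {0.55: np.clip(1 - f / 180.0, 0, 1),    # hypothetical optics MTFs
          0.85: np.clip(1 - f / 120.0, 0, 1)}
weights = {0.55: 0.6, 0.85: 0.4}
mtf = polychromatic_system_mtf(f, optics, weights, pitch)
```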

  16. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight application do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  17. Perspective and potential of smart optical materials

    NASA Astrophysics Data System (ADS)

    Choi, Sang H.; Duzik, Adam J.; Kim, Hyun-Jung; Park, Yeonjoon; Kim, Jaehwan; Ko, Hyun-U.; Kim, Hyun-Chan; Yun, Sungryul; Kyung, Ki-Uk

    2017-09-01

    The increasing requirements of hyperspectral imaging optics, electro-/photo-chromic materials, negative-refractive-index metamaterial optics, and miniaturized optical components, from micro-scale to quantum-scale optics, have all contributed to new features and advancements in optics technology. Development of multifunctional optics has pushed the boundaries of optics into new fields that require new disciplines and materials to maximize the potential benefits. The purpose of this study is to understand and show the fundamental materials and fabrication technology for field-controlled spectrally active optics (referred to as smart optics) that are essential for future industrial, scientific, military, and space applications, such as membrane optics, filters, windows for sensors and probes, telescopes, spectroscopes, cameras, light valves, light switches, and flat-panel displays. The proposed smart optics are based on the Stark and Zeeman effects in materials tailored with quantum-dot arrays and thin films made from readily polarizable materials via ferroelectricity or ferromagnetism. Bound excitonic states of organic crystals are also capable of optical adaptability, tunability, and reconfigurability. To show the benefits of smart optics, this paper reviews the spectral characteristics of smart optical materials and device technology. Experiments testing the quantum-confined Stark effect arising from rare-earth-element doping in semiconductors, and the effects of applied electric fields on spectra and refractive index, are discussed. Other bulk and dopant materials were also found to show the same shifts in spectrum and refractive index. Other efforts focus on materials for creating field-controlled, spectrally smart active optics over a selected spectral range. Surface-plasmon-polariton transmission of light through apertures is also discussed, along with potential applications. 
New breakthroughs in micro-scale multiple-zone-plate optics acting as micro convex lenses are reviewed, along with the newly discovered pseudo-focal point not predicted by conventional optics modeling. Micron-sized solid-state beam-scanner chips for laser waveguides are reviewed as well.

  18. Miniaturized unified imaging system using bio-inspired fluidic lens

    NASA Astrophysics Data System (ADS)

    Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa

    2008-08-01

    Miniaturized imaging systems have become ubiquitous as they are found in an ever-increasing number of devices, such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems have not been significantly different from conventional cameras. The only established method to achieve focusing is by varying the lens distance. On the other hand, the variable-shape crystalline lens found in animal eyes offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of the optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focusing, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image quality difference between the central vision and peripheral vision and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color dispersion correction. A design of the world's smallest surgical camera with 3X optical zoom capabilities is also demonstrated using the approach of hybrid lenses.
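
    The focusing principle rests on the thin-lens lensmaker equation, 1/f = (n-1)(1/R1 - 1/R2): pumping fluid changes the surface curvatures, and hence the focal length, with no moving elements. A sketch with illustrative values (not the authors' lens parameters):

```python
def fluidic_lens_focal_length(n, r1_mm, r2_mm):
    """Thin-lens lensmaker equation: 1/f = (n-1)(1/R1 - 1/R2).
    A fluidic lens tunes focus by changing surface curvature rather
    than by moving elements; all values here are illustrative."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# Pumping fluid to steepen the curvature shortens the focal length:
f_relaxed = fluidic_lens_focal_length(1.33, 20.0, -20.0)   # gentle curvature
f_pumped = fluidic_lens_focal_length(1.33, 8.0, -8.0)      # steep curvature
```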

  19. Opto-mechanical design of the G-CLEF flexure control camera system

    NASA Astrophysics Data System (ADS)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial-velocity measurements. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator, including triple lenses, for producing a pupil; neutral-density filters that allow a much brighter star to be used as a target or guide; a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror; a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane; and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified since the PDR in April 2015.

  20. Material of LAPAN's thermal IR camera equipped with two microbolometers in one aperture

    NASA Astrophysics Data System (ADS)

    Bustanul, A.; Irwan, P.; Andi M., T.

    2017-11-01

    Besides the wavelength range used, another factor must be considered when designing an optical system: the choice of material appropriate for the selected spectral bands. Because the range of available materials is limited and they are expensive, choosing and specifying materials for infrared (IR) wavelengths is more difficult and complex than for the visible spectrum. We faced the same problem while designing our thermal IR camera, which is equipped with two microbolometers sharing one aperture. Two spectral bands, 3-4 μm (MWIR) and 8-12 μm (LWIR), were selected for our thermal IR camera to address its missions, i.e., peat-land fires, volcanic activity, and Sea Surface Temperature (SST). Based on those bands, we chose the appropriate materials for the optics of LAPAN's IR camera. This paper describes the materials of LAPAN's IR camera equipped with two microbolometers in one aperture. We first studied the properties of optical materials across the relevant IR bands. The analysis then considered several aspects: transmission, index of refraction, and thermal properties, including the index gradient and the coefficient of thermal expansion (CTE). Moreover, we used commercial software, Thermal Desktop/SINDA FLUINT, to strengthen the analysis. Constraints such as the space environment, low cost, and performance (mainly durability and transmission) were also considered throughout the trade-off work. The results of these analyses, both graphical and measured, indicate that the lens of LAPAN's IR camera with a shared aperture should be based on germanium/zinc selenide materials.

  1. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation, and LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. To increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball's optical system, assuming that a contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that the eye's refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
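
    The linear-triangulation step can be sketched as the standard DLT solution of the two-view constraint; the camera matrices below are synthetic, not the calibrated fundus-camera values:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a single point seen in two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image points.
    Returns the 3-D point minimizing the algebraic error."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic two-view check with made-up intrinsics and a small baseline.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.3, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
X_est = triangulate(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2])
```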

  2. Trade-off between TMA and RC configurations for JANUS camera

    NASA Astrophysics Data System (ADS)

    Greggio, D.; Magrin, D.; Munari, M.; Paolinetti, R.; Turella, A.; Zusi, M.; Cremonese, G.; Debei, S.; Della Corte, V.; Friso, E.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Mugnuolo, R.; Olivieri, A.; Palumbo, P.; Ragazzoni, R.; Schmitz, N.

    2016-07-01

    JANUS (Jovis Amorum Ac Natorum Undique Scrutator) is a high-resolution visible camera designed for the ESA space mission JUICE (Jupiter Icy moons Explorer). The main scientific goal of JANUS is to observe the surfaces of the Jupiter satellites Ganymede and Europa in order to characterize their physical and geological properties. During the design phases, we have proposed two possible optical configurations, a Three-Mirror Anastigmat (TMA) and a Ritchey-Chrétien (RC), both of which match the performance requirements. Here we describe the two optical solutions and compare them in terms of achieved optical quality, sensitivity to misalignments, and stray-light performance.

  3. SITHON: A Wireless Network of in Situ Optical Cameras Applied to the Early Detection-Notification-Monitoring of Forest Fires

    PubMed Central

    Tsiourlis, Georgios; Andreadakis, Stamatis; Konstantinidis, Pavlos

    2009-01-01

    The SITHON system, a fully wireless optical imaging system, integrating a network of in-situ optical cameras linking to a multi-layer GIS database operated by Control Operating Centres, has been developed in response to the need for early detection, notification and monitoring of forest fires. This article presents in detail the architecture and the components of SITHON, and demonstrates the first encouraging results of an experimental test with small controlled fires over Sithonia Peninsula in Northern Greece. The system has already been scheduled to be installed in some fire prone areas of Greece. PMID:22408536

  4. STS-61 art concept of astronauts during HST servicing

    NASA Image and Video Library

    1993-11-12

    S93-48826 (November 1993) --- This artist's rendition of the 1993 Hubble Space Telescope (HST) servicing mission shows astronauts installing the new Wide Field/Planetary Camera (WF/PC 2). The instrument replaces the original camera and contains corrective optics that compensate for the telescope's flawed primary mirror. During the 11-plus-day mission, astronauts are also scheduled to install the Corrective Optics Space Telescope Axial Replacement (COSTAR) -- an optics package that focuses and routes light to the other three instruments aboard the observatory -- a new set of solar array panels, and other hardware and components. The artwork was done for JPL by Paul Hudson.

  5. Comparison of low-cost handheld retinal camera and traditional table top retinal camera in the detection of retinal features indicating a risk of cardiovascular disease

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Wigdahl, J.; Nemeth, S.; Zamora, G.; Ebrahim, E.; Soliz, P.

    2018-02-01

    Retinal abnormalities associated with hypertensive retinopathy are useful in assessing the risk of cardiovascular disease, heart failure, and stroke. Assessing these risks as part of primary care can lead to a decrease in the incidence of cardiovascular disease-related deaths. Primary care is a resource-limited setting where low-cost retinal cameras may bring needed help without compromising care. We compared a low-cost handheld retinal camera to a traditional tabletop retinal camera on their optical characteristics and their performance in detecting hypertensive retinopathy. A retrospective dataset of N=40 subjects (28 with hypertensive retinopathy, 12 controls) was used from a clinical study conducted at a primary care clinic in Texas. Non-mydriatic retinal fundus images were acquired using a Pictor Plus handheld camera (Volk Optical Inc.) and a Canon CR1-Mark II tabletop camera (Canon USA) during the same encounter. The images from each camera were graded by a licensed optometrist according to the universally accepted Keith-Wagener-Barker Hypertensive Retinopathy Classification System, three weeks apart to minimize memory bias. The sensitivity of the handheld camera in detecting any level of hypertensive retinopathy was 86%, with the Canon as the reference. Insufficient photographer skill produced 70% of the false-negative cases; the other 30% were due to the handheld camera's insufficient spatial resolution to resolve vascular changes such as minor A/V nicking and copper wiring, but these were associated with non-referable disease. Physician evaluation of the performance of the handheld camera indicates it is sufficient to provide high-risk patients with adequate follow-up and management.
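
    A back-of-envelope check of the reported figures (the breakdown below is implied arithmetic, not additional data from the study):

```python
# Sensitivity = TP / (TP + FN), with the tabletop camera's grading as
# the reference standard. With 28 diseased subjects and 86% sensitivity,
# roughly 4 false negatives are implied, ~70% of them (about 3) due to
# photographer skill -- an implied breakdown, not reported counts.
diseased = 28
sensitivity = 0.86
false_negatives = round(diseased * (1 - sensitivity))
from_photographer = round(false_negatives * 0.70)
```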

  6. Camera-Only Kinematics for Small Lunar Rovers

    NASA Astrophysics Data System (ADS)

    Fang, E.; Suresh, S.; Whittaker, W.

    2016-11-01

    Knowledge of the kinematic state of rovers is critical. Existing methods add sensors and wiring to moving parts, which can fail and adds mass and volume. This research presents a method to optically determine kinematic state using a single camera.

  7. Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera

    NASA Astrophysics Data System (ADS)

    Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi

    2015-12-01

    This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
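
    Propagating the converted complex amplitude to the CGH plane is commonly done with the angular-spectrum method; a minimal sketch with illustrative parameters (our own stand-in, not the paper's code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex amplitude a distance z using the
    angular-spectrum method (evanescent components suppressed).
    wavelength, pitch, and z are in metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # (kz / 2pi)^2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)         # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative parameters: a Gaussian amplitude patch on an 8 um grid.
n, pitch, wl = 128, 8e-6, 633e-9
y, x = np.mgrid[:n, :n] - n / 2
field = np.exp(-(x**2 + y**2) / (2 * 12.0**2)).astype(complex)
prop = angular_spectrum_propagate(field, wl, pitch, 0.01)
```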

  8. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  9. Optical alignment of high resolution Fourier transform spectrometers

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Ocallaghan, F. G.; Cassie, A. G.

    1980-01-01

    Remote sensing, high resolution FTS instruments often contain three primary optical subsystems: Fore-Optics, Interferometer Optics, and Post, or Detector Optics. We discuss the alignment of a double-pass FTS containing a cat's-eye retro-reflector. Also, the alignment of fore-optics containing confocal paraboloids with a reflecting field stop which relays a field image onto a camera is discussed.

  10. Applications of optical fibers and miniature photonic elements in medical diagnostics

    NASA Astrophysics Data System (ADS)

    Blaszczak, Urszula; Gilewski, Marian; Gryko, Lukasz; Zajac, Andrzej; Kukwa, Andrzej; Kukwa, Wojciech

    2014-05-01

    The construction of endoscopes, known for decades, in particular small devices a few millimetres in diameter, is based on fibre-optic imaging bundles or bundles of fibres in the illumination system (usually with a halogen source). Commercially emerging CCD and CMOS cameras with sensor sizes of less than 5 mm, together with high-power LED solutions, allow modern endoscopes with many innovative properties to be designed and constructed. These constructions offer higher resolution and are also relatively cheaper, especially given the integration of most functions on a single chip. These features of CMOS sensors shorten the cycle of introducing newly developed instruments to the market. The paper includes a description of the concept of an endoscope with a miniature camera built around a CMOS detector manufactured by OmniVision. A set of LEDs located at the operator side works as the illuminating system. A fibre-optic system and the lens of the camera shape the beam illuminating the observed tissue. Furthermore, to broaden the range of applications of the endoscope, the illuminator allows the spectral characteristics of the emitted light to be controlled. The paper presents an analysis of the basic parameters of the optical system of the endoscope. The possibility of adjusting the magnification of the lens, the field of view of the camera, and its spatial resolution is discussed. Special attention is drawn to the selection of the illumination sources in terms of energy efficiency and the ability to adjust the colour of the emitted light in order to improve the quality of the image obtained by the camera.

  11. Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.

    PubMed

    Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M

    2016-07-26

    Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. 
By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has potential to enhance new studies in coupled cardiac electromechanics.
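The excitation-ratiometry idea can be illustrated with a toy signal model in which motion enters as a gain common to both excitation wavelengths while the Vm sensitivity differs between them; taking the frame-to-frame ratio then cancels the common gain. All numbers below are hypothetical:

```python
import numpy as np

# Toy model: fluorescence F = S(t) * (1 + a * Vm), where the motion-artifact
# gain S(t) is common to both excitation wavelengths but the Vm sensitivity
# a differs (a_cyan != a_blue). Sensitivities and noise level are made up.
rng = np.random.default_rng(0)
vm = np.sin(np.linspace(0, 4 * np.pi, 200))       # idealized membrane potential
motion = 1.0 + 0.3 * rng.standard_normal(200)     # common motion artifact gain
f_cyan = motion * (1 + 0.10 * vm)                 # cyan-excited fluorescence
f_blue = motion * (1 + 0.02 * vm)                 # blue-excited fluorescence
ratio = f_cyan / f_blue                           # motion gain cancels exactly
```

Because the motion gain multiplies both frames identically, the ratio depends only on Vm, which is the essence of excitation ratiometry.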

  12. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speeds of interest range from 100 to 1000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, real-time systems are introduced. Three example applications of this type of sensing technology are also introduced: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  13. Study of optical techniques for the Ames unitary wind tunnel. Part 5: Infrared imagery

    NASA Technical Reports Server (NTRS)

    Lee, George

    1992-01-01

    A survey of infrared thermography for aerodynamics was made. Particular attention was paid to boundary layer transition detection. IR thermography flow visualization of 2-D and 3-D separation was surveyed. Heat transfer measurements and surface temperature measurements were also covered. Comparisons of several commercial IR cameras were made. The use of a recently purchased IR camera in the Ames Unitary Plan Wind Tunnels was studied. Optical access for these facilities and the methods to scan typical models was investigated.

  14. Picosecond x-ray streak cameras

    NASA Astrophysics Data System (ADS)

    Averin, V. I.; Bryukhnevich, Gennadii I.; Kolesov, G. V.; Lebedev, Vitaly B.; Miller, V. A.; Saulevich, S. V.; Shulika, A. N.

    1991-04-01

    The first multistage image converter with an X-ray photocathode (UMI-93SR) was designed at VNIIOFI in 1974 [1]. Experiments carried out at IOFAN showed that X-ray electron-optical cameras using this tube provided temporal resolution as fine as 12 picoseconds [2]. Later work led to the creation of separate streak and intensifying tubes: the PV-003R tube, built on the basis of the UMI-93SR design, is fibre-optically coupled to a PMU-2V image intensifier carrying a microchannel plate.

  15. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  16. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    PubMed

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  18. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
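As a minimal stand-in for the paper's flow-field alignment, the sketch below estimates a pure integer translation between two images by phase correlation; the authors' actual method handles dense, non-rigid optical flow, so this is only an assumption-laden illustration of Fourier-based alignment:

```python
import numpy as np

def shift_between(a, b):
    """Estimate the integer (dy, dx) such that b == np.roll(a, (dy, dx)),
    using phase correlation (whitened cross-power spectrum)."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = a.shape
    # Map wrapped peak locations to signed shifts.
    return (dy - n if dy > n // 2 else dy, dx - m if dx > m // 2 else dx)

# Example: recover a known circular shift of a random test image.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, 5), axis=(0, 1))
```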

  19. Relativistic Astronomy

    NASA Astrophysics Data System (ADS)

    Zhang, Bing; Li, Kunyang

    2018-02-01

    The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
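The spectrograph, lens, and wide-field effects follow from the standard relativistic Doppler and aberration formulas; the sketch below uses one common sign convention (theta = 0 toward the direction of motion, beta in units of c), which is an assumption of this illustration rather than the paper's notation:

```python
import math

def doppler_factor(theta, beta):
    """Doppler factor for light arriving at angle theta in the frame of a
    camera moving with speed beta (in units of c); > 1 means blueshift."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

def aberrated_angle(theta, beta):
    """Apparent arrival angle in the camera frame (relativistic aberration);
    light is concentrated toward the direction of motion."""
    return math.acos((math.cos(theta) + beta) / (1.0 + beta * math.cos(theta)))
```

At modest beta = 0.2, light from straight ahead is already blueshifted by a factor of about 1.22, which is the sense in which a transrelativistic camera acts as a spectrograph.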

  20. The Atacama Cosmology Telescope: Instrument

    NASA Astrophysics Data System (ADS)

    Thornton, Robert J.; Atacama Cosmology Telescope Team

    2010-01-01

    The 6-meter Atacama Cosmology Telescope (ACT) is making detailed maps of the Cosmic Microwave Background at Cerro Toco in northern Chile. In this talk, I focus on the design and operation of the telescope and its commissioning instrument, the Millimeter Bolometer Array Camera. The camera contains three independent sets of optics that operate at 148 GHz, 217 GHz, and 277 GHz with arcminute resolution, each of which couples to a 1024-element array of Transition Edge Sensor (TES) bolometers. I will report on the camera performance, including the beam patterns, optical efficiencies, and detector sensitivities. Under development for ACT is a new polarimeter based on feedhorn-coupled TES devices that have improved sensitivity and are planned to operate at 0.1 K.

  1. Measuring the Temperature of the Ithaca College MOT Cloud using a CMOS Camera

    NASA Astrophysics Data System (ADS)

    Smucker, Jonathan; Thompson, Bruce

    2015-03-01

    We present our work on measuring the temperature of rubidium atoms cooled in a magneto-optical trap (MOT). The MOT uses laser trapping and Doppler cooling to trap and cool rubidium atoms into a cloud that is visible to a CMOS camera. The atoms are cooled further by optical molasses after they are released from the trap (by removing the magnetic field). To measure the temperature of the MOT, we take pictures of the expanding cloud with the CMOS camera and calculate the temperature from the free expansion of the cloud. Results from the experiment are presented along with a summary of the method used.
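The free-expansion analysis can be sketched as a linear fit of the squared cloud width against squared expansion time, whose slope gives kB*T/m; the temperature, initial width, and timing values below are hypothetical:

```python
import numpy as np

kB = 1.380649e-23                  # Boltzmann constant, J/K
m = 87 * 1.66053907e-27            # Rb-87 atomic mass, kg

# Ballistic expansion of a thermal cloud: sigma(t)^2 = sigma0^2 + (kB*T/m)*t^2
T_true = 100e-6                    # hypothetical 100 uK cloud
sigma0 = 0.5e-3                    # hypothetical initial cloud radius, m
t = np.linspace(1e-3, 10e-3, 10)   # expansion times between images, s
sigma_sq = sigma0 ** 2 + (kB * T_true / m) * t ** 2  # measured widths (ideal)

# Linear fit of sigma^2 vs t^2: slope = kB*T/m, intercept = sigma0^2.
slope, intercept = np.polyfit(t ** 2, sigma_sq, 1)
T_fit = slope * m / kB
```

In practice the widths come from Gaussian fits to each camera image and carry noise, so the fit recovers the temperature only approximately.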

  2. Optical frequency comb profilometry using a single-pixel camera composed of digital micromirror devices.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2015-01-01

    We demonstrate an optical frequency comb profilometer with a single-pixel camera that measures the position and profile of an object's surface over a range far exceeding the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera performs profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. The axial dynamic range was therefore increased to 0.87×10^5. Experiments and computer simulations showed that the improvement derived from the higher modulation contrast of the digital micromirror devices. The frame rate was also increased to 20 Hz.

  3. Investigation of solar active regions at high resolution by balloon flights of the solar optical universal polarimeter, extended definition phase

    NASA Technical Reports Server (NTRS)

    Tarbell, Theodore D.

    1993-01-01

    Technical studies of the feasibility of balloon flights of the former Spacelab instrument, the Solar Optical Universal Polarimeter, with a modern charge-coupled device (CCD) camera, to study the structure and evolution of solar active regions at high resolution, are reviewed. In particular, different CCD cameras were used at ground-based solar observatories with the SOUP filter, to evaluate their performance and collect high resolution images. High resolution movies of the photosphere and chromosphere were successfully obtained using four different CCD cameras. Some of this data was collected in coordinated observations with the Yohkoh satellite during May-July, 1992, and they are being analyzed scientifically along with simultaneous X-ray observations.

  4. Effect of Clouds on Optical Imaging of the Space Shuttle During the Ascent Phase: A Statistical Analysis Based on a 3D Model

    NASA Technical Reports Server (NTRS)

    Short, David A.; Lane, Robert E., Jr.; Winters, Katherine A.; Madura, John T.

    2004-01-01

    Clouds are highly effective in obscuring optical images of the Space Shuttle taken during its ascent by ground-based and airborne tracking cameras. Because the imagery is used for quick-look and post-flight engineering analysis, the Columbia Accident Investigation Board (CAIB) recommended the return-to-flight effort include an upgrade of the imaging system to enable it to obtain at least three useful views of the Shuttle from lift-off to at least solid rocket booster (SRB) separation (NASA 2003). The lifetimes of individual cloud elements capable of obscuring optical views of the Shuttle are typically 20 minutes or less. Therefore, accurately observing and forecasting cloud obscuration over an extended network of cameras poses an unprecedented challenge for the current state of observational and modeling techniques. In addition, even the best numerical simulations based on real observations will never reach "truth." In order to quantify the risk that clouds would obscure optical imagery of the Shuttle, a 3D model to calculate probabilistic risk was developed. The model was used to estimate the ability of a network of optical imaging cameras to obtain at least N simultaneous views of the Shuttle from lift-off to SRB separation in the presence of an idealized, randomized cloud field.

  5. Model of an optical system's influence on sensitivity of microbolometric focal plane array

    NASA Astrophysics Data System (ADS)

    Gogler, Sławomir; Bieszczad, Grzegorz; Zarzycka, Alicja; Szymańska, Magdalena; Sosnowski, Tomasz

    2012-10-01

    Thermal imagers and the infrared array sensors used in them are subject to a calibration procedure, and their voltage sensitivity to incident radiation is evaluated during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not held to such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. The detectors used in a thermal camera are illuminated by infrared radiation transmitted through a specialized optical system, and each optical system influences the irradiation distribution across the sensor array. In this article a model is proposed that describes the irradiation distribution across an array sensor working with the optical system used in the calibration set-up, taking into account the optical and geometrical properties of the array set-up. By means of Monte-Carlo simulation, a large number of rays was traced to the sensor plane, which allowed the irradiation distribution across the image plane to be determined for different aperture-limiting configurations. The simulated results were compared with a proposed analytical expression. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
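A much-reduced analytical stand-in for such a model is the classical cos^4 falloff of irradiance with field angle; the focal length and sensor extent below are hypothetical, and the article's full Monte-Carlo model additionally accounts for the aperture geometry:

```python
import numpy as np

# Relative irradiance across the sensor from the classical cos^4 law
# ("natural vignetting"): E(theta) = E0 * cos(theta)^4.
f = 25.0                                 # focal length, mm (hypothetical)
x = np.linspace(-8.0, 8.0, 9)            # positions across the sensor, mm
theta = np.arctan(x / f)                 # field angle at each position
rel_irradiance = np.cos(theta) ** 4      # normalized to 1.0 on axis
```

Even this simplified term predicts roughly an 18% irradiance drop at the edge of an 8 mm half-width sensor behind a 25 mm lens, illustrating why per-pixel non-uniformity correction is needed.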

  6. Development and Optical Testing of the Camera, Hand Lens, and Microscope Probe with Scannable Laser Spectroscopy (CHAMP-SLS)

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S.; Gursel, Yekta; Sepulveda, Cesar A.; Anderson, Mark; La Baw, Clayton; Johnson, Kenneth R.; Deans, Matthew; Beegle, Luther; Boynton, John

    2008-01-01

    Conducting high-resolution field microscopy with coupled laser spectroscopy that can selectively analyze the surface chemistry of individual pixels in a scene is an enabling capability for next-generation robotic and manned spaceflight missions as well as civil and military applications. In the laboratory, we use a range of imaging and surface preparation tools that provide in-focus images, context imaging for identifying features to investigate at high magnification, and surface-optical coupling that allows optical spectroscopic techniques to be applied to surface chemistry, particularly at high magnifications. The camera, hand lens, and microscope probe with scannable laser spectroscopy (CHAMP-SLS) is an imaging/spectroscopy instrument capable of imaging continuously from infinity down to high-resolution microscopy (approx. 1 micron/pixel in the final camera format); the closer CHAMP-SLS is placed to a feature, the higher the resultant magnification. At hand-lens to microscopic magnifications, the imaged scene can be selectively interrogated with point spectroscopic techniques such as Raman spectroscopy, microscopic laser-induced breakdown spectroscopy (micro-LIBS), laser ablation mass spectrometry, fluorescence spectroscopy, and/or reflectance spectroscopy. This paper summarizes the optical design, development, and testing of the CHAMP-SLS optics.

  7. Electro-optic holography method for determination of surface shape and deformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    1998-06-01

    Current demanding engineering analysis and design applications require effective experimental methodologies for the characterization of surface shape and deformation. Such characterization is of primary importance in many applications because these quantities are related to the functionality, performance, and integrity of the objects of interest, especially in view of advances in concurrent engineering. In this paper, a new approach to the characterization of surface shape and deformation using a simple optical setup is described. The approach consists of a fiber-optic electro-optic holography (EOH) system based on an IR, temperature-tuned laser diode, a single-mode fiber-optic directional coupler assembly, and a video processing computer. The EOH system can be arranged in multiple configurations, including the three-camera, three-illumination, and speckle-correlation modes. In particular, the three-camera mode is described, along with the procedures for obtaining quantitative 3D shape and deformation information. A representative application of the three-camera EOH system demonstrates the viability of the approach as an effective engineering tool. A particular feature of this system and procedure is that the 3D quantitative data are written to data files which can be readily interfaced to commercial CAD/CAM environments.

  8. The application of high-speed photography in z-pinch high-temperature plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei

    2007-01-01

    This invited paper discusses applications of high-speed photography to z-pinch high-temperature plasma diagnostics developed in recent years at the Northwest Institute of Nuclear Technology. The developments and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera, and an ultraviolet-visible spectrometer are introduced.

  9. Backward-gazing method for heliostats shape errors measurement and calibration

    NASA Astrophysics Data System (ADS)

    Coquand, Mathieu; Caliot, Cyril; Hénault, François

    2017-06-01

    The pointing and canting accuracies and the surface shape of the heliostats have a great influence on the efficiency of a solar tower power plant. At the industrial scale, one of the issues to solve is the time and effort devoted to adjusting the different mirrors of the faceted heliostats, which could take several months with current methods. Accurate control of heliostat tracking requires complicated and onerous devices. Thus, methods to quickly adjust the whole field of a plant are essential for the rise of solar tower technology with huge numbers of heliostats. Wavefront detection is widely used in adaptive optics and shape-error reconstruction, and such systems can serve as inspiration for measuring solar facet misalignment and tracking errors. We propose a new method of heliostat characterization inspired by adaptive optics devices. The method observes the brightness distributions on the heliostat's surface from different points of view close to the receiver of the power plant, in order to calculate the wavefront of the sun's reflection off the concentrating surface and thereby determine its errors. The originality of this new method is to use the profile of the sun to determine the defects of the mirrors. In addition, the method would be easy to set up and could be implemented without sophisticated apparatus: only four cameras would be used to perform the acquisitions.

  10. Adaptive optics imaging of geographic atrophy.

    PubMed

    Gocho, Kiyoko; Sarda, Valérie; Falah, Sabrina; Sahel, José-Alain; Sennlaub, Florian; Benchaboune, Mustapha; Ullern, Martine; Paques, Michel

    2013-05-01

    To report the findings of en face adaptive optics (AO) near-infrared (NIR) reflectance fundus flood imaging in eyes with geographic atrophy (GA). An observational clinical study of AO NIR fundus imaging was performed in 12 eyes of nine patients with GA, and in seven controls, using a flood illumination camera operating at 840 nm, in addition to routine clinical examination. To document short-term and midterm changes, AO imaging sessions were repeated in four patients (mean interval between sessions 21 days; median follow-up 6 months). As compared with scanning laser ophthalmoscope imaging, AO NIR imaging improved the resolution of the changes affecting the retinal pigment epithelium (RPE). Multiple hyporeflective clumps were seen within and around GA areas. Time-lapse imaging revealed micrometric-scale details of the emergence and progression of areas of atrophy as well as the complex kinetics of some hyporeflective clumps. Such dynamic changes were observed within as well as outside atrophic areas. In eyes affected by GA, AO NIR imaging allows high-resolution documentation of the extent of RPE damage. It also revealed that a complex, dynamic process of redistribution of hyporeflective clumps throughout the posterior pole precedes and accompanies the emergence and progression of atrophy; these clumps are therefore probably also a biomarker of RPE damage. AO NIR imaging may thus be of interest for detecting the earliest stages, documenting the retinal pathology, and monitoring the progression of GA. (ClinicalTrials.gov number, NCT01546181.)

  11. Periodicity analysis on cat-eye reflected beam profiles of optical detectors

    NASA Astrophysics Data System (ADS)

    Gong, Mali; He, Sifeng

    2017-05-01

    The cat-eye-effect reflected beam profiles of most optical detectors show a characteristic periodicity caused by the array arrangement of sensors at their optical focal planes. We find and prove, for the first time, that the reflected beam profile breaks into several periodic spots at a reflected propagation distance corresponding to half the imaging distance of a CCD camera. Furthermore, the spatial period of these spots is approximately constant and independent of the CCD camera's imaging distance; it is related only to the focal length and pixel size of the CCD sensor. Thus, we can obtain the imaging distance and intrinsic parameters of the optical detector by analyzing its cat-eye reflected beam profiles. This conclusion can be applied in the field of non-cooperative cat-eye target recognition.

  12. Underwater Calibration of Dome Port Pressure Housings

    NASA Astrophysics Data System (ADS)

    Nocerino, E.; Menna, F.; Fassi, F.; Remondino, F.

    2016-03-01

    Underwater photogrammetry using consumer-grade photographic equipment can be feasible for different applications, e.g. archaeology, biology, and industrial inspections. Using a camera underwater can be very different from terrestrial use due to the optical phenomena involved: the water and the camera pressure housing in front of the lens act as additional optical elements. Spherical dome ports are difficult to manufacture and consequently expensive, but at the same time they are the most useful for underwater photogrammetry, as they keep the main geometric characteristics of the lens unchanged. Nevertheless, the manufacturing and alignment of dome-port pressure housing components can be a source of unexpected changes in radial and decentring distortion, systematic errors that can influence the final 3D measurements. The paper provides a brief introduction to the underwater optical phenomena involved in underwater photography, then presents the main differences between flat and dome ports, and finally discusses the effect of manufacturing on 3D measurements in two case studies.

  13. Air-borne shape measurement of parabolic trough collector fields

    NASA Astrophysics Data System (ADS)

    Prahl, Christoph; Röger, Marc; Hilgert, Christoph

    2017-06-01

    The optical and thermal efficiency of parabolic trough collector solar fields depends on the performance and assembly accuracy of components such as the concentrator and absorber. For optical inspection/approval, yield analysis, localization of low-performing areas, and optimization of the solar field, it is essential to create a complete view of the optical properties of the field. Existing optical measurement tools are based on ground-based cameras and face restrictions concerning speed, volume, and automation. QFly is an airborne qualification system which provides holistic and accurate information on the geometrical, optical, and thermal properties of the entire solar field. It consists of an unmanned aerial vehicle, cameras, and related software for flight path planning, data acquisition, and evaluation. This article presents recent advances in the QFly measurement system and proposes a methodology for holistic qualification of the complete solar field with minimum impact on plant operation.

  14. Electro-optic control of photographic imaging quality through ‘Smart Glass’ windows in optics demonstrations

    NASA Astrophysics Data System (ADS)

    Ozolinsh, Maris; Paulins, Paulis

    2017-09-01

    An experimental setup allowing the modeling of conditions in optical devices and in the eye at various degrees of scattering such as cataract pathology in human eyes is presented. The scattering in cells of polymer-dispersed liquid crystals (PDLCs) and ‘Smart Glass’ windows is used in the modeling experiments. Both applications are used as optical obstacles placed in different positions of the optical information flow pathway either directly on the stimuli demonstration computer screen or mounted directly after the image-formation lens of a digital camera. The degree of scattering is changed continuously by applying an AC voltage of up to 30-80 V to the PDLC cell. The setup uses a camera with 14 bit depth and a 24 mm focal length lens. Light-emitting diodes and diode-pumped solid-state lasers emitting radiation of different wavelengths are used as portable small-divergence light sources in the experiments. Image formation, optical system point spread function, modulation transfer functions, and system resolution limits are determined for such sample optical systems in student optics and optometry experimental exercises.
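One of the student exercises, deriving the modulation transfer function from a measured point spread function, can be sketched as the normalized Fourier magnitude of the PSF; here a Gaussian blur stands in for the PDLC scattering, and all widths are hypothetical:

```python
import numpy as np

# Sketch: 1-D MTF as the normalized magnitude of the Fourier transform of
# the PSF. A Gaussian PSF stands in for the voltage-dependent PDLC blur.
x = np.linspace(-1.0, 1.0, 512)          # spatial coordinate (arbitrary units)
sigma = 0.05                             # hypothetical PSF width
psf = np.exp(-x ** 2 / (2 * sigma ** 2))
mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]                            # normalize to unity at zero frequency
```

Increasing the scattering (a wider PSF) narrows this MTF, which is exactly the degradation the Smart Glass voltage controls in the demonstrations.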

  15. Advanced adaptive optics technology development

    NASA Astrophysics Data System (ADS)

    Olivier, Scot S.

    2002-02-01

    The NSF Center for Adaptive Optics (CfAO) is supporting research on advanced adaptive optics technologies. CfAO research activities include development and characterization of micro-electro-mechanical systems (MEMS) deformable mirror (DM) technology, as well as development and characterization of high-resolution adaptive optics systems using liquid crystal (LC) spatial light modulator (SLM) technology. This paper presents an overview of the CfAO advanced adaptive optics technology development activities including current status and future plans.

  16. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians outdoors is challenging due to varying lighting, shadows, and occlusions. Recently, a growing number of studies have applied convolutional neural networks (CNNs) to visible light camera-based pedestrian detection to make it more resilient to such conditions. However, visible light cameras still cannot detect pedestrians at night, and are easily affected by shadows and lighting. Many studies have therefore examined CNN-based pedestrian detection with far-infrared (FIR) light cameras (i.e., thermal cameras). However, when solar radiation increases and the background temperature reaches the same level as body temperature, the FIR light camera also struggles to detect pedestrians, because the difference between pedestrian and non-pedestrian features in the images becomes insignificant. Researchers have tried to solve this by feeding both the visible light and FIR camera images into the CNN, but this increases processing time and makes the system structure more complex, as the CNN must process both camera images. This study instead adaptively selects the more appropriate of the two pedestrian candidate images, from the visible light and FIR cameras, based on a fuzzy inference system (FIS); the selected candidate is then verified by a CNN. Three types of databases covering various environmental factors were tested with visible light and FIR cameras. The results show that the proposed method performs better than previously reported methods.
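
    The selection step can be illustrated with a toy fuzzy inference system. The inputs (scene illuminance and pedestrian-to-background thermal contrast), membership functions, breakpoints, and rules below are illustrative assumptions, not the ones used in the paper:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select_camera(lux, thermal_contrast_c):
    """Pick the candidate source: 'visible' or 'fir' (toy rules).

    Rule 1: IF scene is bright THEN prefer the visible light camera.
    Rule 2: IF scene is dark AND thermal contrast is high THEN prefer FIR.
    Fuzzy AND is min; the larger aggregate score wins."""
    dark    = tri(lux, -1.0, 0.0, 60.0)
    bright  = min(1.0, max(0.0, (lux - 20.0) / 180.0))
    high_dt = min(1.0, max(0.0, (thermal_contrast_c - 2.0) / 6.0))
    visible_score = bright
    fir_score = min(dark, high_dt)
    return 'fir' if fir_score > visible_score else 'visible'
```

    A real FIS, as in the paper, would compute the scores from image features and pass only the winning candidate region to the CNN for verification.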

  17. Using a plenoptic camera to measure distortions in wavefronts affected by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Eslami, Mohammed; Wu, Chensheng; Rzasa, John; Davis, Christopher C.

    2012-10-01

    Ideally, as planar wavefronts travel through an imaging system, all rays (vectors pointing in the direction of energy propagation) are parallel, and the wavefront is focused to a single point. If the wavefront arrives at an imaging system with energy vectors pointing in different directions, each part of the wavefront is focused at a slightly different point on the sensor plane, producing a distorted image. The Hartmann test, which inserts an array of pinholes between the imaging system and the sensor plane, was developed to sample the wavefront at different locations and measure the local distortion angles. An adaptive optics element, such as a deformable mirror, is then used to correct these distortions so that the planar wavefront focuses at the desired point on the sensor plane, correcting the distorted image. Because the apertures of a pinhole array limit the amount of light reaching the sensor plane, replacing the pinholes with a microlens array focuses each bundle of rays and brightens the image. Microlens arrays are making their way into newer imaging technologies such as "light field" or "plenoptic" cameras, in which the microlens array recovers the ray information of the incoming light and post-processing techniques refocus on objects at different depths. The goal of this paper is to demonstrate the use of these plenoptic cameras to recover wavefront distortions. CODE V simulations show that, by taking advantage of the microlens array within the plenoptic camera, the approach can provide more information than a Shack-Hartmann sensor. Using the microlens array to retrieve the ray information and then backstepping through the imaging system yields information about distortions in the arriving wavefront.
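
    The slope-recovery step shared by a Shack-Hartmann sensor and a plenoptic camera reduces to measuring the spot-centroid shift behind each lenslet and converting it to a local tilt. A minimal sketch of that step (the subimage layout, pixel pitch, and focal length parameters are illustrative assumptions):

```python
import numpy as np

def local_slopes(subimages, pixel_um, focal_mm):
    """Estimate local wavefront tilts (radians, small-angle) behind each
    microlens: the spot-centroid shift from the subimage centre, converted
    from pixels to microns and divided by the lenslet focal length.

    subimages maps a lenslet index (i, j) to its 2-D intensity array."""
    slopes = {}
    for key, img in subimages.items():
        img = np.asarray(img, dtype=float)
        total = img.sum()
        ys, xs = np.indices(img.shape)
        cy = (ys * img).sum() / total            # intensity-weighted centroid
        cx = (xs * img).sum() / total
        dy_um = (cy - (img.shape[0] - 1) / 2) * pixel_um
        dx_um = (cx - (img.shape[1] - 1) / 2) * pixel_um
        f_um = focal_mm * 1000.0
        slopes[key] = (dx_um / f_um, dy_um / f_um)
    return slopes
```

    An adaptive optics loop would feed these per-lenslet slopes to a wavefront reconstructor and then to the deformable mirror; the plenoptic camera additionally retains the full ray bundle behind each lenslet rather than a single spot.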

  18. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
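
    The least-mean-squares step can be sketched for the small-motion rigid model, in which each background point's 3D inter-frame displacement dX = t + omega x X is linear in the six motion parameters. This is a generic sketch of the technique named in the abstract, not the system's actual flight code:

```python
import numpy as np

def estimate_motion(points, displacements):
    """Least-squares 6-DOF motion (translation t, rotation omega) from
    3-D points and their 3-D inter-frame displacements, using the
    small-motion rigid model dX = t + omega x X."""
    pts = np.asarray(points, dtype=float)
    b = np.asarray(displacements, dtype=float).reshape(-1)
    A = np.zeros((3 * len(pts), 6))
    for i, (x, y, z) in enumerate(pts):
        A[3*i:3*i+3, 0:3] = np.eye(3)            # translation columns
        # omega x X = -[X]_x omega, [X]_x being the cross-product matrix
        A[3*i:3*i+3, 3:6] = -np.array([[0.0, -z,   y],
                                       [z,    0.0, -x],
                                       [-y,   x,   0.0]])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:3], params[3:]                # t, omega
```

    The abstract's observation that the fit works when moving-object pixels are outnumbered by background pixels corresponds to the least-squares solution being dominated by the stationary points; a robust variant would additionally down-weight outlier pixels.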

  19. [Cinematography of ocular fundus with a jointed optical system and tv or cine-camera (author's transl)].

    PubMed

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera to an indirect ophthalmoscope, makes it possible to record the monocular picture of the fundus as produced by the ophthalmic lens.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pradere, P.; Perol, A.

    The requirements for the design of an X-ray image intensifier (XRII) can be quite different depending on the application: medical or industrial, low or high energy. A specific need for industrial applications is to reduce image burn-in, a permanent marking of the tube related to the inspection of sharp-contrast objects with high X-ray doses. Burn-in is mainly related to the darkening of the output screen, which depends on the electron beam density in the tube. A first way to reduce burn-in is to reduce the tube gain. A more efficient solution, now proposed by Thomson Tubes Electroniques, is to use a non-browning, radiation-hard glass for the tube output window together with a better-adapted screen process that limits the darkening of the output phosphor itself. The new industrial tube will be offered in 9 in. (215 mm useful) or 12 in. (290 mm) formats and can ideally be combined with a new high-resolution (1024 x 1024 pixels), 12-bit, real-time CCD camera. This camera includes a new interline CCD developed to avoid image smear and blooming. Integrated image heads with power supply and folded optics are available. A low-energy, beryllium-windowed 9 in. XRII is already available in an industrial version.
