Sample records for eye imaging system

  1. Design and simulation of a superposition compound eye system based on hybrid diffractive-refractive lenses.

    PubMed

    Zhang, Shuqing; Zhou, Luyang; Xue, Changxi; Wang, Lei

    2017-09-10

    Compound eyes offer a promising field of miniaturized imaging systems. In one application of a compound eye, superposition of compound eye systems forms a composite image by superposing the images produced by different channels. The geometric configuration of superposition compound eye systems is achieved by three micro-lens arrays with different pitches and focal lengths. High resolution is indispensable for the practicability of superposition compound eye systems. In this paper, hybrid diffractive-refractive lenses are introduced into the design of a compound eye system for this purpose. With the help of ZEMAX, two superposition compound eye systems with and without hybrid diffractive-refractive lenses were separately designed. Then, we demonstrate the effectiveness of using a hybrid diffractive-refractive lens to improve the image quality.

  2. Comparison of eye imaging pattern recognition using neural network

    NASA Astrophysics Data System (ADS)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The appeal of an eye recognition system is that it can automatically identify and verify a person from digital images or a video source. The eye offers several distinguishing features, such as iris color, pupil size, and eye shape. This study presents the analysis, design, and implementation of an eye image recognition system. All eye images captured from a webcam in RGB format must pass through several preprocessing steps before they can serve as input to the pattern recognition process. After training on 6 eye images per subject, the final weight and bias values are stored by the neural network and serve as the reference weights and biases for the testing stage. The target is classified into 5 types, one for each of the 5 subjects. A new eye image is recognized as belonging to a subject according to the target set during training: when the values computed for the new eye image and an eye image in the database are nearly equal, the images are considered a match.
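
    As an illustration of the kind of pipeline the abstract describes, the sketch below trains a small feedforward neural network on a few eye images per subject and classifies a new image among 5 subjects. The image size, network size, and synthetic data are assumptions, not details from the paper.

    ```python
    # Minimal sketch (not the authors' code): classify eye images among 5 subjects
    # with a small feedforward neural network, mirroring the abstract's scheme of
    # training on 6 images per subject and matching new images against the
    # learned weights. Image size and network size are assumptions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Stand-in for 6 preprocessed (grayscale, flattened 32x32) training images
    # per subject, 5 subjects -> 30 samples of 1024 features each.
    X_train = rng.random((30, 32 * 32))
    y_train = np.repeat(np.arange(5), 6)          # subject labels 0..4

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)                     # weights/biases kept in net.coefs_, net.intercepts_

    # A new eye image is matched by running it through the trained network.
    new_eye = rng.random((1, 32 * 32))
    print("predicted subject:", net.predict(new_eye)[0])
    ```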

  3. A panoramic imaging system based on fish-eye lens

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Hao, Chenyang

    2017-10-01

    Panoramic imaging has attracted close attention as one of the key technologies for AR and VR. Mainstream panoramic imaging techniques include fish-eye lenses, image stitching, and catadioptric imaging systems. Fish-eye lenses are also widely used in wide-area video surveillance. Their advantages are ease of operation and low cost, but correcting the image distortion of fish-eye lenses has always been an important problem. In this paper, the image calibration algorithm for fish-eye lenses is studied by comparing interpolation methods, including bilinear and bicubic interpolation, which are used to optimize the corrected images.
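
    A minimal sketch of the kind of fish-eye correction the paper compares interpolation methods for: pixels are remapped under an assumed equidistant-projection model with OpenCV, switching between bilinear and bicubic interpolation. The model, the focal parameter f, and the synthetic fallback image are illustrative assumptions.

    ```python
    # Minimal sketch (assumed, not the paper's implementation): correct fish-eye
    # radial distortion by remapping pixels, comparing bilinear and bicubic
    # interpolation flags.
    import numpy as np
    import cv2

    def undistort_fisheye(img, f=300.0, interpolation=cv2.INTER_LINEAR):
        h, w = img.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        # Target (undistorted) pixel grid.
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        x, y = xs - cx, ys - cy
        r = np.hypot(x, y) + 1e-9                 # radius in the corrected image
        theta = np.arctan(r / f)                  # viewing angle
        r_fish = f * theta                        # radius under an equidistant fish-eye model
        map_x = (x * r_fish / r + cx).astype(np.float32)
        map_y = (y * r_fish / r + cy).astype(np.float32)
        return cv2.remap(img, map_x, map_y, interpolation=interpolation)

    img = cv2.imread("fisheye.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input frame
    if img is None:                                              # fall back to a synthetic ramp image
        img = np.tile((np.arange(640) % 256).astype(np.uint8), (480, 1))

    corrected_bilinear = undistort_fisheye(img, interpolation=cv2.INTER_LINEAR)
    corrected_bicubic = undistort_fisheye(img, interpolation=cv2.INTER_CUBIC)
    ```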

  4. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We develop an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for a record matching the nystagmus response for the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained by using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. In order to enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences for patients with BPPV to show its validity.

  5. A novel lobster-eye imaging system based on Schmidt-type objective for X-ray-backscattering inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jie; Wang, Xin, E-mail: wangx@tongji.edu.cn, E-mail: mubz@tongji.edu.cn; Zhan, Qi

    This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.

  6. A novel lobster-eye imaging system based on Schmidt-type objective for X-ray-backscattering inspection

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Wang, Xin; Zhan, Qi; Huang, Shengling; Chen, Yifan; Mu, Baozhong

    2016-07-01

    This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.

  7. [The eye, the optic system and its anomalies].

    PubMed

    Cohen, S Y

    1993-09-15

    The eye is a perceptive system with extremely complex physiology, although its optical properties can be approximated by those of spherical refracting surfaces. Various approximations make it possible to reduce the eyeball to a single convex refracting surface. In a normal eye, the image of an object located at infinity focuses on the retina; such an eye is called emmetropic. Otherwise, the eye is called ametropic, and several types of ametropia exist. When the image focuses in front of the retina, the eye is said to be myopic. When the image focuses behind the retina, the eye is called hypermetropic (or hyperopic). When the image of an object differs according to the focusing axis, the eye is said to be astigmatic.

  8. Micro-optical artificial compound eyes.

    PubMed

    Duparré, J W; Wippermann, F C

    2006-03-01

    Natural compound eyes combine small eye volumes with a large field of view at the cost of comparatively low spatial resolution. For small invertebrates such as flies or moths, compound eyes are the perfectly adapted solution to obtaining sufficient visual information about their environment without overloading their brains with the necessary image processing. However, to date little effort has been made to adopt this principle in optics. Classical imaging has always had its archetype in natural single-aperture eyes, such as those on which human vision is based. But a high-resolution image is not always required; often the focus is on very compact, robust and cheap vision systems. The main question is consequently: what is the better approach for extremely miniaturized imaging systems: just scaling of classical lens designs, or being inspired by alternative imaging principles evolved by nature in the case of small insects? In this paper, it is shown that such optical systems can be achieved using state-of-the-art micro-optics technology. This enables the generation of highly precise and uniform microlens arrays and their accurate alignment with the subsequent optics, spacer and optoelectronics structures. The results are thin, simple and monolithic imaging devices fabricated with the high accuracy of photolithography. Two different artificial compound eye concepts for compact vision systems have been investigated in detail: the artificial apposition compound eye and the cluster eye. Novel optical design methods and characterization tools were developed to allow the layout and experimental testing of the planar micro-optical imaging systems, which were fabricated for the first time by micro-optics technology. The artificial apposition compound eye can be considered a simple imaging optical sensor, while the cluster eye is capable of becoming a valid alternative to classical bulk objectives but is much more complex than the first system.

  9. A novel smartphone ophthalmic imaging adapter: User feasibility studies in Hyderabad, India

    PubMed Central

    Ludwig, Cassie A; Murthy, Somasheila I; Pappuru, Rajeev R; Jais, Alexandre; Myung, David J; Chang, Robert T

    2016-01-01

    Aim of Study: To evaluate the ability of ancillary health staff to use a novel smartphone imaging adapter system (EyeGo, now known as Paxos Scope) to capture images of sufficient quality to exclude emergent eye findings. Secondary aims were to assess user and patient experiences during image acquisition, interuser reproducibility, and subjective image quality. Materials and Methods: The system captures images using a macro lens and an indirect ophthalmoscopy lens coupled with an iPhone 5S. We conducted a prospective cohort study of 229 consecutive patients presenting to L. V. Prasad Eye Institute, Hyderabad, India. Primary outcome measure was mean photographic quality (FOTO-ED study 1–5 scale, 5 best). 210 patients and eight users completed surveys assessing comfort and ease of use. For 46 patients, two users imaged the same patient's eyes sequentially. For 182 patients, photos taken with the EyeGo system were compared to images taken by existing clinic cameras: a BX 900 slit-lamp with a Canon EOS 40D Digital Camera and an FF 450 plus Fundus Camera with VISUPAC™ Digital Imaging System. Images were graded post hoc by a reviewer blinded to diagnosis. Results: Nine users acquired 719 useable images and 253 videos of 229 patients. Mean image quality was ≥ 4.0/5.0 (able to exclude subtle findings) for all users. 8/8 users and 189/210 patients surveyed were comfortable with the EyeGo device on a 5-point Likert scale. For 21 patients imaged with the anterior adapter by two users, a weighted κ of 0.597 (95% confidence interval: 0.389–0.806) indicated moderate reproducibility. High level of agreement between EyeGo and existing clinic cameras (92.6% anterior, 84.4% posterior) was found. Conclusion: The novel, ophthalmic imaging system is easily learned by ancillary eye care providers, well tolerated by patients, and captures high-quality images of eye findings. PMID:27146928

  10. Truly simultaneous SS-OCT of the anterior and posterior human eye with full anterior chamber and 50° retinal field of views (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McNabb, Ryan P.; Viehland, Christian; Keller, Brenton; Vann, Robin R.; Izatt, Joseph A.; Kuo, Anthony N.

    2017-02-01

    Optical coherence tomography (OCT) has revolutionized clinical observation of the eye and is an indispensable part of modern ophthalmic practice. Unlike many other ophthalmic imaging techniques, OCT provides three-dimensional information about the imaged eye. However, conventional clinical OCT systems image only the anterior or the posterior eye during a single acquisition. Newer OCT systems have begun to image both during the same acquisition but with compromises such as limited field of view in the posterior eye or requiring rapid switching between the anterior and posterior eye during the scan. We describe here the development and demonstration of an OCT system with truly simultaneous imaging of both the anterior and posterior eye, capable of imaging the full anterior chamber width and 50° on the retina (macula, optic nerve, and arcades). The whole eye OCT system was developed using custom optics and optomechanics. Polarization was utilized to separate the imaging channels. We utilized a 200 kHz swept-source laser (Axsun Technologies) centered at 1040 nm with ±50 nm of bandwidth. The clock signal generated by the laser was interpolated 4x to generate 5504 samples per laser sweep. With the whole eye OCT system, we simultaneously acquired anterior and posterior segments with repeated B-scans as well as three-dimensional volumes from seven volunteers who were healthy apart from refractive error. On three of these volunteers, whole eye OCT and partial coherence interferometry (LenStar PCI, Haag-Streit) were used to measure axial eye length. We measured a mean repeatability of ±47 µm with whole eye OCT and a mean difference from PCI of -68 µm.

  11. Automated eye blink detection and correction method for clinical MR eye imaging.

    PubMed

    Wezel, Joep; Garpebring, Anders; Webb, Andrew G; van Osch, Matthias J P; Beenakker, Jan-Willem M

    2017-07-01

    To implement an on-line monitoring system to detect eye blinks during ocular MRI using field probes, and to reacquire corrupted k-space lines by means of an automatic feedback system integrated with the MR scanner. Six healthy subjects were scanned on a 7 Tesla whole-body MRI system using a custom-built receive coil. Subjects were asked to blink multiple times during the MR scan. The local magnetic field changes were detected with an external fluorine-based field probe positioned close to the eye. When an eye blink produced a field shift greater than a threshold level, this was communicated in real time to the MR system, which immediately reacquired the motion-corrupted k-space lines. The uncorrected images, using the original motion-corrupted data, showed severe artifacts, whereas the corrected images, using the reacquired data, provided an image quality similar to images acquired without blinks. Field probes can successfully detect eye blinks during MRI scans. By automatically reacquiring the eye blink-corrupted data, high quality MR images of the eye can be acquired. Magn Reson Med 78:165-171, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
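
    A minimal sketch of the reacquisition logic described above, under assumed units, thresholds, and array shapes: field-probe samples exceeding a threshold are flagged as blink intervals, and the k-space lines acquired during those intervals are returned for reacquisition.

    ```python
    # Minimal sketch (assumptions throughout): threshold a field-probe trace to
    # detect blinks and mark the k-space lines acquired during a blink.
    import numpy as np

    def lines_to_reacquire(field_shift_hz, line_index_per_sample, threshold_hz=5.0):
        """Return the k-space line indices corrupted by a blink."""
        blink_samples = np.abs(field_shift_hz) > threshold_hz
        return sorted(set(line_index_per_sample[blink_samples].tolist()))

    # Synthetic example: 1000 field-probe samples covering 100 phase-encode lines,
    # with a blink-like field excursion around samples 400-430.
    rng = np.random.default_rng(1)
    field = rng.normal(0.0, 0.5, 1000)
    field[400:430] += 20.0
    line_idx = np.repeat(np.arange(100), 10)

    print("reacquire k-space lines:", lines_to_reacquire(field, line_idx))   # -> lines 40..42
    ```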

  12. Biomedical sensing and imaging for the anterior segment of the eye

    NASA Astrophysics Data System (ADS)

    Eom, Tae Joong; Yoo, Young-Sik; Lee, Yong-Eun; Kim, Beop-Min; Joo, Choun-Ki

    2015-07-01

    The eye is an optical system composed principally of the cornea, lens, and retina. Ophthalmologists can diagnose the status of a patient's eye from information provided by optical sensors or images, as well as from history taking and physical examinations. Recently, we developed a prototype optical coherence tomography (OCT) image-guided femtosecond (fs) laser cataract surgery system. The system combines a swept-source OCT and a femtosecond laser and provides 2D and 3D structural information to increase the efficiency and safety of the cataract procedure. The OCT imaging range was extended to obtain a 3D image from the cornea to the posterior lens surface. With the prototype OCT image-guided fs laser cataract surgery system, surgeons can plan the laser illumination range for nuclear division and segmentation, and monitor the whole cataract surgery procedure using real-time OCT. The surgery system was demonstrated on an extracted pig eye and an in vivo rabbit eye to verify system performance and stability.

  13. Development of infrared thermal imager for dry eye diagnosis

    NASA Astrophysics Data System (ADS)

    Chiang, Huihua Kenny; Chen, Chih Yen; Cheng, Hung You; Chen, Ko-Hua; Chang, David O.

    2006-08-01

    This study aims at the development of non-contact dry eye diagnosis based on an infrared thermal imager system, which was used to measure the cooling of the ocular surface temperature of normal subjects and dry eye patients. A total of 108 subjects were measured, including 26 normal subjects and 82 dry eye patients. We observed that the dry eye patients showed faster cooling of the ocular surface temperature than the normal control group. We developed a simplified algorithm for calculating the temperature decay constant of the ocular surface to discriminate between normal and dry eyes. Diagnosis of dry eye syndrome with the infrared thermal imager system reached a sensitivity of 79.3%, a specificity of 75%, and an area under the ROC curve of 0.841. The infrared thermal imager system has great potential to be developed for dry eye screening, with the advantages of being non-contact, fast, and convenient to implement.
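
    As a sketch of how a temperature decay constant might be extracted (the exact algorithm is not given in the abstract), the example below fits an assumed exponential cooling model to a synthetic ocular-surface temperature trace over the 6-second eye-open period.

    ```python
    # Minimal sketch (assumed model form and synthetic values, not the paper's
    # data): fit T(t) = T_inf + dT * exp(-t / tau) to the ocular-surface
    # temperature after eye opening and report the decay constant tau.
    import numpy as np
    from scipy.optimize import curve_fit

    def cooling_model(t, t_inf, d_t, tau):
        return t_inf + d_t * np.exp(-t / tau)

    t = np.linspace(0.0, 6.0, 31)                       # seconds after eye opening
    temp = 33.8 + 0.8 * np.exp(-t / 2.5) \
           + np.random.default_rng(2).normal(0, 0.02, t.size)

    (t_inf, d_t, tau), _ = curve_fit(cooling_model, t, temp, p0=(34.0, 1.0, 2.0))
    print(f"decay constant tau = {tau:.2f} s")          # faster cooling -> smaller tau
    ```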

  14. Naked-eye 3D imaging employing a modified MIMO micro-ring conjugate mirrors

    NASA Astrophysics Data System (ADS)

    Youplao, P.; Pornsuwancharoen, N.; Amiri, I. S.; Thieu, V. N.; Yupapin, P.

    2018-03-01

    In this work, the use of a micro-conjugate mirror that can produce a 3D image for incident probing and display is proposed. By using the proposed system together with the concept of naked-eye 3D imaging, a pixel and a large-volume pixel of a 3D image can be created and displayed for naked-eye perception, which is valuable for large-volume naked-eye 3D imaging applications. In operation, a naked-eye 3D image with a large pixel volume is constructed using the MIMO micro-ring conjugate mirror system. These 3D images, formed by the first micro-ring conjugate mirror system, can then be transmitted over a short optical link and reconstructed via the recovery conjugate mirror at the other end of the transmission. The image transmission is performed with a Fourier integral in MATLAB and compared with the Opti-wave program results. Fourier convolution is also included for large-volume image transmission. In the simulation, an array of micro-conjugate mirrors is designed and simulated for the MIMO system. The naked-eye 3D imaging is confirmed by the conjugate-mirror behavior of both the input and output images in terms of four-wave mixing (FWM), which is discussed and interpreted.

  15. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol.

    PubMed

    Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng

    2013-01-01

    To address interoperability issues between different fundus imaging systems, we proposed a web eye picture archiving and communication system (PACS) framework, in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols, to enable sharing and communication of fundus images and reports over the internet. First, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a three-tier browser/server eye-PACS was established in conformance with the web access to DICOM persistent object (WADO) protocol. From any client system with a web browser, clinicians can log in to the eye-PACS to view fundus images and reports. A structured report of multipurpose internet mail extensions (MIME) type, saved as PDF/HTML with a reference link to the relevant fundus image using the WADO syntax, can provide sufficient information for clinicians. Functions provided by the open-source Oviyam viewer can be used to query, zoom, move, measure, and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol can be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.

  16. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness in the viewing zone. The eye tracking that monitors the positions of the viewer's eyes enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The eye-tracking-combined software provides the correct images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (without eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.

  17. In vivo imaging of the rodent eye with swept source/Fourier domain OCT

    PubMed Central

    Liu, Jonathan J.; Grulkowski, Ireneusz; Kraus, Martin F.; Potsaid, Benjamin; Lu, Chen D.; Baumann, Bernhard; Duker, Jay S.; Hornegger, Joachim; Fujimoto, James G.

    2013-01-01

    Swept source/Fourier domain OCT is demonstrated for in vivo imaging of the rodent eye. Using commercial swept laser technology, we developed a prototype OCT imaging system for small animal ocular imaging operating in the 1050 nm wavelength range at an axial scan rate of 100 kHz with ~6 µm axial resolution. The high imaging speed enables volumetric imaging with high axial scan densities, measuring high flow velocities in vessels, and repeated volumetric imaging over time. The 1050 nm wavelength light provides increased penetration into tissue compared to standard commercial OCT systems at 850 nm. The long imaging range enables multiple operating modes for imaging the retina, posterior eye, as well as anterior eye and full eye length. A registration algorithm using orthogonally scanned OCT volumetric data sets which can correct motion on a per A-scan basis is applied to compensate motion and merge motion corrected volumetric data for enhanced OCT image quality. Ultrahigh speed swept source OCT is a promising technique for imaging the rodent eye, providing comprehensive information on the cornea, anterior segment, lens, vitreous, posterior segment, retina and choroid. PMID:23412778

  18. An adaptive optics imaging system designed for clinical use.

    PubMed

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A

    2015-06-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing.

  19. A Support System for Mouse Operations Using Eye-Gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system uses a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy, but it can only classify vertical eye-gaze into 3 directions (up, middle, and down). In this paper, we propose a new method for vertical eye-gaze detection that uses the limbus tracking method. Our new eye-gaze input system can therefore detect the two-dimensional coordinates of the user's gaze point. Using this method, we develop a new support system for mouse operation that can move the mouse cursor to the user's gaze point.

  20. Tilt and decentration of intraocular lenses in vivo from Purkinje and Scheimpflug imaging. Validation study.

    PubMed

    de Castro, Alberto; Rosales, Patricia; Marcos, Susana

    2007-03-01

    To measure tilt and decentration of intraocular lenses (IOLs) with Scheimpflug and Purkinje imaging systems in physical model eyes with known amounts of tilt and decentration, and in patients. Instituto de Optica Daza de Valdés, Consejo Superior de Investigaciones Científicas, Madrid, Spain. Measurements of IOL tilt and decentration were obtained using a commercial Scheimpflug system (Pentacam, Oculus), custom algorithms, and a custom-built Purkinje imaging apparatus. Twenty-five Scheimpflug images of the anterior segment of the eye were obtained at different meridians. Custom algorithms were used to process the images (correction of geometrical distortion, edge detection, and curve fittings). Intraocular lens tilt and decentration were estimated by fitting sinusoidal functions to the projections of the pupillary axis and IOL axis in each image. The Purkinje imaging system captures pupil images showing reflections of light from the anterior corneal surface and the anterior and posterior lens surfaces. Custom algorithms were used to detect the Purkinje image locations and estimate IOL tilt and decentration based on a linear system equation and computer eye models with individual biometry. Both methods were validated with a physical model eye in which IOL tilt and decentration can be set nominally. Twenty-one eyes of 12 patients with IOLs were measured with both systems. Measurements of the physical model eye showed an absolute discrepancy between nominal and measured values of 0.279 degree (Purkinje) and 0.243 degree (Scheimpflug) for tilt and 0.094 mm (Purkinje) and 0.228 mm (Scheimpflug) for decentration. In patients, the mean tilt was less than 2.6 degrees and the mean decentration less than 0.4 mm. Both techniques showed mirror symmetry between right eyes and left eyes for tilt around the vertical axis and for decentration in the horizontal axis. Both systems showed high reproducibility. Validation experiments on physical model eyes showed slightly higher accuracy with the Purkinje method than the Scheimpflug imaging method. Horizontal measurements of patients with both techniques were highly correlated. The IOLs tended to be tilted and decentered nasally in most patients.
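
    The sinusoidal fitting step can be illustrated as below: assuming the tilt projected onto a Scheimpflug image acquired at meridian angle theta varies as A*cos(theta - phi), fitting that sinusoid over the 25 meridians yields the tilt magnitude and axis. The model form and synthetic values are assumptions, not the authors' code.

    ```python
    # Minimal sketch (assumed model): recover IOL tilt magnitude and axis from
    # per-meridian projections by fitting a sinusoid. Values are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def projected_tilt(theta, amplitude, phase):
        return amplitude * np.cos(theta - phase)

    theta = np.linspace(0.0, np.pi, 25, endpoint=False)       # 25 meridians
    true_a, true_phi = 2.1, np.deg2rad(35.0)                  # degrees of tilt, axis
    measured = projected_tilt(theta, true_a, true_phi) \
               + np.random.default_rng(3).normal(0, 0.05, theta.size)

    (amp, phase), _ = curve_fit(projected_tilt, theta, measured, p0=(1.0, 0.0))
    print(f"IOL tilt ~ {abs(amp):.2f} deg about axis {np.rad2deg(phase) % 180:.1f} deg")
    ```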

  1. Novel shadowless imaging for eyes-like diagnosis in vivo

    NASA Astrophysics Data System (ADS)

    Xue, Ning; Jiang, Kai; Li, Qi; Zhang, Lili; Ma, Li; Huang, Guoliang

    2016-10-01

    Eyes-like diagnosis is a traditional Chinese medicine method used for many diseases, such as chronic gastritis, diabetes, and hypertension. There is a close relationship between the viscera and the appearance of the eye. The white of the eye is divided into fourteen sections corresponding to different viscera, so the eye's appearance reflects the status of the viscera; in other words, it is an epitome of visceral health. In this paper, we developed a novel shadowless imaging technology and system for eyes-like diagnosis in vivo, consisting of an optical shadowless imaging device for capturing and saving images of patients' eyes and a computer linked to the device for image processing. A feature matching algorithm was developed to extract the features of the white of the eye in the corresponding sections of the eye images taken by the optical shadowless imaging device; from these features, it can be determined whether visceral diseases are present. A series of assays was carried out, and the results verified the feasibility of the eyes-like diagnosis technique.

  2. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye affects 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system has been developed for pose estimation combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and of the pose estimation of the eye has been validated in a set of experiments. Overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.

  3. Fixational Eye Movements in the Earliest Stage of Metazoan Evolution

    PubMed Central

    Bielecki, Jan; Høeg, Jens T.; Garm, Anders

    2013-01-01

    All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur. PMID:23776673

  4. Fixational eye movements in the earliest stage of metazoan evolution.

    PubMed

    Bielecki, Jan; Høeg, Jens T; Garm, Anders

    2013-01-01

    All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur.

  5. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
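
    A minimal sketch of the underlying geometry (not the COMPU-EYE algorithm itself): when neighboring ommatidia with overlapping receptive fields see the same point at slightly different angles, simple triangulation from that angular disparity gives the object distance. The baseline and disparity values below are assumptions.

    ```python
    # Minimal sketch (assumption): distance of an object seen by two ommatidia
    # separated by a known baseline, from their measured angular disparity.
    import numpy as np

    def depth_from_angular_disparity(baseline_mm, disparity_rad):
        """Triangulated distance for a small angular disparity between receptive fields."""
        return baseline_mm / np.tan(disparity_rad)

    baseline = 0.5                               # mm between ommatidial centers (assumed)
    disparity = np.deg2rad(0.3)                  # measured angular disparity (assumed)
    print(f"estimated object distance ~ {depth_from_angular_disparity(baseline, disparity):.1f} mm")
    ```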

  6. High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin

    2007-01-01

    The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
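
    A minimal sketch of the centroid-based gaze estimate described above, with assumed thresholds and an assumed linear calibration: the pupil and corneal-reflection centroids are found in an IR frame, and their difference vector is mapped to gaze angles.

    ```python
    # Minimal sketch (assumed, simplified): gaze from the pupil-to-glint vector
    # in an IR eye image. Thresholds and the calibration mapping are illustrative.
    import numpy as np

    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])

    def gaze_from_frame(ir_frame, cal_gain=np.array([0.12, 0.12]), cal_offset=np.zeros(2)):
        pupil_mask = ir_frame < 40          # pupil appears dark under off-axis IR illumination
        glint_mask = ir_frame > 230         # corneal reflection is the brightest spot
        vec = centroid(pupil_mask) - centroid(glint_mask)
        return cal_gain * vec + cal_offset  # gaze angles (deg) after user calibration

    frame = np.full((120, 160), 128, dtype=np.uint8)   # synthetic IR frame
    frame[50:70, 70:90] = 10                           # dark pupil blob
    frame[58:61, 95:98] = 255                          # bright glint
    print("gaze estimate (deg):", gaze_from_frame(frame))
    ```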

  7. Optimization of illumination schemes in a head-mounted display integrated with eye tracking capabilities

    NASA Astrophysics Data System (ADS)

    Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.

    2005-08-01

    Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.

  8. An Insect Eye Inspired Miniaturized Multi-Camera System for Endoscopic Imaging.

    PubMed

    Cogal, Omer; Leblebici, Yusuf

    2017-02-01

    In this work, we present a miniaturized high definition vision system inspired by insect eyes, with a distributed illumination method, which can work in dark environments for proximity imaging applications such as endoscopy. Our approach is based on modeling biological systems with off-the-shelf miniaturized cameras combined with digital circuit design for real time image processing. We built a 5 mm radius hemispherical compound eye, imaging a 180° × 180° field of view while providing more than 1.1 megapixels (emulated ommatidia) as real-time video with an inter-ommatidial angle Δϕ = 0.5° at 18 mm radial distance. We made an FPGA implementation of the image processing system which is capable of generating 25 fps video with 1080 × 1080 pixel resolution at a 120 MHz processing clock frequency. When compared to similar size insect eye mimicking systems in the literature, the system proposed in this paper features a 1000× resolution increase. To the best of our knowledge, this is the first time that a compound eye with built-in illumination has been reported. We are offering our miniaturized imaging system for endoscopic applications like colonoscopy or laparoscopic surgery, where there is a need for large field of view, high definition imagery. For that purpose we tested our system inside a human colon model. We also present the resulting images and videos from the human colon model in this paper.

  9. Noncontact detection of dry eye using a custom designed infrared thermal image system

    NASA Astrophysics Data System (ADS)

    Su, Tai Yuan; Hwa, Chen Kerh; Liu, Po Hsuan; Wu, Ming Hong; Chang, David O.; Su, Po Fang; Chang, Shu Wen; Chiang, Huihua Kenny

    2011-04-01

    Dry eye syndrome is a common irritating eye disease. Current clinical diagnostic methods are invasive and uncomfortable for patients. This study developed a custom designed noncontact infrared (IR) thermal image system to measure the spatial and temporal variation of the ocular surface temperature over a 6-second eye-open period. This research defined two parameters: the temperature difference value and the compactness value to represent the temperature change and the irregularity of the temperature distribution on the tear film. Using these two parameters, this study achieved discrimination results for the dry eye and the normal eye groups; the sensitivity is 0.84, the specificity is 0.83, and the receiver operating characteristic area is 0.87. The results suggest that the custom designed IR thermal image system may be used as an effective tool for noncontact detection of dry eye.
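
    The two parameters can be illustrated with assumed definitions (the abstract does not give the exact formulas): a mean temperature difference between the first and last thermal frames, and a compactness measure of the cooler tear-film region.

    ```python
    # Minimal sketch with assumed definitions of the temperature-difference and
    # compactness parameters; the paper's exact formulas may differ.
    import numpy as np

    def temperature_difference(roi_first_frame, roi_last_frame):
        """Mean ocular-surface cooling between the first and last thermal frame."""
        return float(roi_first_frame.mean() - roi_last_frame.mean())

    def compactness(region_mask):
        """Perimeter^2 / (4*pi*area); ~1 for a circle, larger for irregular shapes."""
        area = region_mask.sum()
        padded = np.pad(region_mask, 1)
        # Boundary pixels: in the mask but missing at least one 4-neighbor.
        boundary = region_mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                                   padded[1:-1, :-2] & padded[1:-1, 2:])
        return boundary.sum() ** 2 / (4 * np.pi * area)

    # Synthetic 64x64 thermal ROI cooling by ~0.4 degC, with a circular cool patch.
    rng = np.random.default_rng(4)
    frame0 = 34.0 + rng.normal(0, 0.05, (64, 64))
    frame1 = frame0 - 0.4
    yy, xx = np.mgrid[:64, :64]
    cool_patch = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2

    print("temperature difference:", round(temperature_difference(frame0, frame1), 2))
    print("compactness:", round(compactness(cool_patch), 2))
    ```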

  10. Noncontact detection of dry eye using a custom designed IR thermal image system

    NASA Astrophysics Data System (ADS)

    Su, Tai Yuan; Chen, Kerh Hwa; Liu, Po Hsuan; Wu, Ming Hong; Chang, David O.; Chiang, Huihua

    2011-03-01

    Dry eye syndrome is a common irritating eye disease. Current clinical diagnostic methods are invasive and uncomfortable for patients. A custom designed noncontact infrared (IR) thermal image system was developed to measure the spatial and temporal variation of the ocular surface temperature over a 6-second eye-opening period. We defined two parameters, the temperature difference value and the compactness value, to represent the degree of the temperature change and the irregularity of the temperature distribution on the tear film. Using these two parameters, a linear discrimination result for the dry eye and the normal eye groups was obtained in this study; the sensitivity is 0.9, the specificity is 0.86, and the receiver operating characteristic (ROC) area is 0.91. The result suggests that the custom designed IR thermal image system may be used as an effective tool for noncontact detection of dry eye.

  11. Measurement of eye lens dose for Varian On-Board Imaging with different cone-beam computed tomography acquisition techniques

    PubMed Central

    Deshpande, Sudesh; Dhote, Deepak; Thakur, Kalpna; Pawar, Amol; Kumar, Rajesh; Kumar, Munish; Kulkarni, M. S.; Sharma, S. D.; Kannan, V.

    2016-01-01

    The objective of this work was to measure patient eye lens dose for different cone-beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using optically stimulated luminescence dosimeters (OSLDs) and to study the variation in eye lens dose with patient geometry and the distance from the isocenter to the eye lens. During the experimental measurements, an OSLD was placed on the patient between the eyebrows, in line with the nose, during CBCT image acquisition to measure eye lens doses. The eye lens dose measurements were carried out for three different cone-beam acquisition protocols (standard-dose head, low-dose head [LDH], and high-quality head [HQH]) of the Varian OBI. Measured doses were correlated with patient geometry and the distance between the isocenter and the eye lens. Measured eye lens doses for the standard head and HQH protocols were in the range of 1.8–3.2 mGy and 4.5–9.9 mGy, respectively, whereas the measured eye lens dose for the LDH protocol was in the range of 0.3–0.7 mGy. The measured data indicate that the eye lens dose to the patient depends on the selected imaging protocol. It was also observed that eye lens dose does not depend on patient geometry but depends strongly on the distance between the eye lens and the treatment field isocenter. However, the undoubted advantages of the imaging system should not be counterbalanced by an inappropriate selection of imaging protocol, especially a very intense imaging protocol. PMID:27651564

  12. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor was simulated using optical design software, and an optical bench prototype was built using a mechanical eye device, a beam splitter, an illumination system, lenses, mirrors, a mirrored prism, a movable mirror, the wavefront sensor, and a CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel through the beam splitter to the optical system; some rays fall on the CCD camera while others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations were constructed using the optical design software Zemax. The computer-generated HS images for each case were acquired and processed using customized techniques. The simulated and real images for low-order aberrations were compared using centroid coordinates to ensure that the optical bench system precisely matches the simulated system. Afterwards, a simulated version of the retinal image is constructed to show how a typical eye would perceive an optotype positioned 20 ft away. Personalized corrections can be applied by eye doctors based on different Zernike polynomial values, and the optical images are rendered with the new parameters. Optical images of how that eye would see with or without correction of certain aberrations are generated, showing which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
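
    A small illustrative example (not the authors' software) of representing and selectively correcting aberrations with low-order Zernike terms, the kind of manipulation on which the simulated retinal images are based; the coefficients below are assumptions.

    ```python
    # Minimal sketch (illustrative only): build a wavefront map from a few
    # low-order Zernike terms over the pupil and simulate removing one term.
    import numpy as np

    def wavefront(coeffs, n=128):
        """coeffs: microns for 'defocus', 'astig0', 'astig45' (assumed subset of terms)."""
        y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        w = (coeffs.get("defocus", 0.0) * np.sqrt(3) * (2 * rho**2 - 1)
             + coeffs.get("astig0", 0.0) * np.sqrt(6) * rho**2 * np.cos(2 * theta)
             + coeffs.get("astig45", 0.0) * np.sqrt(6) * rho**2 * np.sin(2 * theta))
        w[rho > 1] = np.nan                      # outside the pupil
        return w

    # Simulate a "personalized" partial correction that removes only the defocus term.
    full = wavefront({"defocus": 0.8, "astig0": 0.3})
    corrected = wavefront({"defocus": 0.0, "astig0": 0.3})
    print("RMS wavefront error before/after (um):", np.nanstd(full), np.nanstd(corrected))
    ```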

  13. Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes

    NASA Astrophysics Data System (ADS)

    Kim, Ki Wan; Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Lee, Eui Chul; Park, Kang Ryoung

    2015-03-01

    The classification of eye openness and closure has been researched in various fields, e.g., driver drowsiness detection, physiological status analysis, and eye fatigue measurement. For a classification with high accuracy, accurate segmentation of the eye region is required. Most previous research used segmentation by image binarization on the basis that the eyeball is darker than skin, but the performance of this approach is frequently affected by thick eyelashes or shadows around the eye. Thus, we propose a fuzzy-based method for classifying eye openness and closure. First, the proposed method uses I and K color information from the HSI and CMYK color spaces, respectively, for eye segmentation. Second, the eye region is binarized using a fuzzy logic system based on the I and K inputs, which is less affected by eyelashes and shadows around the eye; the combined image of I and K pixels is obtained through the fuzzy logic system. Third, in order to reflect the effect of all the inference values on the output score of the fuzzy system, we use a revised weighted average method, in which all the rectangular regions given by the inference values are considered when calculating the output score. Fourth, the classification of eye openness or closure is successfully performed by the proposed fuzzy-based method on low-resolution eye images captured in an environment where people watch TV at a distance. By using the fuzzy logic system, our method does not require an additional training procedure, irrespective of the chosen database. Experimental results with two databases of eye images show that our method is superior to previous approaches.
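
    A minimal sketch of a fuzzy combination of the I and K channels; the membership functions, rules, and plain weighted-average defuzzifier below are assumptions and simpler than the paper's revised weighted average method.

    ```python
    # Minimal sketch (assumed memberships and rules): score each pixel as "eye
    # region" from its I (HSI) and K (CMYK) values and binarize the score.
    import numpy as np

    def ramp_down(x, a, b):
        """Membership that is 1 below a, 0 above b, linear in between."""
        return np.clip((b - x) / (b - a), 0.0, 1.0)

    def ramp_up(x, a, b):
        """Membership that is 0 below a, 1 above b, linear in between."""
        return np.clip((x - a) / (b - a), 0.0, 1.0)

    def eye_region_score(i_chan, k_chan):
        # Antecedents (assumed): a dark I value and a high K value suggest eyeball pixels.
        i_dark, i_bright = ramp_down(i_chan, 0.2, 0.5), ramp_up(i_chan, 0.3, 0.7)
        k_high, k_low = ramp_up(k_chan, 0.4, 0.8), ramp_down(k_chan, 0.2, 0.6)
        # Rule strengths (min operator) and assumed output levels.
        strength = np.stack([np.minimum(i_dark, k_high),    # clearly eye   -> 1.0
                             np.minimum(i_dark, k_low),     # possibly eye  -> 0.6
                             np.minimum(i_bright, k_high),  # possibly eye  -> 0.4
                             np.minimum(i_bright, k_low)])  # skin          -> 0.0
        levels = np.array([1.0, 0.6, 0.4, 0.0])
        # Plain weighted-average defuzzification (the paper uses a revised version).
        return (levels[:, None, None] * strength).sum(0) / (strength.sum(0) + 1e-9)

    rng = np.random.default_rng(5)
    i_chan = rng.random((60, 80))            # stand-in for the normalized I channel
    k_chan = rng.random((60, 80))            # stand-in for the normalized K channel
    binary_eye_region = eye_region_score(i_chan, k_chan) > 0.5
    print("eye-region pixels:", int(binary_eye_region.sum()))
    ```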

  14. Ocular Screening System

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Used to detect eye problems in children through analysis of retinal reflexes, the system incorporates image processing techniques. VISISCREEN's photorefractor is basically a 35 millimeter camera with a telephoto lens and an electronic flash. By making a color photograph, the system can test the human eye for refractive error and obstruction in the cornea or lens. Ocular alignment problems are detected by imaging both eyes simultaneously. The electronic flash sends light into the eyes and the light is reflected from the retina back to the camera lens. The photorefractor analyzes the retinal reflexes generated by the subject's response to the flash and produces an image of the subject's eyes in which the pupils are variously colored. The nature of a defect, where one exists, is identifiable by a trained observer's visual examination.

  15. Compact survey and inspection day/night image sensor suite for small unmanned aircraft systems (EyePod)

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.

    2010-04-01

    EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.

  16. Communication Aid with Human Eyes Only

    NASA Astrophysics Data System (ADS)

    Arai, Kohei; Yajima, Kenro

    A communication aid operated with the human eyes only is proposed. A set of candidate characters is displayed on the screen of a relatively small and light head-mounted display (HMD) mounted on glasses worn by the user. The user looks at a candidate character with the left eye while a picture of the right eye is taken with a small, light web camera that is also mounted on the glasses. The proposed system can select among 81 characters using a two-layer arrangement of 9-by-9 candidate character images. In addition, there is another selection screen containing control keys and frequently used sentences. By image matching between a previously acquired template image for each candidate character and the currently acquired image, the system determines which of the candidate characters is selected. By combining blinking with eye fixation, the system recognizes that the user has confirmed the selected key from the candidates. The blink detection method employs a morphological filter to avoid mistaking eyebrows and shadows for the dark eye region. The user can thus input sentences, and may also edit them before they are read out with text-to-speech (TTS) software. The system therefore supports conversation between handicapped or disabled persons without voice and other people, because the only function required for conversation is the eyes. The proposed system can also be used as an input device for wearable computing systems. Test results with 6 different able-bodied persons show that the proposed system works at an acceptable speed of around 1.5 seconds per character.
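
    A minimal sketch of the template-matching step, with synthetic images standing in for the stored candidate templates and the current right-eye frame; the matching score and image sizes are assumptions.

    ```python
    # Minimal sketch (assumed, simplified): pick the candidate character whose
    # stored eye-region template best matches the current right-eye image.
    import numpy as np
    import cv2

    def select_candidate(current_eye_gray, templates):
        """Return the candidate key with the highest normalized correlation score."""
        scores = {key: float(cv2.matchTemplate(current_eye_gray, tmpl,
                                               cv2.TM_CCOEFF_NORMED).max())
                  for key, tmpl in templates.items()}
        return max(scores, key=scores.get), scores

    rng = np.random.default_rng(6)
    # Stand-ins: one stored eye-region template per candidate character.
    templates = {c: rng.integers(0, 256, (40, 60), dtype=np.uint8) for c in "abc"}
    # The current frame resembles the template for "b" (i.e., the user gazes at "b").
    current = cv2.GaussianBlur(templates["b"], (3, 3), 0)

    key, scores = select_candidate(current, templates)
    print("selected candidate:", key)
    ```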

  17. An adaptive optics imaging system designed for clinical use

    PubMed Central

    Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R.; Rossi, Ethan A.

    2015-01-01

    Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2–3 arc minutes (arcmin), 2) ~0.5–0.8 arcmin, and 3) ~0.05–0.07 arcmin, for normal eyes. Performance in eyes with poor fixation was: 1) ~3–5 arcmin, 2) ~0.7–1.1 arcmin, and 3) ~0.07–0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing. PMID:26114033

  18. The coordinate system of the eye in cataract surgery: Performance comparison of the circle Hough transform and Daugman's algorithm

    NASA Astrophysics Data System (ADS)

    Vlachynska, Alzbeta; Oplatkova, Zuzana Kominkova; Sramka, Martin

    2017-07-01

    The aim of this work is to determine the coordinate system of an eye and insert a polar-axis system into images captured by a slit lamp. The image of the eye with the polar axis helps the surgeon accurately implant a toric intraocular lens at the required position/rotation during cataract surgery. In this paper, two common algorithms for pupil detection are compared: the circle Hough transform and Daugman's algorithm. The procedures were tested and analysed on an anonymous data set of 128 eyes captured at the Gemini eye clinic in 2015.
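
    As an illustration of the first of the two compared methods, the sketch below locates a pupil-like dark disc with OpenCV's circle Hough transform; the parameter values and the synthetic image are assumptions, and Daugman's integro-differential operator would replace this detection step.

    ```python
    # Minimal sketch (assumed parameters): pupil localization with the circle
    # Hough transform on a synthetic slit-lamp-like image.
    import numpy as np
    import cv2

    def detect_pupil(gray_eye):
        blurred = cv2.medianBlur(gray_eye, 5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                                   param1=100, param2=30, minRadius=20, maxRadius=120)
        if circles is None:
            return None
        x, y, r = np.round(circles[0, 0]).astype(int)   # strongest circle
        return (x, y), r                                # pupil center and radius

    img = np.full((480, 640), 200, np.uint8)            # synthetic bright background
    cv2.circle(img, (320, 240), 60, 30, -1)             # dark disc standing in for the pupil
    print("pupil center/radius:", detect_pupil(img))
    ```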

  19. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Via, Riccardo, E-mail: riccardo.via@polimi.it; Fassi, Aurora; Fattori, Giovanni

    Purpose: External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Methods: Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Results: Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. Conclusions: A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The device aims at improving state-of-the-art invasive procedures based on surgical implantation of radiopaque clips and repeated acquisition of X-ray images, with expected positive effects on treatment quality and patient outcome.

  20. Optical eye tracking system for real-time noninvasive tumor localization in external beam radiotherapy.

    PubMed

    Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Riboldi, Marco; Ciocca, Mario; Orecchia, Roberto; Baroni, Guido

    2015-05-01

    External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The device aims at improving state-of-the-art invasive procedures based on surgical implantation of radiopaque clips and repeated acquisition of X-ray images, with expected positive effects on treatment quality and patient outcome.
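
    One way to realize the torsion measurement described above (template matching in the iris region) is to unwrap the iris annulus into a polar strip and search for the angular shift that best matches a reference strip. The sketch below is illustrative only: the sampling geometry, search range and function names are our assumptions, not the ETS implementation.

```python
import cv2
import numpy as np

def unwrap_iris(gray, center, r_inner, r_outer, n_angles=360, n_radii=32):
    """Sample the iris annulus onto a (radius x angle) rectangular strip.
    `center` is the pupil centre in (x, y) pixel coordinates."""
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_inner, r_outer, n_radii)
    xs = center[0] + np.outer(radii, np.cos(angles))
    ys = center[1] + np.outer(radii, np.sin(angles))
    return cv2.remap(gray, xs.astype(np.float32), ys.astype(np.float32), cv2.INTER_LINEAR)

def torsion_angle(reference_strip, current_strip, search_deg=30):
    """Return the circular angular shift (degrees) maximizing correlation."""
    n_angles = reference_strip.shape[1]
    best_shift, best_score = 0, -np.inf
    for shift in range(-search_deg, search_deg + 1):
        rolled = np.roll(current_strip, shift, axis=1)
        score = np.corrcoef(reference_strip.ravel(), rolled.ravel())[0, 1]
        if score > best_score:
            best_score, best_shift = score, shift
    return best_shift * 360.0 / n_angles
```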

  1. Comparing the Zeiss Callisto Eye and the Alcon Verion Image Guided System Toric Lens Alignment Technologies.

    PubMed

    Hura, Arjan S; Osher, Robert H

    2017-07-01

    To compare the alignment meridian generated by the Zeiss Callisto Eye (Carl Zeiss AG, Dublin, CA) and the Alcon Verion Image Guided System (Alcon Laboratories, Inc., Fort Worth, TX). In this retrospective comparative evaluation of technology, intraoperative images were captured at different steps in the same surgery, allowing comparison of the guidance lines generated by the Verion system with the parallel guidance lines generated by the Callisto Eye system. Measurements of each hemi-meridian were quantified using Adobe Photoshop 2015 CC software (Adobe Systems, San Jose, CA). The number of degrees separating these alignment meridians was calculated, entered into a database, and analyzed. The authors found that of 98 captured images of 16 eyes, the two technologies were identical (θ1 = θ2 = 0) in 0 eyes, within 3° of each other in 52 (53%) captured images, and different by at least 3° in 46 (47%) captured images. Images were thus classified according to whether the target meridians were superimposed, the target lines were minimally separated, or the target lines were dissimilar. Some intraoperative variation occurred from measurement to measurement. Within the small group of 16 cases of routine toric lens implantation in this study, the absolute average misalignment between the Verion and Callisto Eye systems was 3.355° for θ1 and 3.838° for θ2. On average, the intraoperative variation termed "drift" was 3.963° for θ1 and 4.557° for θ2. The authors found that small deviations were frequent when comparing these two sophisticated technologies: deviations of 3° or more occurred in 47% of captured images from 16 eyes, while smaller but still notable deviations of less than 3° occurred in 53%; it was rare to identify a large deviation. However, the authors identified "drift" in the same eye when measurements were taken at different times. The results indicate that the two systems are not currently interchangeable; superiority of one system over the other was not determined. [J Refract Surg. 2017;33(7):482-487.]. Copyright 2017, SLACK Incorporated.
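
    The misalignment figures above are angular separations between meridians. As a small arithmetic illustration (not the authors' analysis code), a meridian difference can be computed modulo 180°, since an alignment axis and its opposite describe the same meridian.

```python
def meridian_difference(theta1_deg, theta2_deg):
    """Absolute angular separation between two meridians, treating them as axes (mod 180°)."""
    d = abs(theta1_deg - theta2_deg) % 180.0
    return min(d, 180.0 - d)

# Example: meridian_difference(2.0, 178.5) -> 3.5 degrees of misalignment.
```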

  2. Advanced autostereoscopic display for G-7 pilot project

    NASA Astrophysics Data System (ADS)

    Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi

    1999-05-01

    An advanced auto-stereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously without the use of special glasses or any kind of head-worn tracking device. The system is composed of a right-eye system, a left-eye system, and a sophisticated head-tracking system. In each eye system, a transmissive color liquid crystal imaging plate is used with a special backlight unit. The backlight unit consists of a monochrome 2D display and a large-format convex lens, and it directs light only to the correct eye of each viewer. The right-eye perspective system is combined with the left-eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewers' IR image is taken through and focused by the large-format convex lens and fed back to the backlight as a modulated binary half-face image. The auto-stereoscopic display employs this TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Department of Telemedicine at Duke University and the Department of Radiology at Nagoya University School of Medicine using a high-speed digital line of the GIBN. Applications are also described in this paper.

  3. Intermediate view synthesis for eye-gazing

    NASA Astrophysics Data System (ADS)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Among these, eye contact is one of the most important cues an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way of eye contact, and the lack of eye gaze can give an unapproachable and unpleasant feeling. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face and synthesize the face with the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  4. Virtual view image synthesis for eye-contact in TV conversation system

    NASA Astrophysics Data System (ADS)

    Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae

    2010-02-01

    Eye-contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach to achieve eye-contact using techniques of arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method that uses a single camera to save computational cost, in which only one real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results for both methods show that the synthesized virtual images achieve eye-contact favorably.
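
    The homography step can be sketched as follows: a camera image is warped toward the virtual viewpoint at the display centre using a 3x3 homography assumed to come from calibration. The matrix values below are placeholders, not calibration results from the paper.

```python
import cv2
import numpy as np

# Hypothetical calibration result mapping the real camera view to the virtual view.
H_cam_to_virtual = np.array([[1.02, 0.01, -15.0],
                             [0.00, 1.03,  -8.0],
                             [0.00, 0.00,   1.0]], dtype=np.float64)

def warp_to_virtual_view(image, H=H_cam_to_virtual):
    """Warp a camera frame toward the assumed virtual viewpoint at the display centre."""
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h), flags=cv2.INTER_LINEAR)
```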

  5. Digital image analysis: improving accuracy and reproducibility of radiographic measurement.

    PubMed

    Bould, M; Barnard, S; Learmonth, I D; Cunningham, J L; Hardy, J R

    1999-07-01

    To assess the accuracy and reproducibility of a digital image analyser and the human eye in measuring radiographic dimensions, we experimentally compared radiographic measurement using either an image analyser system or the human eye with a digital caliper. The assessment of total hip arthroplasty wear from radiographs relies on both the accuracy of radiographic images and the accuracy of radiographic measurement. Radiographs were taken of a slip gauge (30+/-0.00036 mm) and of the slip gauge with a femoral stem. The projected dimensions of the radiographic images were calculated by trigonometry. The radiographic dimensions were then measured by blinded observers using both techniques. For a single radiograph, the human eye was accurate to 0.26 mm and reproducible to +/-0.1 mm. In comparison, the digital image analyser system was accurate to 0.01 mm with a reproducibility of +/-0.08 mm. In an arthroplasty model, where the dimensions of an object were corrected for magnification using the known dimensions of a femoral head, the human eye was accurate to 0.19 mm, whereas the image analyser system was accurate to 0.04 mm. The digital image analysis system is therefore up to 20 times more accurate than the human eye, and in an arthroplasty model the accuracy of measurement increases four-fold. We believe such image analysis may allow more accurate and reproducible measurement of wear from standard follow-up radiographs.
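
    The magnification correction used in the arthroplasty model amounts to scaling a measured dimension by the ratio of the known femoral-head diameter to its measured image diameter. The numbers below (including the 28 mm head) are illustrative, not values from the study.

```python
def corrected_dimension(measured_object_mm, measured_head_mm, true_head_mm=28.0):
    """Correct a radiographic measurement for magnification using a known reference size."""
    scale = true_head_mm / measured_head_mm            # magnification factor of the film
    return measured_object_mm * scale

# Example: a 2.10 mm feature measured on film, with a 28 mm head imaged at 31.1 mm,
# corresponds to corrected_dimension(2.10, 31.1) ~= 1.89 mm in the patient.
```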

  6. Microoptical artificial compound eyes: from design to experimental verification of two different concepts

    NASA Astrophysics Data System (ADS)

    Duparré, Jacques; Wippermann, Frank; Dannberg, Peter; Schreiber, Peter; Bräuer, Andreas; Völkel, Reinhard; Scharf, Toralf

    2005-09-01

    Two novel objective types on the basis of artificial compound eyes are examined. Both imaging systems are well suited for fabrication using microoptics technology due to the small required lens sags. In the apposition optics a microlens array (MLA) and a photo detector array of different pitch in its focal plane are applied. The image reconstruction is based on moire magnification. Several generations of demonstrators of this objective type are manufactured by photo lithographic processes. This includes a system with opaque walls between adjacent channels and an objective which is directly applied onto a CMOS detector array. The cluster eye approach, which is based on a mixture of superposition compound eyes and the vision system of jumping spiders, produces a regular image. Here, three microlens arrays of different pitch form arrays of Keplerian microtelescopes with tilted optical axes, including a field lens. The microlens arrays of this demonstrator are also fabricated using microoptics technology, aperture arrays are applied. Subsequently the lens arrays are stacked to the overall microoptical system on wafer scale. Both fabricated types of artificial compound eye imaging systems are experimentally characterized with respect to resolution, sensitivity and cross talk between adjacent channels. Captured images are presented.

  7. Implicit prosody mining based on the human eye image capture technology

    NASA Astrophysics Data System (ADS)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    The technology of eye trackers has become one of the main methods for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Based on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of implicit prosody mining based on human eye image capture technology: parameters are extracted from images of the human eyes during reading, used to control and drive prosody generation in speech synthesis, and used to establish a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For this model, the paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture technology, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of the human eyes during reading is a comprehensive, multi-factor interactive process involving fixations, saccades, and regressions (looking back). Therefore, how to extract the appropriate information from eye images needs to be considered, and the gaze regularities of the eyes need to be obtained as references for modeling. Based on an analysis of three current eye-movement control models and the characteristics of implicit prosody in reading, the relative independence between the text speech-processing system and the eye-movement control system is discussed. It is shown that, under the same level of text familiarity, the gaze duration of the eyes during reading and the duration of internal voice pronunciation are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented to replace previous methods of machine learning and probabilistic forecasting, to obtain readers' real internal reading rhythm, and to synthesize speech with personalized rhythm. This research enriches the forms of human-computer interaction and has practical significance and application prospects for assistive speech interaction for disabled users. Experiments show that implicit prosody mining based on human eye image capture technology gives the synthesized speech more flexible expression.

  8. SU-E-J-11: Measurement of Eye Lens Dose for Varian On-Board Imaging with Different CBCT Acquisition Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deshpande, S; Dhote, D; Kumar, R

    Purpose: To measure actual patient eye lens dose for different cone beam computed tomography (CBCT) acquisition protocols of Varian's On-Board Imaging (OBI) system using an Optically Stimulated Luminescence (OSL) dosimeter, and to study the eye lens dose as a function of patient geometry and the distance from the isocenter to the eye lens. Methods: An OSL dosimeter was used to measure the eye lens dose of the patient. The OSL dosimeter was placed at the center of the patient's forehead during CBCT image acquisition. Eye lens doses were measured for three different cone beam acquisition protocols (standard-dose head, low-dose head and high-quality head) of the Varian On-Board Imaging system. Measured doses were correlated with patient geometry and the distance between the isocenter and the eye lens. Results: The measured eye lens dose was in the range of 1.8 mGy to 3.2 mGy for the standard-dose head protocol, 4.5 mGy to 9.9 mGy for the high-quality head protocol, and 0.3 mGy to 0.7 mGy for the low-dose head protocol. The dose to the eye lens depends on the position of the isocenter; for posteriorly located tumors the eye lens dose is lower. Conclusion: From the measured doses it can be concluded that, by proper selection of the imaging protocol and the frequency of imaging, it is possible to keep the eye lens dose below the new limit set by the ICRP. However, the undoubted advantages of the imaging system should be counterbalanced by careful consideration of the imaging protocol, especially for very intense imaging sequences for Adaptive Radiotherapy or IMRT.

  9. Portable Hyperspectral Imaging Broadens Sensing Horizons

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Broadband multispectral imaging can be very helpful in showing differences in energy being radiated and is often employed by NASA satellites to monitor temperature and climate changes. In addition, hyperspectral imaging is ideal for advanced laboratory uses, biomedical imaging, forensics, counter-terrorism, skin health, food safety, and Earth imaging. Lextel Intelligence Systems, LLC, of Jackson, Mississippi purchased Photon Industries Inc., a spinoff company of NASA's Stennis Space Center and the Institute for Technology Development dedicated to developing new hyperspectral imaging technologies. Lextel has added new features to and expanded the applicability of the hyperspectral imaging systems. It has made advances in the size, usability, and cost of the instruments. The company now offers a suite of turnkey hyperspectral imaging systems based on the original NASA groundwork. It currently has four lines of hyperspectral imaging products: the EagleEye VNIR 100E, the EagleEye SWIR 100E, the EagleEye SWIR 200E, and the EagleEye UV 100E. These Lextel instruments are used worldwide for a wide variety of applications including medical, military, forensics, and food safety.

  10. Active eye-tracking for an adaptive optics scanning laser ophthalmoscope

    PubMed Central

    Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin

    2015-01-01

    We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370

  11. Fluorescent scanning laser ophthalmoscopy for cellular resolution in vivo mouse retinal imaging: benefits and drawbacks of implementing adaptive optics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhang, Pengfei; Goswami, Mayank; Pugh, Edward N.; Zawadzki, Robert J.

    2016-03-01

    Scanning Laser Ophthalmoscopy (SLO) is a very important imaging tool in ophthalmology research. By combining it with Adaptive Optics (AO), AO-SLO can correct for ocular aberrations, resulting in cellular-level resolution and allowing longitudinal studies of single-cell morphology in living eyes. The numerical aperture (NA) sets the optical resolution that can be achieved in "classical" imaging systems. The mouse eye has more than twice the NA of the human eye, thus offering theoretically higher resolution. However, in most SLO-based imaging systems the imaging beam size at the mouse pupil sets the NA of the instrument, while most AO-SLO systems use almost the full NA of the mouse eye. In this report, we first simulated the theoretical resolution that can be achieved in vivo for different imaging beam sizes (different NA), assuming two cases: no aberrations, and aberrations based on published mouse ocular wavefront data. Then we imaged mouse retinas with our custom-built SLO system using different beam sizes to compare these results with theory. Further experiments include a comparison of the SLO and AO-SLO systems for imaging different types of fluorescently labeled cells (microglia, ganglion cells, photoreceptors, etc.). By comparing those results and taking into account system complexity and ease of use, the benefits and drawbacks of the two imaging systems will be discussed.

  12. Design of optical system for binocular fundus camera.

    PubMed

    Wu, Jun; Lou, Shiliang; Xiao, Zhitao; Geng, Lei; Zhang, Fang; Wang, Wen; Liu, Mengjia

    2017-12-01

    A non-mydriatic optical system for a binocular fundus camera has been designed in this paper. It can capture two images of the same fundus region from different angles at the same time, and can be used for three-dimensional reconstruction of the fundus. It is composed of an imaging system and an illumination system. In the imaging system, the Gullstrand-Le Grand eye model is used to simulate the normal human eye, and a schematic eye model is used to test the influence of ametropia of the human eye on imaging quality. An annular aperture and a black-dot board are added to the illumination system so that it can eliminate stray light produced by corneal reflection and the ophthalmoscopic lens. Simulation results show that the MTF of each field of view at the cut-off frequency of 90 lp/mm is greater than 0.2, the distortion of the system is -2.7%, the field curvature is less than 0.1 mm, and the radius of the Airy disc is 3.25 μm. This system has a strong ability to correct chromatic aberration and to focus, and can clearly image the human fundus over a range of refractive errors from -10 D to +6 D (1 D = 1 m⁻¹).

  13. Single-shot dimension measurements of the mouse eye using SD-OCT.

    PubMed

    Jiang, Minshan; Wu, Pei-Chang; Fini, M Elizabeth; Tsai, Chia-Ling; Itakura, Tatsuo; Zhang, Xiangyang; Jiao, Shuliang

    2012-01-01

    The authors demonstrate the feasibility and advantage of spectral-domain optical coherence tomography (SD-OCT) for single-shot ocular biometric measurement during the development of the mouse eye. A high-resolution SD-OCT system was built for single-shot imaging of the whole mouse eye in vivo. The axial resolution and imaging depth of the system are 4.5 μm (in tissue) and 5.2 mm, respectively. The system is capable of acquiring a cross-sectional OCT image consisting of 2,048 depth scans in 85 ms. The imaging capability of the SD-OCT system was validated by imaging the normal ocular growth and experimental myopia model using C57BL/6J mice. The biometric dimensions of the mouse eye can be calculated directly from one snapshot of the SD-OCT image. The biometric parameters of the mouse eye including axial length, corneal thickness, anterior chamber depth, lens thickness, vitreous chamber depth, and retinal thickness were successfully measured by the SD-OCT. In the normal ocular growth group, the axial length increased significantly from 28 to 82 days of age (P < .001). The lens thickness increased and the vitreous chamber depth decreased significantly during this period (P < .001 and P = .001, respectively). In the experimental myopia group, there were significant increases in vitreous chamber depth and axial length in comparison to the control eyes (P = .040 and P < .001, respectively). SD-OCT is capable of providing single-shot direct, fast, and high-resolution measurements of the dimensions of young and adult mouse eyes. As a result, SD-OCT is a potentially powerful tool that can be easily applied to research in eye development and myopia using small animal models. Copyright 2012, SLACK Incorporated.

  14. A 2D eye gaze estimation system with low-resolution webcam images

    NASA Astrophysics Data System (ADS)

    Ince, Ibrahim Furkan; Kim, Jin Woo

    2011-12-01

    In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for making stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is used for initial deformable template alignment. DTBGE starts with this initial alignment and updates the template alignment frame by frame with the resulting eye movements and eyeball size. The horizontal and vertical deviation of eye movements, normalized by the eyeball size, is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust against corneal reflection. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
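
    The proportional mapping stated above can be sketched as follows: the deviation of the approximate pupil centre from the eyeball centre, normalized by the eyeball size, is scaled to cursor coordinates on a known screen. The function name, gain and screen size below are assumptions, not parameters from the paper.

```python
def deviation_to_cursor(pupil_xy, eyeball_center_xy, eyeball_radius_px,
                        screen_w=1920, screen_h=1080, gain=1.0):
    """Map normalized eye deviation to a clamped cursor position on the screen."""
    dx = (pupil_xy[0] - eyeball_center_xy[0]) / eyeball_radius_px
    dy = (pupil_xy[1] - eyeball_center_xy[1]) / eyeball_radius_px
    # Normalized deviation (roughly -1..1) is mapped linearly and clamped to the screen.
    cx = min(max((0.5 + 0.5 * gain * dx) * screen_w, 0), screen_w - 1)
    cy = min(max((0.5 + 0.5 * gain * dy) * screen_h, 0), screen_h - 1)
    return int(cx), int(cy)
```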

  15. Characterization of Long Working Distance Optical Coherence Tomography for Imaging of Pediatric Retinal Pathology.

    PubMed

    Qian, Ruobing; Carrasco-Zevallos, Oscar M; Mangalesh, Shwetha; Sarin, Neeru; Vajzovic, Lejla; Farsiu, Sina; Izatt, Joseph A; Toth, Cynthia A

    2017-10-01

    We determined the feasibility of fovea and optic nerve head imaging with a long working distance (LWD) swept source optical coherence tomography (OCT) prototype in adults, teenagers, and young children. A prototype swept source OCT system with a LWD (defined as distance from the last optical element of the imaging system to the eye) of 350 mm with custom fixation targets was developed to facilitate imaging of children. Imaging was performed in 49 participants from three age groups: 26 adults, 16 children 13 to 18 years old (teenagers), and seven children under 6 years old (young children) under an approved institutional review board protocol. The imaging goal was to acquire high quality scans of the fovea and optic nerve in each eye in the shortest time possible. OCT B-scans and volumes of the fovea and optic nerve head of each eligible eye were captured and graded based on four categories (lateral and axial centration, contrast, and resolution) and on ability to determine presence or absence of pathology. LWD-OCT imaging was successful in 88 of 94 eligible eyes, including seven of 10 eyes of young children. Of the successfully acquired OCT images, 83% of B-scan and volumetric images, including 86% from young children, were graded as high-quality scans. Pathology was observed in high-quality OCT images. The prototype LWD-OCT system achieved high quality retinal imaging of adults, teenagers, and some young children with and without pathology with reasonable alignment time. The LWD-OCT system can facilitate imaging in children.

  16. Characterization of Long Working Distance Optical Coherence Tomography for Imaging of Pediatric Retinal Pathology

    PubMed Central

    Qian, Ruobing; Carrasco-Zevallos, Oscar M.; Mangalesh, Shwetha; Sarin, Neeru; Vajzovic, Lejla; Farsiu, Sina; Izatt, Joseph A.; Toth, Cynthia A.

    2017-01-01

    Purpose We determined the feasibility of fovea and optic nerve head imaging with a long working distance (LWD) swept source optical coherence tomography (OCT) prototype in adults, teenagers, and young children. Methods A prototype swept source OCT system with a LWD (defined as distance from the last optical element of the imaging system to the eye) of 350 mm with custom fixation targets was developed to facilitate imaging of children. Imaging was performed in 49 participants from three age groups: 26 adults, 16 children 13 to 18 years old (teenagers), and seven children under 6 years old (young children) under an approved institutional review board protocol. The imaging goal was to acquire high quality scans of the fovea and optic nerve in each eye in the shortest time possible. OCT B-scans and volumes of the fovea and optic nerve head of each eligible eye were captured and graded based on four categories (lateral and axial centration, contrast, and resolution) and on ability to determine presence or absence of pathology. Results LWD-OCT imaging was successful in 88 of 94 eligible eyes, including seven of 10 eyes of young children. Of the successfully acquired OCT images, 83% of B-scan and volumetric images, including 86% from young children, were graded as high-quality scans. Pathology was observed in high-quality OCT images. Conclusions The prototype LWD-OCT system achieved high quality retinal imaging of adults, teenagers, and some young children with and without pathology with reasonable alignment time. Translational Relevance The LWD-OCT system can facilitate imaging in children. PMID:29057163

  17. Anterior Segment Optical Coherence Tomography Angiography for Identification of Iris Vasculature and Staging of Iris Neovascularization: A Pilot Study.

    PubMed

    Roberts, Philipp K; Goldstein, Debra A; Fawzi, Amani A

    2017-08-01

    Purpose/Aim of the study: To assess the ability of optical coherence tomographic angiography (OCTA) to visualize the normal iris vasculature as well as neovascularization of the iris (NVI). Study participants with healthy eyes, patients at risk of NVI development and patients with active or regressed NVI were consecutively included in this cross-sectional observational study. Imaging was performed using a commercially available OCTA system (RTVue- XR Avanti, Optovue Inc., Fremont, CA, USA). Abnormal iris vessels were graded on OCTA according to a modified clinical staging system and compared to slitlamp and gonioscopic findings. Fifty eyes of 26 study participants (16 healthy eyes, 19 eyes at risk, 15 eyes with different stages of NVI) were imaged using OCTA. In 11 out of 16 healthy eyes (69%) with light or moderately dark iris pigmentation, we observed physiological, radially aligned iris vasculature on OCTA imaging, which could not be visualized in five eyes (31%) with darkly pigmented irides. One eye in the "eyes at risk" group was diagnosed with NVI based on OCTA, which was not observed clinically. Fifteen eyes with clinically active or regressed NVI were imaged. Different stages of NVI could be differentiated by OCTA, corresponding well to an established clinical grading system. Four eyes showed regressed NVI by OCTA, not seen clinically, and were graded as a newly defined stage 4. This pilot clinical study showed that OCTA for imaging of the iris vasculature in health and disease is highly dependent on iris pigmentation. Fine, clinically invisible iris vessels can be visualized by OCTA in the very early stages as well as in the regressed stage of NVI.

  18. Anterior Segment Optical Coherence Tomography Angiography for Identification of Iris Vasculature and Staging of Iris Neovascularization: A Pilot Study

    PubMed Central

    Roberts, Philipp K.; Goldstein, Debra A.; Fawzi, Amani A.

    2017-01-01

    Purpose/Aim of the study To assess the ability of optical coherence tomographic angiography (OCTA) to visualize the normal iris vasculature as well as neovascularization of the iris (NVI). Materials and Methods Study participants with healthy eyes, patients at risk of NVI development and patients with active or regressed NVI were consecutively included in this cross-sectional observational study. Imaging was performed using a commercially available OCTA system (RTVue- XR Avanti, Optovue Inc., Fremont, CA, USA). Abnormal iris vessels were graded on OCTA according to a modified clinical staging system and compared to slitlamp and gonioscopic findings. Results Fifty eyes of 26 study participants (16 healthy eyes, 19 eyes at risk, 15 eyes with different stages of NVI) were imaged using OCTA. In 11 out of 16 healthy eyes (69%) with light or moderately dark iris pigmentation, we observed physiological, radially aligned iris vasculature on OCTA imaging, which could not be visualized in five eyes (31%) with darkly pigmented irides. One eye in the “eyes at risk” group was diagnosed with NVI based on OCTA, which was not observed clinically. Fifteen eyes with clinically active or regressed NVI were imaged. Different stages of NVI could be differentiated by OCTA, corresponding well to an established clinical grading system. Four eyes showed regressed NVI by OCTA, not seen clinically, and were graded as a newly defined stage 4. Conclusions This pilot clinical study showed that OCTA for imaging of the iris vasculature in health and disease is highly dependent on iris pigmentation. Fine, clinically invisible iris vessels can be visualized by OCTA in the very early stages as well as in the regressed stage of NVI. PMID:28441067

  19. Automated Detection of Glaucoma From Topographic Features of the Optic Nerve Head in Color Fundus Photographs.

    PubMed

    Chakrabarty, Lipi; Joshi, Gopal Datt; Chakravarty, Arunava; Raman, Ganesh V; Krishnadas, S R; Sivaswamy, Jayanthi

    2016-07-01

    To describe and evaluate the performance of an automated CAD system for detection of glaucoma from color fundus photographs. Color fundus photographs of 2252 eyes from 1126 subjects were collected from 2 centers: Aravind Eye Hospital, Madurai and Coimbatore, India. The images of 1926 eyes (963 subjects) were used to train an automated image analysis-based system, which was developed to provide a decision on a given fundus image. A total of 163 subjects were clinically examined by 2 ophthalmologists independently and their diagnostic decisions were recorded. The consensus decision was defined to be the clinical reference (gold standard). Fundus images of eyes with disagreement in diagnosis were excluded from the study. The fundus images of the remaining 314 eyes (157 subjects) were presented to 4 graders and their diagnostic decisions on the same were collected. The performance of the system was evaluated on the 314 images, using the reference standard. The sensitivity and specificity of the system and 4 independent graders were determined against the clinical reference standard. The system achieved an area under receiver operating characteristic curve of 0.792 with a sensitivity of 0.716 and specificity of 0.717 at a selected threshold for the detection of glaucoma. The agreement with the clinical reference standard as determined by Cohen κ is 0.45 for the proposed system. This is comparable to that of the image-based decisions of 4 ophthalmologists. An automated system was presented for glaucoma detection from color fundus photographs. The overall evaluation results indicated that the presented system was comparable in performance to glaucoma classification by a manual grader solely based on fundus image examination.
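
    The evaluation metrics reported above (sensitivity, specificity and Cohen's kappa against the clinical reference standard) can be made explicit with a short sketch in plain Python; labels are assumed to be 1 for glaucoma and 0 for normal, and both classes are assumed to be present.

```python
def sensitivity_specificity_kappa(reference, predicted):
    """Compute sensitivity, specificity and Cohen's kappa for binary labels (1 = glaucoma)."""
    tp = sum(1 for r, p in zip(reference, predicted) if r == 1 and p == 1)
    tn = sum(1 for r, p in zip(reference, predicted) if r == 0 and p == 0)
    fp = sum(1 for r, p in zip(reference, predicted) if r == 0 and p == 1)
    fn = sum(1 for r, p in zip(reference, predicted) if r == 1 and p == 0)
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    p_observed = (tp + tn) / n
    # Chance agreement from the marginal totals of both raters.
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sens, spec, kappa
```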

  20. Visidep (TM): A Three-Dimensional Imaging System For The Unaided Eye

    NASA Astrophysics Data System (ADS)

    McLaurin, A. Porter; Jones, Edwin R.; Cathey, LeConte

    1984-05-01

    The VISIDEP process for creating images in three dimensions on flat screens is suitable for photographic, electrographic and computer-generated imaging systems. Procedures for generating these images vary from medium to medium due to the specific requirements of each technology; imaging requirements for photographic and electrographic media are more directly tied to the hardware than are computer-based systems. Applications of these technologies are not limited to entertainment, but have implications for training, interactive computer/video systems, medical imaging, and inspection equipment. Through minor modification the system can provide three-dimensional images with accurately measurable relationships for robotics, and adds this capability for future developments in artificial intelligence. In almost any area requiring image analysis or critical review, VISIDEP provides the added advantage of three-dimensionality. All of this is readily accomplished without aids to the human eye. The system can be viewed in full color, false-color infrared, and monochromatic modalities from any angle, and is also viewable with a single eye. Thus, the potential of application for this developing system is extensive and covers the broad spectrum of human endeavor from entertainment to scientific study.

  1. Hypothesis on human eye perceiving optical spectrum rather than an image

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Szu, Harold

    2015-05-01

    It is common knowledge that we see the world because our eyes perceive an optical image, and a digital camera seems a good analogue of the eye's imaging system. However, signal sensing and imaging on the human retina are very complicated. There are at least five layers of neurons along the signal pathway: photoreceptors (cones and rods), bipolar, horizontal, amacrine and ganglion cells. To sense an optical image, photoreceptors (as sensors) plus ganglion cells (converting to electrical signals for transmission) would seem to be sufficient, and image sensing would not require a non-uniform distribution of photoreceptors such as the fovea. There are also some challenging questions: for example, why do we not notice the blind spots (where the nerve fibers exit the eyes)? A similar situation occurs in glaucoma patients, who do not notice their vision loss until 50% or more of the nerve fibers have died. Our hypothesis is that the human retina initially senses the optical (i.e., Fourier) spectrum rather than the optical image. Due to the symmetry of the Fourier spectrum, the signal lost at a blind spot or from dead nerve fibers (in glaucoma patients) can be recovered. The eye's logarithmic response to input light intensity resembles a display of Fourier magnitude, and the optics and structure of the human eye are suited to sampling the optical Fourier spectrum. It remains unclear where and how an inverse Fourier transform would be performed in the human visual system to obtain an optical image; phase-retrieval techniques in the compressive-sensing domain enable image reconstruction even without phase inputs. A spectrum-based imaging system could potentially tolerate up to 50% bad sensors (pixels), adapt to a large dynamic range (with logarithmic response), etc.
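
    The symmetry argument can be illustrated with a toy numerical example: the Fourier spectrum of a real-valued image is Hermitian (F[-u,-v] = conj(F[u,v])), so spectrum samples lost in one region can be restored from their conjugate-symmetric counterparts. This is a sketch of the mathematical property only, not a model of the retina; the block location and image are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))                  # any real-valued "scene"
F = np.fft.fft2(image)

# Simulate a "blind spot" in the spectral domain by zeroing a block of samples.
damaged = F.copy()
damaged[10:20, 30:40] = 0

# Restore the block from the surviving conjugate-symmetric samples F[-u, -v].
u, v = np.meshgrid(np.arange(10, 20), np.arange(30, 40), indexing="ij")
damaged[u, v] = np.conj(damaged[(-u) % 64, (-v) % 64])

restored = np.fft.ifft2(damaged).real
print(np.allclose(restored, image))           # True: the lost samples were recoverable
```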

  2. Wavefront Derived Refraction and Full Eye Biometry in Pseudophakic Eyes

    PubMed Central

    Mao, Xinjie; Banta, James T.; Ke, Bilian; Jiang, Hong; He, Jichang; Liu, Che; Wang, Jianhua

    2016-01-01

    Purpose To assess wavefront derived refraction and full eye biometry including ciliary muscle dimension and full eye axial geometry in pseudophakic eyes using spectral domain OCT equipped with a Shack-Hartmann wavefront sensor. Methods Twenty-eight adult subjects (32 pseudophakic eyes) having recently undergone cataract surgery were enrolled in this study. A custom system combining two optical coherence tomography systems with a Shack-Hartmann wavefront sensor was constructed to image and monitor changes in whole eye biometry, the ciliary muscle and ocular aberration in the pseudophakic eye. A Badal optical channel and a visual target aligning with the wavefront sensor were incorporated into the system for measuring the wavefront-derived refraction. The imaging acquisition was performed twice. The coefficients of repeatability (CoR) and intraclass correlation coefficient (ICC) were calculated. Results Images were acquired and processed successfully in all patients. No significant difference was detected between repeated measurements of ciliary muscle dimension, full-eye biometry or defocus aberration. The CoR of full-eye biometry ranged from 0.36% to 3.04% and the ICC ranged from 0.981 to 0.999. The CoR for ciliary muscle dimensions ranged from 12.2% to 41.6% and the ICC ranged from 0.767 to 0.919. The defocus aberrations of the two measurements were 0.443 ± 0.534 D and 0.447 ± 0.586 D and the ICC was 0.951. Conclusions The combined system is capable of measuring full eye biometry and refraction with good repeatability. The system is suitable for future investigation of pseudoaccommodation in the pseudophakic eye. PMID:27010674

  3. Wavefront Derived Refraction and Full Eye Biometry in Pseudophakic Eyes.

    PubMed

    Mao, Xinjie; Banta, James T; Ke, Bilian; Jiang, Hong; He, Jichang; Liu, Che; Wang, Jianhua

    2016-01-01

    To assess wavefront derived refraction and full eye biometry including ciliary muscle dimension and full eye axial geometry in pseudophakic eyes using spectral domain OCT equipped with a Shack-Hartmann wavefront sensor. Twenty-eight adult subjects (32 pseudophakic eyes) having recently undergone cataract surgery were enrolled in this study. A custom system combining two optical coherence tomography systems with a Shack-Hartmann wavefront sensor was constructed to image and monitor changes in whole eye biometry, the ciliary muscle and ocular aberration in the pseudophakic eye. A Badal optical channel and a visual target aligning with the wavefront sensor were incorporated into the system for measuring the wavefront-derived refraction. The imaging acquisition was performed twice. The coefficients of repeatability (CoR) and intraclass correlation coefficient (ICC) were calculated. Images were acquired and processed successfully in all patients. No significant difference was detected between repeated measurements of ciliary muscle dimension, full-eye biometry or defocus aberration. The CoR of full-eye biometry ranged from 0.36% to 3.04% and the ICC ranged from 0.981 to 0.999. The CoR for ciliary muscle dimensions ranged from 12.2% to 41.6% and the ICC ranged from 0.767 to 0.919. The defocus aberrations of the two measurements were 0.443 ± 0.534 D and 0.447 ± 0.586 D and the ICC was 0.951. The combined system is capable of measuring full eye biometry and refraction with good repeatability. The system is suitable for future investigation of pseudoaccommodation in the pseudophakic eye.

  4. Numerical study on statistical properties of speckle pattern in laser projection display based on human eye model

    NASA Astrophysics Data System (ADS)

    Cui, Zhe; Wang, Anting; Ma, Qianli; Ming, Hai

    2013-12-01

    In this paper, the laser speckle pattern on the human retina for a laser projection display is simulated. By introducing a specific eye model, the 'Indiana Eye', the statistical properties of the laser speckle are numerically investigated. The results show that the aberrations of the human eye (mostly spherical and chromatic) decrease the speckle contrast perceived by the viewer. When the wavelength of the laser source is 550 nm (green), viewers perceive the strongest speckle pattern, and the weakest when the wavelength is 450 nm (blue). Myopia and hyperopia decrease the speckle contrast by introducing large spherical aberrations. Although aberration is helpful for speckle reduction, it degrades the imaging capability of the eye. The results show that the 650 nm (red) laser source gives the best image quality on the retina. Finally, we compare the human eye with an aberration-free imaging system; both the speckle contrast and the image quality behave differently in these two imaging systems. The results are useful when a standardized measurement procedure for speckle contrast needs to be established.
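
    The quantity under discussion is the speckle contrast C = sigma_I / <I>. A minimal sketch, under the usual assumption of a random phase screen across a finite pupil propagated to the image plane by an FFT, gives fully developed speckle with C close to 1; aberrations and detector averaging reduce it. Grid and pupil sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 512, 64                                          # grid size and pupil size (pixels)
pupil = np.zeros((N, N), dtype=complex)
pupil[:P, :P] = np.exp(1j * 2 * np.pi * rng.random((P, P)))   # random phases across the pupil
intensity = np.abs(np.fft.fft2(pupil)) ** 2             # far-field ("retinal-plane") speckle

contrast = intensity.std() / intensity.mean()           # C = sigma_I / <I>
print(f"speckle contrast ~ {contrast:.2f}")             # close to 1 for fully developed speckle
```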

  5. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch

    PubMed Central

    Ruggeri, Marco; Uhlhorn, Stephen R.; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2012-01-01

    An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time. PMID:22808424

  6. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch.

    PubMed

    Ruggeri, Marco; Uhlhorn, Stephen R; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2012-07-01

    An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time.

  7. Reconstruction of the optical system of personalized eye models by using magnetic resonance imaging.

    PubMed

    Sun, Han-Yin; Lee, Chi-Hung; Chuang, Chun-Chao

    2016-11-10

    This study presents a practical method for reconstructing the optical system of personalized eye models by using magnetic resonance imaging (MRI). Monocular images were obtained from a young (20-year-old) healthy subject viewing at a near point (10 cm). Each magnetic resonance image was first analyzed using several commercial software packages to capture the profile of each optical element of the human eye except for the anterior lens surface, which could not be determined because it overlapped the ciliary muscle. The missing profile was substituted with a modified profile from a generic eye model. After the data, including the refractive indices from a generic model, were input into ZEMAX, we obtained a reasonable initial layout. By further considering the resolution of the MRI, the model was optimized to match the optical performance of a healthy eye. The main benefit of having a personalized eye model is the ability to quantitatively identify wide-angle ocular aberrations, which were corrected by the designed free-form spectacle lens.

  8. COMPARISON OF RETINAL PATHOLOGY VISUALIZATION IN MULTISPECTRAL SCANNING LASER IMAGING.

    PubMed

    Meshi, Amit; Lin, Tiezhu; Dans, Kunny; Chen, Kevin C; Amador, Manuel; Hasenstab, Kyle; Muftuoglu, Ilkay Kilic; Nudleman, Eric; Chao, Daniel; Bartsch, Dirk-Uwe; Freeman, William R

    2018-03-16

    To compare retinal pathology visualization in multispectral scanning laser ophthalmoscope imaging between the Spectralis and Optos devices. This retrospective cross-sectional study included 42 eyes from 30 patients with age-related macular degeneration (19 eyes), diabetic retinopathy (10 eyes), and epiretinal membrane (13 eyes). All patients underwent retinal imaging with a color fundus camera (broad-spectrum white light), the Spectralis HRA-2 system (3-color monochromatic lasers), and the Optos P200 system (2-color monochromatic lasers). The Optos image was cropped to a similar size as the Spectralis image. Seven masked graders marked retinal pathologies in each image within a 5 × 5 grid that included the macula. The average area with detected retinal pathology in all eyes was larger in the Spectralis images compared with Optos images (32.4% larger, P < 0.0001), mainly because of better visualization of epiretinal membrane and retinal hemorrhage. The average detection rate of age-related macular degeneration and diabetic retinopathy pathologies was similar across the three modalities, whereas epiretinal membrane detection rate was significantly higher in the Spectralis images. Spectralis tricolor multispectral scanning laser ophthalmoscope imaging had higher rate of pathology detection primarily because of better epiretinal membrane and retinal hemorrhage visualization compared with Optos bicolor multispectral scanning laser ophthalmoscope imaging.

  9. 3D visualization and stereographic techniques for medical research and education.

    PubMed

    Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F

    2001-01-01

    While computers have been able to work with true 3D models for a long time, the same has not generally been true for their users. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist. (1) Stereo pairs: using image analysis tools or 3D software, a pair of images can be made, representing the left-eye and right-eye views of an object; placed next to each other and viewed through a separator, the three-dimensionality of the object can be perceived. While this is usually done on still images, tests at Mednet have shown it to work with interactively animated models as well; however, this technique requires some training and experience. (2) Pseudo-3D, such as VRML or QuickTime VR, where interactive manipulation of a 3D model lets the user gain a sense of the model's true proportions; while this works reasonably well, it is not a true stereographic visualization technique. (3) Red/green separation, i.e. the traditional "3D image", where red and green representations of a model are superimposed at an angle corresponding to the viewing angle of the eyes; by using a matching set of eyeglasses, a person can form a mental 3D image. The end result does produce a sense of 3D, but the effect is difficult to maintain. (4) Alternating left/right-eye systems (typified by the StereoGraphics CrystalEyes system), which let the computer display a "left eye" image followed by a "right eye" image while simultaneously triggering the eyepiece to alternately make one eye "blind"; when run at 60 Hz or higher, the brain fuses the left/right images together and the user effectively sees a 3D object. Depending on the configuration, alternating systems run at between 50 and 60 Hz, thereby creating a flickering effect that is strenuous for prolonged use. All of the above have one or more drawbacks, such as high cost, poor quality, or localized use. A fifth system, recently released by Barco Systems, modifies the CrystalEyes approach by projecting two superimposed images using polarized light, with the wave plane of the left image at a right angle to that of the right image; by using polarized glasses, each eye sees the appropriate image and true stereographic vision is achieved. While this system requires very expensive hardware, it solves some of the more important problems mentioned above, such as the capacity to use higher frame rates and the ability to display images to a large audience. Mednet has instigated a research project that uses reconstructed models from the central nervous system (human brain and basal ganglia, cortex, dendrites and dendritic spines) and peripheral nervous system (nodes of Ranvier and axoplasmic areas). The aim is to modify the models to fit the different visualization techniques mentioned above and to compare a group of users' perceived degree of 3D for each technique.
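
    As an illustration of the color-separation idea in method (3), a red/cyan anaglyph (a common variant of red/green separation) can be composed from a stereo pair: the red channel comes from the left-eye view and the green/blue channels from the right-eye view, so matching glasses route each view to one eye. The function below is a generic sketch assuming RGB channel order, not code from the project described above.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Compose a red/cyan anaglyph from left/right HxWx3 uint8 RGB images of equal size."""
    anaglyph = np.zeros_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]                 # red channel from the left view
    anaglyph[..., 1:] = right_rgb[..., 1:]              # green and blue from the right view
    return anaglyph
```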

  10. Study of multi-channel optical system based on the compound eye

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Fu, Yuegang; Liu, Zhiying; Dong, Zhengchao

    2014-09-01

    As an important part of machine vision, compound eye optical systems have the characteristics of high resolution and large FOV. By applying compound eye optical systems to target detection and recognition, the contradiction between large FOV and high resolution in traditional single-aperture optical systems can be resolved effectively, and the parallel processing ability of such optical systems can be fully exploited. In this paper, the imaging features of compound eye optical systems are analyzed. After discussing the relationship between the FOV of each subsystem and the overlap ratio of the FOV of the whole system, a method to define the FOV of the subsystems is presented, and a compound eye optical system is designed based on a large FOV synthesized from multiple channels. The compound eye optical system consists of a central optical system and an array subsystem, in which the array subsystem is used to capture the target and a high-resolution image of the target is acquired by the central optical system. With the advantages of small volume, light weight and rapid response, the optical system can detect objects at up to 3 km over a FOV of 60° without any scanning device. Objects in the central field (2ω = 5.1°) can be imaged with high resolution so that they can be recognized.

  11. Arthropod eye-inspired digital camera with unique imaging characteristics

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-06-01

    In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties because of their numerous potential applications. However, established semiconductor sensor technologies and optics are essentially planar, which poses great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned and integrated, and elastically transformed to hemispherical shapes. This imaging device demonstrates a nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. We illustrate key features of the operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  12. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    NASA Astrophysics Data System (ADS)

    Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li

    2014-09-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. To reduce subject discomfort and avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and arrested accommodation without using drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring (fixation) system was used to stimulate accommodation and fixate the imaging area, and the illumination sources and imaging camera were moved in tandem to focus on and image different layers. Four subjects with differing degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus, so the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were all imaged clearly.

  13. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to help trainees understand the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can reach the level achieved by experts in viewing CTC. We used an eye-gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which detects the pupil point and corneal reflection point by dark-pupil eye tracking and provides gaze-point images and Excel data files. The subjects were radiological technologists who were either experienced or inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation process using gaze-point data. Furthermore, we performed frequency analysis of the eye-tracking data using the fast Fourier transform (FFT). The frequency analysis allowed us to characterize the difference in gaze points between experts and trainees: the trainee's data contained large amounts of both high-frequency and low-frequency components, whereas both components were relatively low for the expert. Regarding the amount of eye movement in each 0.02-second interval, we found that the expert tended to interpret images slowly and calmly, while the trainee moved the eyes quickly and scanned wide areas. The difference in gaze points on CTC between experts and trainees can therefore be assessed with the eye-gaze point sensing system and frequency analysis, and gaze-point data can be used to evaluate potential improvements in CTC interpretation for trainees.
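
    The frequency analysis above is not specified in detail in the record, but a minimal sketch of the idea is easy to state: sample the gaze position at the reported 0.02 s interval, difference it to get the per-interval eye movement, and take an FFT. The Python below is an illustrative version under those assumptions; the synthetic example data are hypothetical.

```python
import numpy as np

def gaze_frequency_spectrum(positions, dt=0.02):
    """One-sided amplitude spectrum of the frame-to-frame gaze displacement.
    `positions` is a 1-D array of gaze coordinates (e.g. horizontal position)
    sampled every `dt` seconds; the record above uses 0.02 s intervals."""
    disp = np.diff(positions)                  # movement per sampling interval
    disp = disp - disp.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(disp.size, d=dt)   # in Hz
    return freqs, spectrum

# Synthetic example: slow reading drift plus a rapid 10 Hz search component.
t = np.arange(0.0, 10.0, 0.02)
x = 50.0 * t + 5.0 * np.sin(2.0 * np.pi * 10.0 * t)
freqs, spec = gaze_frequency_spectrum(x)
print(freqs[np.argmax(spec[1:]) + 1])          # dominant frequency, close to 10 Hz
```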

  14. Robust adaptive optics systems for vision science

    NASA Astrophysics Data System (ADS)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

    Adaptive optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and in eyes with optical changes to the anterior segment can be difficult; it requires control and imaging systems that remain resilient to scattering and occlusion from the cornea and lens, as well as to irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality across the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror, a MEMS mirror, and a single Shack-Hartmann (SH) sensor. The SH sensor samples an 8 mm exit pupil, and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot-quality metric is calculated in real time for each lenslet, and individual lenslets that do not meet the quality criterion are excluded from processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system therefore allows imaging even with small, irregular pupils, although sectioning performance decreases because the depth of field increases under these conditions. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors, which improves detection of retinal capillaries even when the confocal image is of poorer quality and contains both photoreceptors and blood vessels.
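
    The record does not state which spot-quality metric is used per lenslet, so the sketch below assumes a simple peak-to-total-energy ratio computed in a sub-window around each nominal spot position; lenslets falling below a threshold are dropped from the wavefront processing, mirroring the rejection step described above. The window size, threshold and array layout are assumptions.

```python
import numpy as np

def valid_lenslet_mask(sh_image, spot_centres, box=16, threshold=0.15):
    """Flag Shack-Hartmann lenslets whose focal spots look usable.
    `sh_image` is the raw SH camera frame and `spot_centres` an (N, 2) array
    of nominal spot positions (row, col). The quality metric here, peak
    intensity over total energy in the sub-window, is an assumed stand-in
    for whatever metric the instrument actually computes."""
    valid = np.zeros(len(spot_centres), dtype=bool)
    half = box // 2
    for i, (r, c) in enumerate(np.round(spot_centres).astype(int)):
        win = sh_image[max(r - half, 0):r + half, max(c - half, 0):c + half]
        total = win.sum()
        if total > 0 and win.max() / total > threshold:
            valid[i] = True
    return valid

# Lenslets flagged False would be dropped from the wavefront reconstruction,
# e.g. slopes[valid_lenslet_mask(frame, centres)] before the modal fit.
```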

  15. Optics of wide-angle panoramic viewing system-assisted vitreous surgery.

    PubMed

    Chalam, Kakarla V; Shah, Vinay A

    2004-01-01

    The purpose of this article is to describe the optics of a contact wide-angle lens system with a stereo reinverter for vitreous surgery. The panoramic viewing system is made up of two components: an indirect ophthalmoscopy lens system for viewing the fundus image, placed on the patient's cornea as a contact lens, and a separate removable prism system for reinversion of the image, mounted on the microscope above the zoom system. The system provides a 104-degree field of view in a phakic emmetropic eye with minification, which can be magnified by the operating microscope, and it permits a binocular stereoscopic view even through a pupil as small as 3 mm. In an air-filled phakic eye, the field of view increases to approximately 130 degrees. The image of the patient's fundus is reinverted by the reinversion system to form a true, erect, stereoscopic image. In conclusion, this system permits a wide-angle panoramic view of the surgical field. The contact lens neutralizes optical irregularities of the corneal surface and allows improved visualization in eyes with irregular astigmatism induced by corneal scars. Excellent visualization is achieved in complex clinical situations such as miotic pupils, lenticular opacities, and air-filled phakic eyes.

  16. Investigation of the isoplanatic patch and wavefront aberration along the pupillary axis compared to the line of sight in the eye

    PubMed Central

    Nowakowski, Maciej; Sheehan, Matthew; Neal, Daniel; Goncharov, Alexander V.

    2012-01-01

    Conventional optical systems usually provide the best image quality on axis, with an unavoidable gradual decrease in image quality towards the periphery of the field, and the optical system of the human eye is not an exception. Within a limiting boundary the image quality can be considered invariant with field angle; this region is known as the isoplanatic patch. We investigate the isoplanatic patch of eight healthy eyes and measure the wavefront aberration along the pupillary axis compared to the line of sight. The results are used to discuss methods of ocular aberration correction in wide-field retinal imaging, with particular application to multi-conjugate adaptive optics systems. PMID:22312578

  17. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. The system is a highly effective human-machine interface that detects head movement from the changing positions and number of light sources mounted on the head. When the user browses a computer screen with the head-mounted display, the system captures images of the user's eyes with CCD cameras, which also measure the angle and position of the light sources. In the eye-tracking system, software locates the center point of each pupil in the images and records the moving traces and pupil diameters. In the head-gesture measurement system, the user wears a double-source eyeglass frame, and a CCD camera in front of the user captures images of the head; the software locates the center point of the head and transfers it to screen coordinates, so that the user can control the cursor by head motion. We combine the eye-controlled and head-controlled human-machine interfaces for virtual reality applications.

  18. Enhanced Video-Oculography System

    NASA Technical Reports Server (NTRS)

    Moore, Steven T.; MacDougall, Hamish G.

    2009-01-01

    A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.

  19. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as patients with amyotrophic lateral sclerosis (ALS). The system uses a personal computer and a home video camera to detect eye gaze under natural light, detecting both vertical and horizontal gaze by simple image analysis without special image-processing units or sensors. We also developed a platform for eye-gaze input based on this system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of that platform. The proposed system uses direct indicator selection, with indicators categorized by function and organized hierarchically; users select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons, and edit boxes, and stores them so that the mouse cursor can skip directly to the next candidate input object. This enables web browsing at a faster pace.
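
    The "cursor skips to the candidate object" behaviour described above can be sketched as a nearest-target search over the stored locations of selectable page objects. The Python below is an illustrative toy version; the object names, coordinates and the plain Euclidean criterion are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str      # e.g. a hyperlink, radio button or edit box
    x: float       # screen coordinates of the object's centre
    y: float

def snap_to_nearest(gaze_x, gaze_y, targets):
    """Return the selectable page object closest to the estimated gaze point,
    mimicking the cursor-skipping behaviour described in the record above."""
    return min(targets, key=lambda t: (t.x - gaze_x) ** 2 + (t.y - gaze_y) ** 2)

page = [Target("link:news", 120, 80), Target("radio:yes", 300, 220),
        Target("editbox:search", 500, 40)]
print(snap_to_nearest(480, 55, page).name)   # -> editbox:search
```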

  20. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire the retinal image, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion in retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  1. Design of integrated eye tracker-display device for head mounted systems

    NASA Astrophysics Data System (ADS)

    David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.

    2009-08-01

    We propose an eye tracker/display system based on a novel dual-function device, termed the ETD, which allows the eye tracker and the display to share optical paths and supports on-chip processing. The proposed ETD design is based on a CMOS chip that combines Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near-infrared (NIR) active pixel sensor imager. In eye-tracking operation, the device captures NIR light back-reflected from the eye's retina; the retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, which provides low crosstalk while shielding the active pixel circuitry serving the imaging and display drivers from photo-charges generated in the substrate. The use of the ETD enables a very compact HMD design suitable for smart goggle applications. A preliminary optical, electronic and digital design of the goggle and its associated ETD chip and digital control are presented.

  2. Wide-angle camera with multichannel architecture using microlenses on a curved surface.

    PubMed

    Liang, Wei-Lun; Shen, Hui-Kai; Su, Guo-Dung J

    2014-06-10

    We propose a multichannel imaging system that combines the principles of an insect's compound eye and the human eye. The optical system enables a reduction in the track length of the imaging device to achieve miniaturization. The multichannel structure is realized with a curved microlens array, and a Hypergon lens is used as the main lens to mimic the human eye and achieve a large field of view (FOV). With this architecture, each microlens of the array transmits a segment of the overall FOV. The partial images are recorded in separate channels and stitched together by image processing to form the final image of the whole FOV. The design is 2.7 mm thick, with 59 channels; the 100°×80° full FOV is optimized using ZEMAX ray-tracing software on an image plane of 4.53 mm × 3.29 mm. Given recent progress in the fabrication of microlenses, this imaging system has the potential to be commercialized in the near future.

  3. Hybrid-modality ocular imaging using a clinical ultrasound system and nanosecond pulsed laser.

    PubMed

    Lim, Hoong-Ta; Matham, Murukeshan Vadakke

    2015-07-01

    Hybrid optical-modality imaging is a special type of multimodality imaging, increasingly used to harness the strengths of different imaging methods and to furnish complementary information beyond that provided by any individual method. We present a hybrid-modality imaging system based on a commercial clinical ultrasound imaging (USI) system with a linear-array ultrasound transducer (UST) and a tunable nanosecond pulsed laser as the source. The integrated system uses photoacoustic imaging (PAI) and USI for ocular imaging to provide complementary absorption and structural information about the eye. In this system, B-mode images from PAI and USI are acquired at 10 Hz and about 40 Hz, respectively; the linear-array UST makes the system much faster than other ocular imaging systems that form B-mode images with a single-element UST. The results show that the proposed instrumentation can incorporate PAI and USI in a single setup. The feasibility and efficiency of the developed probe system were demonstrated using enucleated pig eyes as test samples: PAI successfully captured photoacoustic signals from the iris, anterior lens surface, and posterior pole, while USI mapped structures of the eye including the cornea, anterior chamber, lens, iris, and posterior pole. This system and the proposed methodology are expected to enable ocular disease diagnostic applications and can be used as a preclinical imaging system.

  4. Bio-inspired hemispherical compound eye camera

    NASA Astrophysics Data System (ADS)

    Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.

    2014-03-01

    Compound eyes in arthropods demonstrate distinct imaging characteristics from human eyes, with wide angle field of view, low aberrations, high acuity to motion and infinite depth of field. Artificial imaging systems with similar geometries and properties are of great interest for many applications. However, the challenges in building such systems with hemispherical, compound apposition layouts cannot be met through established planar sensor technologies and conventional optics. We present our recent progress in combining optics, materials, mechanics and integration schemes to build fully functional artificial compound eye cameras. Nearly full hemispherical shapes (about 160 degrees) with densely packed artificial ommatidia were realized. The number of ommatidia (180) is comparable to those of the eyes of fire ants and bark beetles. The devices combine elastomeric compound optical elements with deformable arrays of thin silicon photodetectors, which were fabricated in the planar geometries and then integrated and elastically transformed to hemispherical shapes. Imaging results and quantitative ray-tracing-based simulations illustrate key features of operation. These general strategies seem to be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).

  5. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired with the Hartmann-Shack (HS) technique, in order to extract information for diagnosing refractive errors of the eye (astigmatism, hypermetropia and myopia). The analysis is carried out by an artificial intelligence system based on neural networks, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors using methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of the eye under examination from the same image used to detect refractive errors.

  6. A new omni-directional multi-camera system for high resolution surveillance

    NASA Astrophysics Data System (ADS)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high-resolution surveillance has a wide range of applications in defense and security. Early systems built for this purpose were based on parabolic mirrors or fisheye lenses, where distortion due to the nature of the optical elements cannot be avoided; moreover, in such systems the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach offers a novel way to construct a spherically arranged, wide-FOV plenoptic imaging system in which the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided, and new results are presented for a very-high-resolution visible-spectrum imaging and recording system inspired by the Panoptic approach. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video over a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capture has also been verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system, GigaEye-2, with significantly higher resolution and real-time processing capacity, is currently under development. The substantial capacity of GigaEye-1 opens the door to post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth-map estimation and high-dynamic-range imaging, which go beyond standard stitching and panorama generation.

  7. Automated Age-related Macular Degeneration screening system using fundus images.

    PubMed

    Kunumpol, P; Umpaipant, W; Kanchanaranya, N; Charoenpong, T; Vongkittirux, S; Kupakanjana, T; Tantibundhit, C

    2017-07-01

    This work proposes an automated screening system for age-related macular degeneration (AMD) that also distinguishes between the wet and dry types of AMD using fundus images, to assist ophthalmologists in eye disease screening and management. The algorithm employs contrast-limited adaptive histogram equalization (CLAHE) for image enhancement. Subsequently, the discrete wavelet transform (DWT) and locality sensitive discriminant analysis (LSDA) are used to extract features for a neural network model that classifies the results. The results showed that the proposed algorithm distinguished between normal eyes, dry AMD, and wet AMD with 98.63% sensitivity, 99.15% specificity, and 98.94% accuracy, suggesting promising potential as a medical support system for faster, lower-cost eye disease screening.
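
    A rough sketch of this pipeline can be assembled from standard libraries: OpenCV's CLAHE for enhancement, a 2-D discrete wavelet transform from PyWavelets for features, and an off-the-shelf neural network classifier. The LSDA step is replaced here by simple sub-band energy features, and all parameter values are assumptions, so this is only an illustration of the processing chain, not the authors' implementation.

```python
import cv2
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def amd_features(gray):
    """CLAHE enhancement followed by a one-level 2-D DWT; the feature vector
    is the mean absolute value of each wavelet sub-band. `gray` is an 8-bit
    fundus channel (e.g. the green channel). Simplified stand-in for the
    DWT + LSDA features described in the record above."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    cA, (cH, cV, cD) = pywt.dwt2(enhanced.astype(float), "haar")
    return np.array([np.mean(np.abs(band)) for band in (cA, cH, cV, cD)])

# Hypothetical training step: labels 0 = normal, 1 = dry AMD, 2 = wet AMD.
def train(fundus_images, labels):
    X = np.vstack([amd_features(img) for img in fundus_images])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    return clf.fit(X, labels)
```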

  8. Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI

    PubMed Central

    Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.

    2017-01-01

    Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
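
    The golden-angle radial idea behind the freely selectable frame rate can be illustrated with a few lines of Python: successive k-space spokes are rotated by about 111.25°, and frames of any chosen temporal width are formed retrospectively by grouping consecutive spokes. The repetition time used in the example is an assumed value, not one taken from the record.

```python
import numpy as np

GOLDEN_ANGLE = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

def spoke_angles(n_spokes):
    """Azimuthal angle of each golden-angle radial spoke (degrees, mod 180)."""
    return (np.arange(n_spokes) * GOLDEN_ANGLE) % 180.0

def frames_from_spokes(n_spokes, tr_ms, frame_ms):
    """Group consecutively acquired spokes into retrospective frames of a
    chosen temporal width. tr_ms (time per spoke) is an assumed sequence
    parameter, not a value taken from the record above."""
    spokes_per_frame = max(int(frame_ms // tr_ms), 1)
    acquired = np.arange(n_spokes)
    return [acquired[i:i + spokes_per_frame]
            for i in range(0, n_spokes, spokes_per_frame)]

# E.g. with an assumed TR of 4.75 ms, a 57 ms frame groups 12 spokes.
print(len(frames_from_spokes(1200, 4.75, 57.0)[0]))   # -> 12
```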

  9. Microcapillary imaging of lamina cribrosa in porcine eyes using photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Moothanchery, Mohesh; Chuangsuwanich, Thanadet; Yan, Alvan Tsz Chung; Schmetterer, Leopold; Girard, Michael J. A.; Pramanik, Manojit

    2018-02-01

    In order to understand the pathophysiology of glaucoma, lamina cribrosa (LC) perfusion needs to be investigated thoroughly. It is currently difficult to obtain high-resolution images of the embedded microcapillary network of the LC using conventional imaging techniques. In this study, an optical-resolution photoacoustic microscopy (OR-PAM) system was used to image the lamina cribrosa of an ex vivo porcine eye. An extrinsic contrast agent was perfused into the eye via its ciliary arteries. The OR-PAM system has a lateral resolution of 4 μm and an axial resolution of 30 μm, and it was able to resolve the perfused LC microcapillary network, revealing the vascular structure within the LC thickness. OR-PAM is therefore a promising imaging modality for studying LC perfusion and could help elucidate the hemodynamic aspects of glaucoma.

  10. Rotational symmetric HMD with eye-tracking capability

    NASA Astrophysics Data System (ADS)

    Liu, Fangfang; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian

    2016-10-01

    As an important auxiliary function of head-mounted displays (HMDs), eye tracking plays a significant role in intelligent human-machine interaction. In this paper, an eye-tracking HMD (ET-HMD) system is designed based on a rotationally symmetric optical system. The tracking principle is pupil-corneal reflection. The ET-HMD system comprises three optical paths, for virtual display, infrared illumination, and eye tracking. The display optics, consisting of four spherical lenses, is shared by the three paths; for the eye-tracking path, an extra imaging lens is added to match the image sensor and achieve eye tracking. The display optics provides users with a 40° diagonal FOV on a 0.61-inch OLED, a 19 mm eye clearance, and a 10 mm exit pupil diameter. The eye-tracking path can capture a 15 mm × 15 mm area of the user's eye. The average MTF is above 0.1 at 26 lp/mm for the display path and exceeds 0.2 at 46 lp/mm for the eye-tracking path. Eye illumination was simulated in LightTools with an eye model and an 850 nm near-infrared LED (NIR-LED); the simulation shows that the NIR-LED illumination, delivered through the display optics, covers the eye-model area sufficiently for eye tracking. An integrated HMD optical system with eye tracking can help improve the user experience of HMDs.

  11. A laser-based eye-tracking system.

    PubMed

    Irie, Kenji; Wilson, Bruce A; Jones, Richard D; Bones, Philip J; Anderson, Tim J

    2002-11-01

    This paper reports on the development of a new eye-tracking system for noninvasive recording of eye movements. The eye tracker uses a flying-spot laser to selectively image landmarks on the eye and, subsequently, measure horizontal, vertical, and torsional eye movements. Considerable work was required to overcome the adverse effects of specular reflection of the flying-spot from the surface of the eye onto the sensing elements of the eye tracker. These effects have been largely overcome, and the eye-tracker has been used to document eye movement abnormalities, such as abnormal torsional pulsion of saccades, in the clinical setting.

  12. Research on the liquid crystal adaptive optics system for human retinal imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Tong, Shoufeng; Song, Yansong; Zhao, Xin

    2013-12-01

    The retina is the only site in the human body where blood vessels can be observed directly. Many diseases whose early symptoms are not obvious can be diagnosed by observing changes in the distal microvasculature. In order to obtain high-resolution human retinal images, an adaptive optics system for correcting the aberrations of the human eye was designed using a Shack-Hartmann wavefront sensor and a liquid crystal spatial light modulator (LCSLM). For a subject eye with 8 m⁻¹ (8 D) myopia, the wavefront error was reduced to 0.084λ PV and 0.12λ RMS after adaptive optics (AO) correction, reaching the diffraction limit. The results show that the LCSLM-based AO system can correct the aberrations of the human eye efficiently, allowing otherwise blurred photoreceptor cells to be imaged clearly on a CCD camera.

  13. Artificial eye for in vitro experiments of laser light interaction with aqueous media

    NASA Astrophysics Data System (ADS)

    Cain, Clarence P.; Noojin, Gary D.; Hammer, Daniel X.; Thomas, Robert J.; Rockwell, Benjamin A.

    1997-01-01

    An artificial eye has been designed and assembled that mimics the focusing geometry of the living eye. The artificial eye's focusing characteristics are measured and compared with those of the in vivo system, and the artificial eye is used to measure several nonlinear optical phenomena that may affect retinal laser damage thresholds for ultrashort laser pulses. We chose a focal length of 17 mm to simulate the rhesus monkey eye, with a visual cone angle of 8.4 deg for a 2.5-mm-diameter input laser beam. The measured focal-point image diameter was 5.6 ± 1 μm, which is 1.5 times the calculated diffraction-limited image diameter. This focusing system had the best M² of all the systems evaluated. We used the artificial eye to measure the thresholds for laser-induced breakdown, stimulated Brillouin scattering, supercontinuum generation, and pulse temporal broadening due to group velocity dispersion.

  14. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    PubMed

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

    Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system, and aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images of a model eye and of five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It may also relax constraints on centering the subject's pupil and on the shape of the pupil.
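
    A minimal sketch of the pupil-detection step, under the assumption that a per-lenslet quality value is already available on the Shack-Hartmann grid, is shown below: threshold the quality map to obtain a pupil mask and trace its row-wise extent. The metric, threshold and synthetic pupil are illustrative assumptions rather than the published algorithm.

```python
import numpy as np

def pupil_mask_from_spot_quality(quality_map, threshold=0.2):
    """Boolean mask of lenslets considered to lie inside the subject's pupil,
    given a 2-D map of per-lenslet spot-quality values. The threshold and
    the metric itself are assumptions; the record above does not give the
    exact criterion."""
    return quality_map > threshold

def pupil_boundary(mask):
    """Row-wise left/right extent of the detected pupil: a crude outline that
    could be used to restrict the AO control loop to the real pupil."""
    rows = []
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:
            rows.append((r, cols[0], cols[-1]))
    return rows

# Example with a synthetic, slightly irregular pupil on a 20x20 lenslet grid.
yy, xx = np.mgrid[0:20, 0:20]
quality = np.exp(-(((xx - 9.0) / 6.0) ** 2 + ((yy - 10.0) / 5.0) ** 2))
mask = pupil_mask_from_spot_quality(quality, threshold=0.5)
print(len(pupil_boundary(mask)))    # number of lenslet rows inside the pupil
```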

  15. GeoEye™ Corporate Overview

    NASA Technical Reports Server (NTRS)

    Jones, Dennis

    2007-01-01

    This viewgraph presentation gives a corporate overview of GeoEye, the world's largest commercial remote sensing company. The contents include: 1) About GeoEye; 2) GeoEye Mission; 3) The Company; 4) Company Summary; 5) U.S. Government Commitment; 6) GeoEye Constellation; 7) Other Imaging Resources; 8) OrbView-3 & OrbView-2; 9) OrbView-3 System Architecture; 10) OrbView-3; 11) OrbView-2; 12) IKONOS; 13) Largest Image Archive in the World; 14) GeoEye-1; 15) Best-In-Class Development Team; 16) Highest Performance Available in the Commercial Market; and 17) Key Themes.

  16. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

    Purpose Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods A high-speed line camera was employed to acquire the retinal image, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion in retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  17. Eye vision system using programmable micro-optics and micro-electronics

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Amin, M. Junaid; Riza, Mehdi N.

    2014-02-01

    Proposed is a novel eye vision system that combines advanced micro-optic and microelectronic technologies, including programmable micro-optic devices, pico-projectors, radio-frequency (RF) and optical wireless communication and control links, energy harvesting and storage devices, and remote wireless energy transfer. This portable, lightweight system can measure eye refractive powers, optimize light conditions for the eye under test, conduct color-blindness tests, and implement eye-strain relief and eye-muscle exercises via time-sequenced imaging. Described are the basic design of the proposed system and first-stage experimental results for spherical refractive error correction.

  18. Escaping compound eye ancestry: the evolution of single-chamber eyes in holometabolous larvae.

    PubMed

    Buschbeck, Elke K

    2014-08-15

    Stemmata, the eyes of holometabolous insect larvae, have gained little attention, even though they exhibit remarkably different optical solutions, ranging from compound eyes with upright images, to sophisticated single-chamber eyes with inverted images. Such optical differences raise the question of how major transitions may have occurred. Stemmata evolved from compound eye ancestry, and optical differences are apparent even in some of the simplest systems that share strong cellular homology with adult ommatidia. The transition to sophisticated single-chamber eyes occurred many times independently, and in at least two different ways: through the fusion of many ommatidia [as in the sawfly (Hymenoptera)], and through the expansion of single ommatidia [as in tiger beetles (Coleoptera), antlions (Neuroptera) and dobsonflies (Megaloptera)]. Although ommatidia-like units frequently have multiple photoreceptor layers (tiers), sophisticated image-forming stemmata tend to only have one photoreceptor tier, presumably a consequence of the lens only being able to efficiently focus light on to one photoreceptor layer. An interesting exception is found in some diving beetles [Dytiscidae (Coleoptera)], in which two retinas receive sharp images from a bifocal lens. Taken together, stemmata represent a great model system to study an impressive set of optical solutions that evolved from a relatively simple ancestral organization. © 2014. Published by The Company of Biologists Ltd.

  19. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points with a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). The method automatically optimizes the original registration parameters and avoids the manual interventions required by control point-based registration methods. First, a rigorous registration model between the LiDAR points and the panoramic/fish-eye image is built. Second, skyline pixels are extracted from the panoramic/fish-eye images based on differences in pixel values, and skyline points are extracted from the MMS's LiDAR points using the registration model. Third, a brute-force optimization searches for the matching parameters that best align the skyline pixels with the skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method on a sequence of panoramic/fish-eye images. The results showed that: (1) the panoramic/fish-eye image registration model is effective and achieves high-precision registration of the image with the MMS's LiDAR points; (2) the skyline-based registration method automatically optimizes the initial attitude parameters, achieving high-precision registration of a panoramic/fish-eye image with the MMS's LiDAR points; and (3) the attitude correction values differ between images in the sequence and must be solved for one by one. PMID:29883431
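
    The brute-force search in the third step can be sketched in simplified form by treating both skylines as elevation-versus-azimuth profiles (sampled here at 1° steps over 360°) and scanning small yaw and pitch corrections for the best fit. The full method also optimizes roll and works through the rigorous camera model, so the Python below is only an illustration of the search strategy.

```python
import numpy as np

def brute_force_skyline_match(img_skyline, lidar_skyline,
                              yaw_range_deg=5.0, pitch_range_deg=2.0,
                              step_deg=0.1):
    """Grid search over small yaw/pitch corrections that best align the
    LiDAR-derived skyline with the image-derived one. Both inputs are
    elevation profiles (degrees) sampled once per degree of azimuth over
    360 degrees; this 1-D panoramic setup is a simplifying assumption."""
    best = (np.inf, 0.0, 0.0)
    az = np.arange(img_skyline.size, dtype=float)        # azimuth in degrees
    yaws = np.arange(-yaw_range_deg, yaw_range_deg + step_deg, step_deg)
    pitches = np.arange(-pitch_range_deg, pitch_range_deg + step_deg, step_deg)
    for yaw in yaws:
        # shift the LiDAR skyline in azimuth, wrapping around 360 degrees
        shifted = np.interp(az + yaw, az, lidar_skyline, period=360.0)
        for pitch in pitches:
            cost = np.mean((img_skyline - (shifted + pitch)) ** 2)
            if cost < best[0]:
                best = (cost, yaw, pitch)
    return best   # (residual, yaw correction, pitch correction) in degrees
```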

  20. Optical quality of the living cat eye

    PubMed Central

    Bonds, A. B.

    1974-01-01

    1. The optical quality of the living cat eye was measured under conditions similar to those of cat retinal ganglion cell experiments by recording the aerial image of a nearly monochromatic thin line of light. 2. Experiments were performed to assess the nature of the fundal reflexion of the cat eye, which was found to behave essentially as a diffuser. 3. The optical Modulation Transfer Function (MTF) was calculated from the measured aerial linespread using Fourier mathematics; the MTF of a 'typical' cat eye was averaged from data collected from ten eyes. 4. The state of focus of the optical system, the pupil size and the angle of the light incident on the eye were all varied to determine their effect on image quality. 5. By using an image rotator, the aerial linespread was measured for several orientations of the line; these measurements yielded an approximation of the two-dimensional pointspread completely characterizing the optical system. 6. Evidence is reviewed to show that the optical resolution of the cat, albeit some 3-5 times worse than that of human, appears to be better than the neural resolution of its retina and its visual system as a whole. PMID:4449081

  1. Optical quality of the living cat eye.

    PubMed

    Bonds, A B

    1974-12-01

    1. The optical quality of the living cat eye was measured under conditions similar to those of cat retinal ganglion cell experiments by recording the aerial image of a nearly monochromatic thin line of light. 2. Experiments were performed to assess the nature of the fundal reflexion of the cat eye, which was found to behave essentially as a diffuser. 3. The optical Modulation Transfer Function (MTF) was calculated from the measured aerial linespread using Fourier mathematics; the MTF of a 'typical' cat eye was averaged from data collected from ten eyes. 4. The state of focus of the optical system, the pupil size and the angle of the light incident on the eye were all varied to determine their effect on image quality. 5. By using an image rotator, the aerial linespread was measured for several orientations of the line; these measurements yielded an approximation of the two-dimensional pointspread completely characterizing the optical system. 6. Evidence is reviewed to show that the optical resolution of the cat, albeit some 3-5 times worse than that of human, appears to be better than the neural resolution of its retina and its visual system as a whole.
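
    The MTF calculation referred to in point 3 of the two records above is straightforward to reproduce: normalize the measured line-spread function, take the magnitude of its Fourier transform, and normalize to unity at zero frequency. The sketch below uses an illustrative Gaussian line spread, not the cat-eye data.

```python
import numpy as np

def mtf_from_linespread(lsf, sample_spacing_um):
    """Modulation transfer function from a measured line-spread function:
    normalize the LSF, take the magnitude of its Fourier transform, and
    normalize to unity at zero spatial frequency. Frequencies are returned
    in cycles per mm."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing_um / 1000.0)  # cyc/mm
    return freqs, mtf

# Example with a Gaussian line spread of ~25 um standard deviation,
# sampled every 2 um (values are illustrative only).
x = np.arange(-200, 200, 2.0)
lsf = np.exp(-x ** 2 / (2 * 25.0 ** 2))
freqs, mtf = mtf_from_linespread(lsf, 2.0)
```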

  2. Phototaxis and the origin of visual eyes

    PubMed Central

    Randel, Nadine

    2016-01-01

    Vision allows animals to detect spatial differences in environmental light levels. High-resolution image-forming eyes evolved from low-resolution eyes via increases in photoreceptor cell number, improvements in optics and changes in the neural circuits that process spatially resolved photoreceptor input. However, the evolutionary origins of the first low-resolution visual systems have been unclear. We propose that the lowest resolving (two-pixel) visual systems could initially have functioned in visual phototaxis. During visual phototaxis, such elementary visual systems compare light on either side of the body to regulate phototactic turns. Another, even simpler and non-visual strategy is characteristic of helical phototaxis, mediated by sensory–motor eyespots. The recent mapping of the complete neural circuitry (connectome) of an elementary visual system in the larva of the annelid Platynereis dumerilii sheds new light on the possible paths from non-visual to visual phototaxis and to image-forming vision. We outline an evolutionary scenario focusing on the neuronal circuitry to account for these transitions. We also present a comprehensive review of the structure of phototactic eyes in invertebrate larvae and assign them to the non-visual and visual categories. We propose that non-visual systems may have preceded visual phototactic systems in evolution that in turn may have repeatedly served as intermediates during the evolution of image-forming eyes. PMID:26598725

  3. The vestibular-related frontal cortex and its role in smooth-pursuit eye movements and vestibular-pursuit interactions

    PubMed Central

    Fukushima, Junko; Akao, Teppei; Kurkin, Sergei; Kaneko, Chris R.S.; Fukushima, Kikuro

    2006-01-01

    In order to see clearly when a target is moving slowly, primates with high acuity foveae use smooth-pursuit and vergence eye movements. The former rotates both eyes in the same direction to track target motion in frontal planes, while the latter rotates left and right eyes in opposite directions to track target motion in depth. Together, these two systems pursue targets precisely and maintain their images on the foveae of both eyes. During head movements, both systems must interact with the vestibular system to minimize slip of the retinal images. The primate frontal cortex contains two pursuit-related areas; the caudal part of the frontal eye fields (FEF) and supplementary eye fields (SEF). Evoked potential studies have demonstrated vestibular projections to both areas and pursuit neurons in both areas respond to vestibular stimulation. The majority of FEF pursuit neurons code parameters of pursuit such as pursuit and vergence eye velocity, gaze velocity, and retinal image motion for target velocity in frontal and depth planes. Moreover, vestibular inputs contribute to the predictive pursuit responses of FEF neurons. In contrast, the majority of SEF pursuit neurons do not code pursuit metrics and many SEF neurons are reported to be active in more complex tasks. These results suggest that FEF- and SEF-pursuit neurons are involved in different aspects of vestibular-pursuit interactions and that eye velocity coding of SEF pursuit neurons is specialized for the task condition. PMID:16917164

  4. Eye gazing direction inspection based on image processing technique

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Song, Yong

    2005-02-01

    According to results from neurobiology, human eyes achieve high resolution only at the center of the field of view. In our research on a virtual reality (VR) helmet, we aim to detect the gaze direction of the eyes in real time and feed it back to the control system so that the resolution of the rendered image can be increased at the center of the field of view. With current display hardware, this method can reconcile the field of view of the virtual scene with resolution and greatly improve the immersion of the virtual system. Detecting the gaze direction rapidly and accurately is therefore the basis for realizing this novel VR helmet design. In this paper, the conventional method of gaze-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, we propose a method based on image processing to detect and determine the gaze direction. The locations of the pupils and the shapes of the eye sockets change with gaze direction; by analyzing these changes in the eye images captured by the cameras, the gaze direction can be determined. Experiments were performed to validate the efficiency of the method by analyzing the images. The algorithm detects gaze direction directly from ordinary eye images and eliminates the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
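
    A minimal version of the image-analysis step, assuming a dark-pupil image and a fixed intensity threshold, is sketched below with OpenCV: threshold out the dark pupil region and take its centroid as the pupil centre. The threshold value and the gaze mapping are assumptions; the record does not give the authors' exact procedure.

```python
import cv2

def pupil_centre(gray_eye_image, threshold=40):
    """Estimate the pupil centre as the centroid of the dark pupil region,
    the kind of simple image analysis the record above relies on instead of
    Purkinje-spot hardware. The fixed threshold is an assumption; a real
    system would adapt it to the illumination."""
    _, dark = cv2.threshold(gray_eye_image, threshold, 255, cv2.THRESH_BINARY_INV)
    dark = cv2.medianBlur(dark, 5)                 # suppress lash/noise pixels
    m = cv2.moments(dark, binaryImage=True)
    if m["m00"] == 0:
        return None                                # no dark region found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) in pixels

# Comparing this centre with the eye-socket outline over time would give the
# coarse gaze-direction signal fed back to the VR helmet's control system.
```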

  5. Computer-Aided Diagnosis of Anterior Segment Eye Abnormalities using Visible Wavelength Image Analysis Based Machine Learning.

    PubMed

    S V, Mahesh Kumar; R, Gunasundari

    2018-06-02

    Eye disease is a major health problem among elderly people. Cataract and corneal arcus are the major abnormalities of the anterior segment of the eye in aged people, so computer-aided diagnosis of anterior segment eye abnormalities would be helpful for mass screening and grading in ophthalmology. In this paper, we propose a multiclass computer-aided diagnosis (CAD) system that uses visible wavelength (VW) eye images to diagnose anterior segment eye abnormalities. In the proposed method, the input VW eye images are pre-processed to remove specular reflections, and the iris circle region is segmented using a circular Hough transform (CHT)-based approach. First-order statistical features and wavelet-based features are extracted from the segmented iris circle and used for classification with a support vector machine (SVM) trained by the sequential minimal optimization (SMO) algorithm. In experiments with 228 VW eye images belonging to three classes of anterior segment eye abnormalities, the proposed method achieved a predictive accuracy of 96.96% with 97% sensitivity and 99% specificity. The experimental results show that the proposed method has significant potential for clinical use.
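
    An illustrative sketch of this pipeline using common libraries is given below: a circular Hough transform (OpenCV) to segment the iris, simple first-order statistics as features, and an SVM classifier standing in for the SMO-trained SVM. The radius limits, feature set and kernel are assumptions, so this shows the shape of the method rather than the published implementation.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def segment_iris(gray):
    """Locate the iris circle with a circular Hough transform; the radius
    limits are assumptions for a typical anterior-segment photograph."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                               param1=100, param2=40, minRadius=40, maxRadius=150)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    mask = np.zeros_like(gray)
    cv2.circle(mask, (int(x), int(y)), int(r), 255, thickness=-1)
    return cv2.bitwise_and(gray, mask), (x, y, r)

def first_order_stats(region):
    """Simple first-order statistics (mean, std, skewness) of the iris region."""
    vals = region[region > 0].astype(float)
    std = vals.std() + 1e-9
    return np.array([vals.mean(), vals.std(),
                     ((vals - vals.mean()) ** 3).mean() / std ** 3])

# The SVC below plays the role of the SMO-trained SVM in the record above.
def train(iris_regions, labels):
    X = np.vstack([first_order_stats(r) for r in iris_regions])
    return SVC(kernel="rbf").fit(X, labels)
```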

  6. Multispectral imaging with vertical silicon nanowires

    PubMed Central

    Park, Hyunsung; Crozier, Kenneth B.

    2013-01-01

    Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156

  7. iDEAS: A web-based system for dry eye assessment.

    PubMed

    Remeseiro, Beatriz; Barreira, Noelia; García-Resúa, Carlos; Lira, Madalena; Giráldez, María J; Yebra-Pimentel, Eva; Penedo, Manuel G

    2016-07-01

    Dry eye disease is a public health problem whose multifactorial etiology challenges clinicians and researchers, making collaboration between different experts and centers necessary. Evaluation of the interference patterns observed in the tear film lipid layer is a common clinical test for dry eye diagnosis; however, it is a time-consuming task with a high degree of intra- and inter-observer variability, which makes a computer-based analysis system highly desirable. This work introduces iDEAS (Dry Eye Assessment System), a web-based application to support dry eye diagnosis. iDEAS provides a framework for eye care experts to work collaboratively using image-based services in a distributed environment. It is composed of three main components: the web client for user interaction, the web application server for request processing, and the service module for image analysis. Specifically, this manuscript presents two automatic services: tear film classification, which assigns an image to one interference pattern, and tear film map, which illustrates the distribution of the patterns over the entire tear film. iDEAS has been evaluated by specialists from different institutions, and both services were assessed with a set of performance metrics using the annotations of different experts; the processing time of both services was also measured for efficiency purposes. iDEAS provides a fast, reliable environment for dry eye assessment, allowing practitioners to share images, clinical information and automatic assessments between remote computers. Additionally, it saves time for experts, reduces inter-expert variability, and can be used in both clinical and research settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. A high-resolution 3D ultrasonic system for rapid evaluation of the anterior and posterior segment.

    PubMed

    Peyman, Gholam A; Ingram, Charles P; Montilla, Leonardo G; Witte, Russell S

    2012-01-01

    Traditional ultrasound imaging systems for ophthalmology employ slow, mechanical scanning of a single-element ultrasound transducer. The goal was to demonstrate rapid examination of the anterior and posterior segment with a three-dimensional (3D) commercial ultrasound system incorporating high-resolution linear probe arrays. The 3D images of the porcine eye were generated in approximately 10 seconds by scanning one of two commercial linear arrays (25 and 50 MHz). Healthy enucleated pig eyes were compared with eyes with induced injury or with a foreign material (e.g., metal) in place. Rapid volumetric imaging was also demonstrated in one human eye in vivo. The 50-MHz probe provided exquisite volumetric images of the anterior segment at depths up to 15 mm with an axial resolution of 30 μm. The 25-MHz probe provided a larger field of view (lateral × depth: 20 × 30 mm), sufficient for capturing the entire anterior and posterior segments of the pig eye, at a resolution of 60 μm. A 50-MHz scan through the human eyelid showed detailed structures of the Meibomian glands, cilia, cornea, and anterior segment back to the posterior capsule. The 3D system, with its high-frequency ultrasound arrays, fast data acquisition, and volume-rendering capability, shows promise for investigating anterior and posterior structures of the eye. Copyright 2012, SLACK Incorporated.

  9. Frequency dependence and passive drains in fish-eye lenses

    NASA Astrophysics Data System (ADS)

    Quevedo-Teruel, O.; Mitchell-Thomas, R. C.; Hao, Y.

    2012-11-01

    The Maxwell fish eye lens has previously been reported as being capable of the much sought after phenomenon of subwavelength imaging. The inclusion of a drain in this system is considered crucial to the imaging ability, although its role is the topic of much debate. This paper provides a numerical investigation into a practical implementation of a drain in such systems, and analyzes the strong frequency dependence of both the Maxwell fish eye lens and an alternative, the Miñano lens. The imaging capability of these types of lens is questioned, and it is supported by simulations involving various configurations of drain arrays. Finally, a discussion of the near-field and evanescent wave contribution is given.

  10. Study of optical design of three-dimensional digital ophthalmoscopes.

    PubMed

    Fang, Yi-Chin; Yen, Chih-Ta; Chu, Chin-Hsien

    2015-10-01

    This study uses optical zoom structures to design a three-dimensional (3D) human-eye optical sensory system operating with infrared and visible light. According to experimental data on two-dimensional (2D) and 3D images, human-eye recognition of 3D images is substantially higher (by approximately 13.182%) than that of 2D images, so 3D images are more effective than 2D images in applications that demand high recognition. In the optical system design, infrared and visible wavebands were incorporated as light sources in the simulations. The results can be used to facilitate the design of optical systems suitable for 3D digital ophthalmoscopes.

  11. Image-size differences worsen stereopsis independent of eye position

    PubMed Central

    Vlaskamp, Björn N. S.; Filippini, Heather R.; Banks, Martin S.

    2010-01-01

    With the eyes in forward gaze, stereo performance worsens when one eye’s image is larger than the other’s. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference. PMID:19271927
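
    The local cross-correlation model referred to above can be sketched as a windowed normalized cross-correlation search over horizontal offsets; when one eye's image is magnified relative to the other, the correlation peak weakens, which is the proposed source of the performance loss. The window size and disparity range below are assumptions.

```python
import numpy as np

def local_disparity(left, right, x, y, win=7, max_disp=20):
    """Estimate horizontal disparity at pixel (x, y) by normalized
    cross-correlation of a small window, a minimal version of the local
    cross-correlation model referred to in the record above."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_d, best_c = 0, -np.inf
    for d in range(-max_disp, max_disp + 1):
        cand = right[y - h:y + h + 1, x - h + d:x + h + 1 + d].astype(float)
        if cand.shape != patch.shape:
            continue                       # candidate window off the image
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        c = (patch * cand).mean()          # normalized correlation score
        if c > best_c:
            best_d, best_c = d, c
    return best_d, best_c
```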

  12. Optical coherence tomography of the preterm eye: from retinopathy of prematurity to brain development

    PubMed Central

    Rothman, Adam L; Mangalesh, Shwetha; Chen, Xi; Toth, Cynthia A

    2016-01-01

    Preterm infants with retinopathy of prematurity are at increased risk of poor neurodevelopmental outcomes. Because the neurosensory retina is an extension of the central nervous system, anatomic abnormalities in the anterior visual pathway often relate to systemic and central nervous system health. We describe optical coherence tomography as a powerful imaging modality that has recently been adapted to the infant population and provides noninvasive, high-resolution, cross-sectional imaging of the infant eye at the bedside. Optical coherence tomography has increased understanding of normal eye development and has identified several potential biomarkers of brain abnormalities and poorer neurodevelopment. PMID:28539807

  13. A novel role for visual perspective cues in the neural computation of depth.

    PubMed

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  14. Instant electronic imaging systems are superior to Polaroid at detecting sight-threatening diabetic retinopathy.

    PubMed

    Ryder, R E; Kong, N; Bates, A S; Sim, J; Welch, J; Kritzinger, E E

    1998-03-01

    Polaroid photography in diabetic retinopathy screening allows instant image availability to enhance the results of ophthalmoscopy. Retinal cameras are now being developed which use video/digital imaging techniques to produce an instant enlarged retinal image on a computer monitor screen. We aimed to compare one such electronic imaging system, attached to a Canon CR5 45NM, with standard Polaroid retinal photography. Two hundred and thirteen eyes from 107 diabetic patients were photographed through dilated pupils by both systems in random order and the images were analysed blind. Diabetic retinopathy was present in 58 eyes of which 55/58 (95%) were detected on the electronic image and only 49/58 (84%) on the Polaroid. Of 34 eyes requiring ophthalmologist referral according to standard European criteria, 34/34 (100%) were detected on the electronic image and only 24/34 (71%) on the Polaroid. Side by side comparisons showed electronic imaging to be superior to Polaroid at lesion detection. Using linear analogue scales, the patients assessed the electronic imaging photographic flash as less uncomfortable than the Polaroid equivalent (p < 0.0001). Other advantages of electronic imaging include: ready storage of the images with other patient clinical data on the diabetes computerized register/database; potential for image enhancement and analysis using image analysis software and electronic transfer of images to ophthalmologist or general practitioner. Electronic imaging systems represent a potential major advance for the improvement of diabetic retinopathy screening.

  15. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
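
    As a concrete illustration of the basic toolbox mentioned above (thresholding and centroid estimation applied to pupil images), here is a minimal Python sketch; the quantile threshold and weighting scheme are illustrative assumptions, not the authors' processing pipeline.

        import numpy as np

        def pupil_center(frame, dark_fraction=0.05):
            """Estimate the pupil center in a grayscale video frame by
            thresholding the darkest pixels and taking their intensity-weighted
            centroid. `frame` is a 2D array; `dark_fraction` is the fraction of
            pixels assumed to belong to the dark pupil (illustrative default)."""
            thresh = np.quantile(frame, dark_fraction)
            mask = frame <= thresh
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                return None
            # Weight darker pixels more heavily so the centroid favors the pupil core.
            weights = (thresh - frame[ys, xs]) + 1e-9
            cx = np.average(xs, weights=weights)
            cy = np.average(ys, weights=weights)
            return cx, cy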

  16. Adaptive optics-assisted optical coherence tomography for imaging of patients with age related macular degeneration

    NASA Astrophysics Data System (ADS)

    Sudo, Kenta; Cense, Barry

    2013-03-01

    We developed an optical coherence tomography (OCT) prototype with a sample arm that uses a 3.4 mm beam, which is considerably larger than the 1.2 to 1.5 mm beam that is used in commercialized OCT systems. The system is equipped with adaptive optics (AO), and to distinguish it from traditional AO-OCT systems with a larger 6 mm beam we have coined this concept AO-assisted OCT. Compared to commercialized OCT systems, the 3.4 mm aperture combined with AO improves light collection efficiency and imaging lateral resolution. In this paper, the performance of the AO-assisted OCT system was compared to that of a standard OCT system and demonstrated for imaging of age-related macular degeneration (AMD). Measurements were performed on the retinas of three human volunteers with healthy eyes and on one eye of a patient diagnosed with AMD. The AO-assisted OCT system imaged retinal structures of healthy human eyes and a patient eye affected by AMD with higher lateral resolution and a 9° by 9° field of view. This combination of a large isoplanatic patch and high lateral resolution can be expected to fill a gap between standard OCT with a 1.2 mm beam and conventional AO-OCT with a 6 mm beam and a 1.5° by 1.5° isoplanatic patch.

  17. Real-Time Confocal Imaging Of The Living Eye

    NASA Astrophysics Data System (ADS)

    Jester, James V.; Cavanagh, H. Dwight; Essepian, John; Shields, William J.; Lemp, Michael A.

    1989-12-01

    In 1986, we adapted the Tandem Scanning Reflected Light Microscope of Petran and Hadraysky to permit non-invasive, confocal imaging of the living eye in real-time. We were the first to obtain stable, confocal optical sections in vivo from human and animal eyes. Using confocal imaging systems we have now studied living, normal volunteers, rabbits, cats and primates sequentially, non-invasively, and in real-time. The continued development of real-time confocal imaging systems will unlock the door to a new field of cell biology involving, for the first time, the study of dynamic cellular processes in living organ systems. Towards this end we have concentrated our initial studies on three areas: (1) evaluation of confocal microscope systems for real-time image acquisition; (2) studies of the living normal cornea (epithelium, stroma, endothelium) in human and other species; and (3) sequential wound-healing responses in the cornea in single animals to lamellar-keratectomy injury (cellular migration, inflammation, scarring). We believe that this instrument represents an important, new paradigm for research in cell biology and pathology and that it will fundamentally alter all experimental and clinical approaches in future years.

  18. A Macintosh-Based Scientific Images Video Analysis System

    NASA Technical Reports Server (NTRS)

    Groleau, Nicolas; Friedland, Peter (Technical Monitor)

    1994-01-01

    A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.

  19. Indirect gonioscopy system for imaging iridocorneal angle of eye

    NASA Astrophysics Data System (ADS)

    Perinchery, Sandeep M.; Fu, Chan Yiu; Baskaran, Mani; Aung, Tin; Murukeshan, V. M.

    2017-08-01

    Current clinical optical imaging systems do not provide sufficient structural information of trabecular meshwork (TM) in the iridocorneal angle (ICA) of the eye due to their low resolution. Increase in the intraocular pressure (IOP) can occur due to the abnormalities in TM, which could subsequently lead to glaucoma. Here, we present an indirect gonioscopy based imaging probe with significantly improved visualization of structures in the ICA including TM region, compared to the currently available tools. Imaging quality of the developed system was tested in porcine samples. Improved direct high quality visualization of the TM region through this system can be used for Laser trabeculoplasty, which is a primary treatment of glaucoma. This system is expected to be used complementary to angle photography and gonioscopy.

  20. Feasibility and clinical utility of ultra-widefield indocyanine green angiography.

    PubMed

    Klufas, Michael A; Yannuzzi, Nicolas A; Pang, Claudine E; Srinivas, Sowmya; Sadda, Srinivas R; Freund, K Bailey; Kiss, Szilárd

    2015-03-01

    To evaluate the feasibility and clinical utility of a novel noncontact scanning laser ophthalmoscope-based ultra-widefield indocyanine green angiographic system. Ultra-widefield indocyanine green angiographic images were captured using a modified Optos P200Tx that produced high-resolution images of the choroidal vasculature with up to a 200° field. Ultra-widefield indocyanine green angiography was performed on patients with a variety of retinal conditions to assess utility of this imaging technique for diagnostic purposes and disease treatment monitoring. Ultra-widefield indocyanine green angiography was performed on 138 eyes of 69 patients. Mean age was 58 ± 16.9 years (range, 24-85 years). The most common ocular pathologies imaged included central serous chorioretinopathy (24 eyes), uveitis (various subtypes, 16 eyes), age-related macular degeneration (12 eyes), and polypoidal choroidal vasculopathy (4 eyes). In all eyes evaluated with ultra-widefield indocyanine green angiography, high-resolution images of choroidal and retinal circulation were obtained with sufficient detail out to 200° of the fundus. In this series of 138 eyes, scanning laser ophthalmoscope-based ultra-widefield indocyanine green angiography was clinically practical and provided detailed images of both the central and peripheral choroidal circulation. Future studies are needed to refine the clinical value of this imaging modality and the significance of peripheral choroidal vascular changes in the diagnosis, monitoring, and treatment of ocular diseases.

  1. Application of a new high-speed magnetic deformable mirror for in-vivo retinal imaging

    NASA Astrophysics Data System (ADS)

    Balderas-Mata, Sandra E.; Jones, Steven M.; Zawadzki, Robert J.; Werner, John S.

    2011-08-01

    In current ophthalmologic practice, several commercial instruments are available to image patient retinas in vivo. Many modern fundus cameras and confocal scanning laser ophthalmoscopes allow acquisition of two-dimensional en face images of the retina with both back-reflected and fluorescent light. Additionally, optical coherence tomography systems allow non-invasive probing of three-dimensional retinal morphology. For all of these instruments the available lateral resolution is limited by the optical quality of the human eye used as the imaging objective. To improve lateral resolution and achieve diffraction-limited imaging, adaptive optics (AO) can be implemented with any of these imaging systems to correct both static and dynamic aberrations inherent in human eyes. Most of the wavefront correctors used previously in AO systems have limited dynamic range and an insufficient number of actuators to achieve diffraction-limited correction of most human eyes. Thus, additional corrections were necessary, either by trial lenses or additional deformable mirrors (DMs). The UC Davis AO flood-illuminated fundus camera system described in this paper has been previously used to acquire in vivo images of the photoreceptor mosaic and for psychophysical studies on normal and diseased retinas. These results were acquired using a DM manufactured by Litton ITEK (DM109), which has 109 actuators arranged in a hexagonal array below a continuous front-surface mirror and an approximate surface actuator stroke of +/-2 μm. Here we present results with a new high-speed magnetic DM manufactured by ALPAO (DM97, voice coil technology), which has 97 actuators and a similar inter-actuator stroke (>3 μm, mirror surface) but much higher low-order aberration correction (defocus stroke of at least +/-30 μm) than the previous one. In this paper we report results of testing the performance of the ALPAO DM for the correction of human eye aberrations. Additionally, changes made to our AO flood-illuminated system are presented, along with images of a model eye retina and an in vivo human retina acquired with this system.

  2. Method of preliminary localization of the iris in biometric access control systems

    NASA Astrophysics Data System (ADS)

    Minacova, N.; Petrov, I.

    2015-10-01

    This paper presents a method for preliminary localization of the iris based on stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.

  3. Integrating the Advanced Human Eye Model (AHEM) and optical instrument models to model complete visual optical systems inclusive of the typical or atypical eye

    NASA Astrophysics Data System (ADS)

    Donnelly, William J., III

    2012-06-01

    PURPOSE: To present a commercially available optical modeling software tool to assist the development of optical instrumentation and systems that utilize and/or integrate with the human eye. METHODS: A commercially available flexible eye modeling system is presented, the Advanced Human Eye Model (AHEM). AHEM is a module that the engineer can use to perform rapid development and test scenarios on systems that integrate with the eye. Methods include merging modeled systems initially developed outside of AHEM and performing a series of wizard-type operations that relieve the user from requiring an optometric or ophthalmic background to produce a complete eye inclusive system. Scenarios consist of retinal imaging of targets and sources through integrated systems. Uses include, but are not limited to, optimization, telescopes, microscopes, spectacles, contact and intraocular lenses, ocular aberrations, cataract simulation and scattering, and twin eye model (binocular) systems. RESULTS: Metrics, graphical data, and exportable CAD geometry are generated from the various modeling scenarios.

  4. Effect of contact lens on optical coherence tomography imaging of rodent retina.

    PubMed

    Liu, Xiaojing; Wang, Chia-Hao; Dai, Cuixia; Camesa, Adam; Zhang, Hao F; Jiao, Shuliang

    2013-12-01

    To evaluate the effect of a powerless contact lens on improving the quality of optical coherence tomography imaging of the rodent retina. A spectral-domain optical coherence tomography (SD-OCT) system was built for in vivo imaging of the rodent retina. The calibrated depth resolution of the system was 3 µm in tissue. A commercial powerless contact lens for the rat eye was tested in the experiments. For each rat eye, the retina was imaged in vivo sequentially, first without and then with the contact lens. The lateral resolution and signal-to-noise ratio of the OCT images with and without the contact lens were compared to evaluate the improvement in image quality. The fundus images generated from the measured 3D OCT datasets with the contact lens showed sharper retinal blood vessels than those without the contact lens. The contrast of the retinal blood vessels was also significantly enhanced in the OCT fundus images with the contact lens. Improvements in SNR as high as 10 dB were observed for OCT images with the contact lens compared to images of the same retinal area without it. We have demonstrated that the use of a powerless contact lens on the rat eye can significantly improve OCT image quality of the rodent retina, which is a benefit in addition to preventing cataract formation. We believe the improvement in image quality is the result of partial compensation of the optical aberrations of the rodent eye by the contact lens.

  5. The research and development of the adaptive optics in ophthalmology

    NASA Astrophysics Data System (ADS)

    Wu, Chuhan; Zhang, Xiaofang; Chen, Weilin

    2015-08-01

    Recently, the combination of adaptive optics and ophthalmology has made great progress and become highly effective. Retinal disease is diagnosed by retinal imaging techniques based on scanning optical systems, so scanning of the eye requires an optical system with strong resistance to eye motion and good correction of optical aberrations. Adaptive optics offers a high level of adaptability and supports real-time imaging, which meets the requirement of medical retinal examination for accurate images. The scanning laser ophthalmoscope and optical coherence tomography are now widely used and are the core techniques in medical retinal examination. Building on these techniques, several adaptive optics systems for medical eye scanning have been designed in China by researchers from the Institute of Optics and Electronics of CAS (the Chinese Academy of Sciences); some foreign research institutions have adopted other methods to eliminate the interference of eye motion and optical aberration; and there are many relevant patents at home and abroad. In this paper, the principles and relevant technical details of the scanning laser ophthalmoscope and optical coherence tomography are described, and recent developments and progress of adaptive optics in the field of retinal imaging are analyzed and summarized.

  6. Data processing from lobster eye type optics

    NASA Astrophysics Data System (ADS)

    Nentvich, Ondrej; Stehlikova, Veronika; Urban, Martin; Hudec, Rene; Sieger, Ladislav

    2017-05-01

    Wolter I optics are commonly used for imaging in the X-ray spectrum. This system uses two reflections; at higher energies it is less efficient but offers very good optical resolution. An alternative is Lobster Eye optics, which also use two reflections to focus rays in Schmidt's or Angel's arrangement. It is also possible to use Lobster Eye optics as two independent one-dimensional optics. This paper describes the advantages of one-dimensional and two-dimensional Lobster Eye optics in Schmidt's arrangement and their data processing, namely determining the number of sources in a wide field of view. Two-dimensional (2D) optics are suitable for detecting the number of point X-ray sources and their magnitudes, but long exposures are necessary because a 2D system has much lower transmissivity, due to the double reflection, compared to one-dimensional (1D) optics. Not only for this reason, two 1D optics are better suited to sources of lower magnitude. In this case, additional image processing is necessary to obtain a 2D image. This article describes an approach to image reconstruction and the advantages of two 1D optics without significant losses of transmissivity.

  7. Toward a digital camera to rival the human eye

    NASA Astrophysics Data System (ADS)

    Skorka, Orit; Joseph, Dileepan

    2011-07-01

    All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
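
    To make the figure-of-merit definition above concrete, here is a hedged Python sketch that expresses each parameter's shortfall in orders of magnitude and reports the weakest one; the parameter names and values are hypothetical placeholders, not numbers from the study.

        import math

        def performance_gap(camera, eye):
            """Return the per-parameter gap, in orders of magnitude, between a
            camera and the human eye, plus the figure of merit defined here as
            the gap of the weakest parameter. Both arguments are dicts of
            'higher-is-better' quantities."""
            gaps = {k: math.log10(eye[k] / camera[k]) for k in eye}
            weakest = max(gaps, key=gaps.get)   # largest shortfall relative to the eye
            return gaps, weakest, gaps[weakest]

        # Hypothetical example values (not taken from the paper):
        eye = {"dynamic_range": 1e6, "dark_limit_inv": 1e4}
        camera = {"dynamic_range": 1e3, "dark_limit_inv": 1e2}
        print(performance_gap(camera, eye))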

  8. How aquatic water-beetle larvae with small chambered eyes overcome challenges of hunting under water.

    PubMed

    Stowasser, Annette; Buschbeck, Elke K

    2014-11-01

    A particularly unusual visual system exists in the visually guided aquatic predator, the Sunburst Diving Beetle, Thermonectus marmoratus (Coleoptera: Dytiscidae). The question arises: how does this peculiar visual system function? A series of experiments suggests that their principal eyes (E1 and E2) are highly specialized for hunting. These eyes are tubular and have relatively long focal lengths leading to high image magnification. Their retinae are linear, and are divided into a distinct green-sensitive distal portion and a UV- and polarization-sensitive proximal portion. Each distal retina, moreover, has many tiers of photoreceptors with rhabdomeres whose long axes are, unusually, oriented perpendicular to the light path. Detailed optical investigations show that the lenses of these eyes are bifocal and project focused images onto specific retinal tiers. Behavioral experiments suggest that these larvae approach prey within their eyes' near-fields, and that they can correctly gauge prey distances even when conventional distance-vision mechanisms are unavailable. In the near-field of these eyes, object distance determines which of the many retinal layers receives the best-focused images. This retinal organization could facilitate an unusual distance-vision mechanism. We here summarize past findings and discuss how these eyes allow Thermonectus larvae to be such successful predators.

  9. Wavefront sensorless adaptive optics ophthalmoscopy in the human eye

    PubMed Central

    Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason

    2011-01-01

    Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
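
    The stochastic parallel gradient descent control described above can be outlined in a few lines of Python; the sketch below assumes a placeholder apply_and_measure() routine standing in for the deformable-mirror I/O and the image-quality metric (e.g. mean frame intensity), and the gain and perturbation values are illustrative rather than those used in the paper.

        import numpy as np

        def spgd(apply_and_measure, n_actuators, iterations=500,
                 perturbation=0.05, gain=0.3):
            """Stochastic parallel gradient descent for wavefront-sensorless AO.
            `apply_and_measure(voltages)` is assumed to push the voltage vector
            to the deformable mirror and return a scalar image-quality metric;
            it is a placeholder for real hardware I/O."""
            v = np.zeros(n_actuators)
            for _ in range(iterations):
                delta = perturbation * np.random.choice([-1.0, 1.0], n_actuators)
                j_plus = apply_and_measure(v + delta)
                j_minus = apply_and_measure(v - delta)
                # Move the actuators along the estimated gradient of the metric.
                v += gain * (j_plus - j_minus) * delta
            return v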

  10. Multifocal microlens for bionic compound eye

    NASA Astrophysics Data System (ADS)

    Cao, Axiu; Wang, Jiazhou; Pang, Hui; Zhang, Man; Shi, Lifang; Deng, Qiling; Hu, Song

    2017-10-01

    A bionic compound eye optical element composed of multi-dimensional sub-eye microlenses plays an important role in miniaturizing the volume and weight of an imaging system. In this manuscript, we present a novel bionic compound eye structure with multiple focal lengths. By dividing each microlens into two concentric radial zones, an inner zone and an outer zone with independent radii, the sub-eye becomes a multi-level micro-scale structure with multiple focal lengths. The imaging capability of the structure has been simulated, and the results show that the structure can acquire optical information at different depths. The parameters that influence imaging quality, including the aperture and radius of the two zones, have been analyzed and discussed. As the ratio of the inner to the outer aperture increases, the imaging quality of the inner zone improves while that of the outer zone worsens. In addition, by controlling the radii of the inner and outer zones independently, sub-eyes with different focal lengths can be designed. As the difference between the inner and outer radii becomes larger, the imaging resolution of the sub-eye decreases. Therefore, the optimization of the multifocal structure should be carried out according to the actual imaging quality demands. This study can also provide a reference for further applications of multifocal microlenses in bionic compound eyes.

  11. Towards femtosecond laser surgery guidance in the posterior eye: utilization of optical coherence tomography and adaptive optics for focus positioning and shaping

    NASA Astrophysics Data System (ADS)

    Krüger, Alexander; Hansen, Anja; Matthias, Ben; Ripken, Tammo

    2014-02-01

    Although fs-laser surgery is clinically established in the field of corneal flap cutting for laser in situ keratomileusis, surgery with fs-lasers in the posterior part of the eye is impaired by focus degradation due to aberrations. Precise targeting and maintaining a safe distance from the retina also rely on intraoperative depth-resolved imaging. We demonstrate a concept for image-guided fs-laser surgery in the vitreous body combining adaptive optics (AO) for focus reshaping and optical coherence tomography (OCT) for focus position guidance. The laboratory system consists of an 800 nm fs-laser, which is focused into a simple eye model via a closed-loop adaptive optics system with a Hartmann-Shack sensor and a deformable mirror to correct for wavefront aberrations. A spectral-domain optical coherence tomography system is used to target phantom structures in the eye model. Both systems are set up to share the same scanner and focusing optics. The use of adaptive optics results in a lowered threshold energy for laser-induced breakdown and an increased cutting precision. 3D OCT imaging of porcine retinal tissue prior to and immediately after fs-laser cutting is also demonstrated. In the near future OCT and AO will be two essential assistive components in possible clinical systems for fs-laser based eye surgery beyond the cornea.

  12. Imaging in Diabetic Retinopathy: A Review of Current and Future Techniques.

    PubMed

    Gajree, Sonul; Borooah, Shyamanga; Dhillon, Baljean

    2017-01-01

    Diabetic eye disease is the most common cause of blindness worldwide in the population under 65 years of age. The prevalence of sight-threatening diabetic eye disease continues to rise rapidly, resulting in an increasing burden on health systems worldwide. This highlights the need to develop new tools to help in the screening, diagnosis and management of diabetic eye disease. This review aims to provide a brief overview of the current standard of care for diabetic eye disease, before providing an up-to-date overview of newer imaging modalities with potential application in the management of diabetic eye care. A literature search for the terms "enhanced depth imaging OCT", "swept source OCT", "retinal oximetry", "OCT angiography", and "fundus autofluorescence", each combined with the term "diabetes", was performed using the PubMed and Google Scholar databases. Only articles published within the last two years were selected for use in this article. There has been a rapid increase in the available imaging techniques used to manage diabetic eye disease. To date there has been variable use of these next-generation imaging techniques. A greater understanding of how phenotypic findings link to the risk of sight loss is required before there is more widespread adoption by mainstream diabetic eye services.

  13. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

    Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, we need to ensure a subtended angle of vision of 5 minutes of arc, which implies an object 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. This process of character identification is carried out by means of the eye-brain system, giving an evaluation of subjective visual performance. In this work we consider the eye as an isolated image-forming system, and show that it is possible to separate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's optical system we can predict, in advance, the image of the Snellen chart. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, such as the so-called "cerebral myopia".
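
    The step of predicting the chart image from the eye's impulse response amounts to a convolution; the Python sketch below shows this under the assumption that a measured 2D point spread function is available (the function and variable names are illustrative).

        import numpy as np
        from scipy.signal import fftconvolve

        def simulate_retinal_image(chart, psf):
            """Predict the retinal image of an acuity chart by convolving the
            chart (2D array) with the eye's measured point spread function
            (impulse response). The PSF is normalized so total energy is preserved."""
            psf = psf / psf.sum()
            return fftconvolve(chart, psf, mode="same")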

  14. Use of EyeCam for imaging the anterior chamber angle.

    PubMed

    Perera, Shamira A; Baskaran, Mani; Friedman, David S; Tun, Tin A; Htoon, Hla M; Kumar, Rajesh S; Aung, Tin

    2010-06-01

    To compare EyeCam (Clarity Medical Systems, Pleasanton, CA) imaging with gonioscopy for detecting angle closure. In this prospective, hospital-based study, subjects underwent gonioscopy by a single observer and EyeCam imaging by a different operator. EyeCam images were graded by two masked observers. The anterior chamber angle in a quadrant was classified as closed if the trabecular meshwork could not be seen. The eye was classified as having angle closure if two or more quadrants were closed. One hundred fifty-two subjects were studied. The mean age was 57.4 years (SD 12.9) and there were 82 (54%) men. Of the 152 eyes, 21 (13.8%) had angle closure. The EyeCam provided clear images of the angles in 98.8% of subjects. The agreement between the EyeCam and gonioscopy for detecting angle closure in the superior, inferior, nasal, and temporal quadrants based on agreement coefficient (AC1) statistics was 0.73, 0.75, 0.76, and 0.72, respectively. EyeCam detected more closed angles than did gonioscopy in all quadrants (P < 0.05). With gonioscopy, 21/152 (13.8%) eyes were diagnosed as angle closure compared to 41 (27.0%) of 152 with EyeCam (P < 0.001, McNemar Test), giving an overall sensitivity of 76.2% (95% confidence interval [CI], 54.9%-90.7%), specificity of 80.9% (95%CI, 73.5%-87.3%), and an area under the receiver operating characteristic curve (AUC) of 0.79. The EyeCam showed good agreement with gonioscopy for detecting angle closure. However, it detected more closed angles than did gonioscopy in all quadrants.

  15. High-resolution imaging of retinal nerve fiber bundles in glaucoma using adaptive optics scanning laser ophthalmoscopy.

    PubMed

    Takayama, Kohei; Ooto, Sotaro; Hangai, Masanori; Ueda-Arakawa, Naoko; Yoshida, Sachiko; Akagi, Tadamichi; Ikeda, Hanako Ohashi; Nonaka, Atsushi; Hanebuchi, Masaaki; Inoue, Takashi; Yoshimura, Nagahisa

    2013-05-01

    To detect pathologic changes in retinal nerve fiber bundles in glaucomatous eyes seen on images obtained by adaptive optics (AO) scanning laser ophthalmoscopy (AO SLO). Prospective cross-sectional study. Twenty-eight eyes of 28 patients with open-angle glaucoma and 21 normal eyes of 21 volunteer subjects underwent a full ophthalmologic examination, visual field testing using a Humphrey Field Analyzer, fundus photography, red-free SLO imaging, spectral-domain optical coherence tomography, and imaging with an original prototype AO SLO system. The AO SLO images showed many hyperreflective bundles suggesting nerve fiber bundles. In glaucomatous eyes, the nerve fiber bundles were narrower than in normal eyes, and the nerve fiber layer thickness was correlated with the nerve fiber bundle widths on AO SLO (P < .001). In the nerve fiber layer defect area on fundus photography, the nerve fiber bundles on AO SLO were narrower compared with those in normal eyes (P < .001). At 60 degrees on the inferior temporal side of the optic disc, the nerve fiber bundle width was significantly lower, even in areas without nerve fiber layer defect, in glaucomatous eyes compared with normal eyes (P = .026). The mean deviations of each cluster in visual field testing were correlated with the corresponding nerve fiber bundle widths (P = .017). AO SLO images showed reduced nerve fiber bundle widths both in clinically normal and abnormal areas of glaucomatous eyes, and these abnormalities were associated with visual field defects, suggesting that AO SLO may be useful for detecting early nerve fiber bundle abnormalities associated with loss of visual function.

  16. VISUALIZATION FROM INTRAOPERATIVE SWEPT-SOURCE MICROSCOPE-INTEGRATED OPTICAL COHERENCE TOMOGRAPHY IN VITRECTOMY FOR COMPLICATIONS OF PROLIFERATIVE DIABETIC RETINOPATHY.

    PubMed

    Gabr, Hesham; Chen, Xi; Zevallos-Carrasco, Oscar M; Viehland, Christian; Dandrige, Alexandria; Sarin, Neeru; Mahmoud, Tamer H; Vajzovic, Lejla; Izatt, Joseph A; Toth, Cynthia A

    2018-01-10

    To evaluate the use of live volumetric (4D) intraoperative swept-source microscope-integrated optical coherence tomography in vitrectomy for proliferative diabetic retinopathy complications. In this prospective study, we analyzed a subgroup of patients with proliferative diabetic retinopathy complications who required vitrectomy and who were imaged by the research swept-source microscope-integrated optical coherence tomography system. In near real time, images were displayed on a stereo heads-up display, facilitating intraoperative surgeon feedback. Postoperative review included scoring image quality, identifying different diabetic retinopathy-associated pathologies, and reviewing the intraoperatively documented surgeon feedback. Twenty eyes were included. Indications for vitrectomy were tractional retinal detachment (16 eyes), combined tractional-rhegmatogenous retinal detachment (2 eyes), and vitreous hemorrhage (2 eyes). Useful, good-quality 2D (B-scan) and 4D images were obtained in 16/20 eyes (80%). In these eyes, multiple diabetic retinopathy complications could be imaged. Swept-source microscope-integrated optical coherence tomography provided surgical guidance, e.g., in identifying dissection planes under fibrovascular membranes, and in determining residual membranes and traction that would benefit from additional peeling. In 4/20 eyes (20%), acceptable images were captured, but they were not useful due to highly elevated tractional retinal detachments, which were challenging to image. Swept-source microscope-integrated optical coherence tomography can provide important guidance during surgery for proliferative diabetic retinopathy complications through intraoperative identification of different complications and facilitation of intraoperative decision making.

  17. A method to reduce patient's eye lens dose in neuro-interventional radiology procedures

    NASA Astrophysics Data System (ADS)

    Safari, M. J.; Wong, J. H. D.; Kadir, K. A. A.; Sani, F. M.; Ng, K. H.

    2016-08-01

    Complex and prolonged neuro-interventional radiology procedures using the biplane angiography system increase the patient's risk of radiation-induced cataract. Physical collimation is the most effective way of reducing the radiation dose to the patient's eye lens, but in instances where collimation is not possible, an attenuator may be useful in protecting the eyes. In this study, an eye lens protector was designed and fabricated to reduce the radiation dose to the patient's eye lens during neuro-interventional procedures. The eye protector was characterised before its effectiveness was tested in a simulated aneurysm procedure on an anthropomorphic phantom. Effects on the automatic dose rate control (ADRC) and image quality were also evaluated. The eye protector reduced the radiation dose at the eye lens by up to 62.1%. It is faintly visible in the fluoroscopy images and increased the tube current by a maximum of 3.7%. It is completely invisible in acquisition mode and does not interfere with the clinical procedure. Placed within the radiation field of view, the eye protector was able to reduce the radiation dose delivered to the eye lens by the direct radiation beam of the lateral x-ray tube, with minimal effect on the ADRC system.

  18. Lobster eye X-ray optics: Data processing from two 1D modules

    NASA Astrophysics Data System (ADS)

    Nentvich, O.; Urban, M.; Stehlikova, V.; Sieger, L.; Hudec, R.

    2017-07-01

    X-ray imaging is usually done with Wolter I telescopes. They are suitable for imaging a small part of the sky, not for all-sky monitoring. Such monitoring could be done with Lobster Eye optics, which can theoretically have a field of view of up to 360 deg. An all-sky monitoring system enables quick identification of a source and its direction. This paper describes the possibility of using two independent one-dimensional Lobster Eye modules for this purpose instead of Wolter I optics, and the post-processing of their data into a 2D image. This arrangement allows scanning with less energy loss compared to Wolter I or two-dimensional Lobster Eye optics. It is especially suitable for very weak sources.
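
    One simple way to post-process two orthogonal 1D profiles into a 2D candidate map, offered here only as an illustrative assumption rather than the authors' algorithm, is to cross the normalized profiles and threshold the intersections:

        import numpy as np

        def combine_1d_profiles(profile_x, profile_y, threshold=0.5):
            """Combine two orthogonal 1D Lobster Eye intensity profiles into a
            2D candidate-source map. Peaks in each profile mark the x and y
            coordinates of possible sources; their outer product marks candidate
            intersections."""
            px = profile_x / (profile_x.max() + 1e-12)
            py = profile_y / (profile_y.max() + 1e-12)
            candidate_map = np.outer(py, px)          # rows = y, columns = x
            detections = np.argwhere(candidate_map > threshold)
            return candidate_map, detections

    Note that when several sources are present, crossing the two profiles also produces spurious intersections, which is one reason additional processing is needed in practice.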

  19. Reducing absorbed dose to eye lenses in head CT examinations: the effect of bismuth shielding.

    PubMed

    Ciarmatori, Alberto; Nocetti, L; Mistretta, G; Zambelli, G; Costi, T

    2016-06-01

    The eye lens is considered to be among the most radiosensitive human tissues. Brain CT scans may unnecessarily expose it to radiation even if the area of clinical interest is far from the eyes. The aim of this study is to implement a bismuth eye lens shielding system for head-CT acquisitions in these cases. The study is focused on the assessment of the dosimetric characteristics of the shielding system as well as on its effect on image quality. The shielding system was tested in two set-ups differing in distance (the "contact" and "4 cm" set-ups, respectively). Scans were performed on a CTDI phantom and an anthropomorphic phantom. A reference set-up without the shielding system was acquired to establish a baseline. Image quality was assessed by evaluating signal (not converted to HU), noise and contrast-to-noise ratio (CNR). The overall dose reduction was evaluated by measuring the CTDIvol, while the eye lens dose reduction was assessed by placing thermoluminescent dosimeters (TLDs) on an anthropomorphic phantom. The image quality analysis shows an artefact that mildly increases the CT number up to 3 cm below the shielding system. Below the artefact, the differences in signal and CNR between the three set-ups are negligible. Regarding the CTDI, the analysis demonstrates a decrease of almost 12% (in the "contact" set-up) and 9% (in the "4 cm" set-up). TLD measurements show an eye lens dose reduction of 28.5 ± 5% and 21.1 ± 5%, respectively, at the "contact" and "4 cm" distances. No clinically relevant artefact was found and image quality was not affected by the shielding system. Significant dose reductions were measured. These features make the shielding set-up useful for clinical implementation in both studied positions.

  20. A Model of the Human Eye

    ERIC Educational Resources Information Center

    Colicchia, G.; Wiesner, H.; Waltner, C.; Zollman, D.

    2008-01-01

    We describe a model of the human eye that incorporates a variable converging lens. The model can be easily constructed by students with low-cost materials. It shows in a comprehensible way the functionality of the eye's optical system. Images of near and far objects can be focused. Also, the defects of near and farsighted eyes can be demonstrated.

  1. A novel role for visual perspective cues in the neural computation of depth

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.

    2014-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667

  2. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  3. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  4. Adding polarimetric imaging to depth map using improved light field camera 2.0 structure

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu

    2017-06-01

    Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, whose imaging systems are typically required to offer high resolution, broad bandwidth, and a single-lens structure. This paper describes such an imaging system based on a light field camera 2.0 structure, which can calculate the polarization state and the depth distance from a reference plane for every object point within a single shot. This structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like micro lens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization imaging "glasses" in front of each "eye". Depth can therefore be calculated by matching the relative offset of corresponding patches in neighboring "eyes", while the polarization state is obtained from their relative intensity differences, and the resolutions of the two measurements are approximately equal. An application to navigation under a clear sky shows that this method has high accuracy and strong robustness.
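
    The per-pixel polarization computation from intensities behind differently oriented analyzers can be sketched as below; the assumption of four analyzer orientations (0°, 45°, 90°, 135°) is illustrative, since the exact quadrant layout of the Polaroid is not specified here.

        import numpy as np

        def linear_stokes(i0, i45, i90, i135):
            """Compute the linear Stokes parameters, degree of linear polarization
            and angle of polarization from intensities measured behind linear
            analyzers at 0, 45, 90 and 135 degrees (arrays or scalars)."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)
            s1 = i0 - i90
            s2 = i45 - i135
            dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-12)
            aop = 0.5 * np.arctan2(s2, s1)
            return s0, s1, s2, dolp, aop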

  5. Robustness of an artificially tailored fisheye imaging system with a curvilinear image surface

    NASA Astrophysics Data System (ADS)

    Lee, Gil Ju; Nam, Won Il; Song, Young Min

    2017-11-01

    Curved image sensors inspired by animal and insect eyes have provided a new development direction in next-generation digital cameras. It is known that natural fish eyes afford an extremely wide field of view (FOV) imaging due to the geometrical properties of the spherical lens and hemispherical retina. However, its inherent drawbacks, such as the low off-axis illumination and the fabrication difficulty of a 'dome-like' hemispherical imager, limit the development of bio-inspired wide FOV cameras. Here, a new type of fisheye imaging system is introduced that has simple lens configurations with a curvilinear image surface, while maintaining high off-axis illumination and a wide FOV. Moreover, through comparisons with commercial conventional fisheye designs, it is determined that the volume and required number of optical elements of the proposed design is practical while capturing the fundamental optical performances. Detailed design guidelines for tailoring the proposed optic system are also discussed.

  6. The eye and visual nervous system: anatomy, physiology and toxicology.

    PubMed Central

    McCaa, C S

    1982-01-01

    The eyes are at risk of environmental injury from direct exposure to airborne pollutants, from splash injury by chemicals, and from exposure via the circulatory system to numerous drugs and bloodborne toxins. In addition, drugs or toxins can destroy vision by damaging the visual nervous system. This review describes the anatomy and physiology of the eye and visual nervous system and includes a discussion of some of the more common toxins affecting vision in man. PMID:7084144

  7. Reliability and repeatability of swept-source Fourier-domain optical coherence tomography and Scheimpflug imaging in keratoconus.

    PubMed

    Szalai, Eszter; Berta, András; Hassan, Ziad; Módis, László

    2012-03-01

    To evaluate the repeatability and reliability of a recently introduced swept-source Fourier-domain anterior segment optical coherence tomography (AS-OCT) system and a high-resolution Scheimpflug camera, and to assess the agreement between the 2 instruments when measuring healthy eyes and eyes with keratoconus. Department of Ophthalmology, Medical and Health Science Center, University of Debrecen, Debrecen, Hungary. Evaluation of diagnostic test or technology. Three consecutive series of anterior segment images were taken with AS-OCT (Casia SS-1000) followed by rotating Scheimpflug imaging (Pentacam high resolution). Axial keratometry in the steep and flat meridians and astigmatism values were recorded. Pachymetry at the apex, center, and thinnest position and anterior chamber depth (ACD) measurements were also taken. This study enrolled 57 healthy volunteers (57 eyes) and 56 patients (84 eyes) with keratoconus. A significant difference was found in all measured anterior segment parameters between normal eyes and keratoconic eyes (P<.05). In keratoconic eyes, the difference between repeated measurements was smaller with AS-OCT than with Scheimpflug imaging for every keratometry and astigmatism value, for apical thickness, and for ACD. For keratometry and for thinnest and central pachymetry, measurement repeatability was better in healthy eyes than in keratoconic eyes with both instruments. In general, the mean difference between AS-OCT and Scheimpflug imaging was higher in cases of keratoconus. Significant differences in keratometry, pachymetry, and ACD results were found between AS-OCT and Scheimpflug imaging. However, the repeatability of the measurements was comparable.

  8. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of the visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to 'complex' visual stimuli. We demonstrate that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results show that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Because the brain is the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and fixational eye movements. The effect observed in this research can be further investigated and applied to the treatment of different vision disorders.
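
    A common estimator of the kind of temporal fractality referred to above is the Higuchi fractal dimension; the Python sketch below is one standard implementation that could be applied to an eye-position trace or an EEG channel, with the choice of k_max being an assumption.

        import numpy as np

        def higuchi_fd(x, k_max=10):
            """Estimate the Higuchi fractal dimension of a 1D time series
            (e.g. an eye-position trace or an EEG channel)."""
            x = np.asarray(x, dtype=float)
            n = x.size
            lk = []
            for k in range(1, k_max + 1):
                lengths = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    if idx.size < 2:
                        continue
                    dist = np.abs(np.diff(x[idx])).sum()
                    norm = (n - 1) / ((idx.size - 1) * k)
                    lengths.append(dist * norm / k)
                lk.append(np.mean(lengths))
            ks = np.arange(1, k_max + 1)
            # The fractal dimension is the slope of log(L(k)) against log(1/k).
            slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
            return slope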

  9. Adaptive optics scanning laser ophthalmoscope using liquid crystal on silicon spatial light modulator: Performance study with involuntary eye movement

    NASA Astrophysics Data System (ADS)

    Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi

    2017-09-01

    The performance of an adaptive optics scanning laser ophthalmoscope (AO-SLO) using a liquid crystal on silicon spatial light modulator and Shack-Hartmann wavefront sensor was investigated. The system achieved high-resolution and high-contrast images of human retinas by dynamic compensation for the aberrations in the eyes. Retinal structures such as photoreceptor cells, blood vessels, and nerve fiber bundles, as well as blood flow, could be observed in vivo. We also investigated involuntary eye movements and ascertained microsaccades and drifts using both the retinal images and the aberrations recorded simultaneously. Furthermore, we measured the interframe displacement of retinal images and found that during eye drift, the displacement has a linear relationship with the residual low-order aberration. The estimated duration and cumulative displacement of the drift were within the ranges estimated by a video tracking technique. The AO-SLO would not only be used for the early detection of eye diseases, but would also offer a new approach for involuntary eye movement research.

  10. Purkinje image eyetracking: A market survey

    NASA Technical Reports Server (NTRS)

    Christy, L. F.

    1979-01-01

    The Purkinje image eyetracking system was analyzed to determine the marketability of the system. The eyetracking system is a synthesis of two separate instruments, the optometer that measures the refractive power of the eye and the dual Purkinje image eyetracker that measures the direction of the visual axis.

  11. A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Iida, Muneo; Kobayashi, Yukio

    1990-04-01

    This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image and dot-marks pasted on a human face, in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image that includes the specularly reflected component (a polarizing filter is placed in front of CCD-1) and another image that excludes it (no polarizing filter in front of CCD-2). Thus, three images with different reflection characteristics are obtained by the three CCDs. The experiment shows that two kinds of subtraction operations between the three CCD output images accentuate the three kinds of feature points: the pupil and corneal reflection images and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows reducing the intensity of the infra-red illumination. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding and centroid calculation of the feature points is possible.

  12. The image enhancement and region of interest extraction of lobster-eye X-ray dangerous material inspection system

    NASA Astrophysics Data System (ADS)

    Zhan, Qi; Wang, Xin; Mu, Baozhong; Xu, Jie; Xie, Qing; Li, Yaran; Chen, Yifan; He, Yanan

    2016-10-01

    Dangerous materials inspection is an important technique for confirming dangerous-materials crimes, and it has a significant impact on preventing such crimes and the spread of dangerous materials. The Lobster-Eye Optical Imaging System is a dangerous-materials detection device that mainly takes advantage of backscattered X-rays. The strength of the system is that it requires access to only one side of an object and can detect dangerous materials without disturbing the surroundings of the target material. The device uses Compton-scattered X-rays to create computerized outlines of suspected objects during the security inspection process. Because of the grid structure of the bionic objective lens, which imitates the eye of a lobster, the grid contributes the main image noise during imaging. In addition, when the system is used to inspect structured or dense materials, the image suffers from superposition artifacts and is limited by attenuation and noise. With the goal of achieving high-quality images suitable for dangerous-materials detection and further analysis, we developed effective image processing methods for the system. The first step is denoising and edge-contrast enhancement, in which a deconvolution algorithm is applied to remove the grid pattern and other noise, yielding a high signal-to-noise-ratio image. The second step is image reconstruction under low-dose X-ray exposure, for which we developed an interpolation method. The last step is region of interest (ROI) extraction, which helps identify dangerous materials mixed with complex backgrounds. The methods demonstrated in the paper have the potential to improve the sensitivity and quality of X-ray backscatter imaging.
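
    The abstract does not name the deconvolution algorithm; a common frequency-domain choice is Wiener deconvolution, sketched below under the assumption that the grid-induced point spread function has been measured or modeled (the function name and regularization value are illustrative).

    ```python
    import numpy as np

    def wiener_deconvolve(image, psf, nsr=0.01):
        """Frequency-domain Wiener deconvolution: attenuates the blur/grid structure
        described by `psf`; `nsr` is a noise-to-signal regularizer (larger = smoother)."""
        padded = np.zeros_like(image, dtype=float)
        padded[:psf.shape[0], :psf.shape[1]] = psf
        # place the PSF center at the array origin to avoid shifting the restored image
        padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(padded)
        G = np.fft.fft2(image)
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(F_hat))
    ```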

  13. A new mapping function in table-mounted eye tracker

    NASA Astrophysics Data System (ADS)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a relatively new apparatus for human-computer interaction that has attracted much attention in recent years. Eye tracking technology obtains the subject's current direction of visual attention (gaze) using mechanical, electronic, optical, image processing and other means of detection. The mapping function is one of the key technologies in the image processing and determines the accuracy of the whole eye-tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera and the screen at which the eye gazes. Firstly, according to the geometrical relationship among the eyes, the camera and the screen, a framework for the mapping function between the pupil center and the screen coordinates is constructed. Secondly, in order to simplify the vector inversion of the mapping function, the coordinates of the eyes, the camera and the screen are modeled in a coaxial model system. To verify the mapping function, a corresponding experiment was conducted and the result was compared with the traditional quadratic polynomial function. The results show that our approach can improve the accuracy with which the gaze point is determined. Compared with other methods, this mapping function is simple and valid.
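
    For reference, the traditional quadratic polynomial mapping that the authors compare against can be calibrated by least squares from pupil-center/screen-point pairs; the sketch below is a generic illustration of that baseline, not the paper's new coaxial model.

    ```python
    import numpy as np

    def fit_quadratic_mapping(pupil_xy, screen_xy):
        """Fit a second-order polynomial mapping from pupil-center coordinates (x, y)
        to screen coordinates, using least squares over calibration points."""
        x, y = pupil_xy[:, 0], pupil_xy[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
        coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)   # shape (6, 2)
        return coeffs

    def map_gaze(coeffs, pupil_xy):
        """Apply the fitted mapping to new pupil centers; returns predicted screen (x, y)."""
        x, y = pupil_xy[:, 0], pupil_xy[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
        return A @ coeffs
    ```

    With nine or more calibration points, fit_quadratic_mapping returns a 6 x 2 coefficient matrix that map_gaze applies to new pupil centers.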

  14. Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography

    PubMed Central

    Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.

    2016-01-01

    Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800

  15. Agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula in the Reykjavik eye study.

    PubMed

    Csutak, A; Lengyel, I; Jonasson, F; Leung, I; Geirsdottir, A; Xing, W; Peto, T

    2010-10-01

    To establish the agreement between image grading of conventional (45°) and ultra wide-angle (200°) digital images in the macula. In 2008, the 12-year follow-up was conducted on 573 participants of the Reykjavik Eye Study. This study included the use of the Optos P200C AF ultra wide-angle laser scanning ophthalmoscope alongside a Zeiss FF 450 conventional digital fundus camera on 121 eyes with or without age-related macular degeneration, classified using the International Classification System. Of these eyes, detailed grading was carried out on five cases each of hard drusen, geographic atrophy and chorioretinal neovascularisation, and six cases of soft drusen. Exact agreement and κ-statistics were calculated. Comparison of the conventional and ultra wide-angle images in the macula showed an overall 96.43% agreement (κ=0.93), with no disagreement at end-stage disease, although in one eye chorioretinal neovascularisation was graded as drusenoid pigment epithelial detachment. For patients with drusen only, the exact agreement was 96.1%. The detailed grading showed no clinically significant disagreement between the conventional 45° and 200° images. On the basis of our results, there is good agreement between grading of conventional and ultra wide-angle images in the macula.
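
    Exact agreement and unweighted Cohen's κ of the kind reported above can be computed from two graders' categorical labels as in the following sketch (a generic illustration; the study's own statistical procedure is not detailed beyond the abstract).

    ```python
    import numpy as np

    def agreement_and_kappa(grades_a, grades_b):
        """Percent exact agreement and unweighted Cohen's kappa for two sets of image grades."""
        a = np.asarray(grades_a)
        b = np.asarray(grades_b)
        categories = np.union1d(a, b)
        p_observed = np.mean(a == b)
        # expected chance agreement from the graders' marginal category frequencies
        p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
        kappa = (p_observed - p_expected) / (1.0 - p_expected)
        return 100.0 * p_observed, kappa
    ```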

  16. The Minnesota Grading System Using Fundus Autofluorescence of Eye Bank Eyes: A Correlation To Age-Related Macular Degeneration (An AOS Thesis)

    PubMed Central

    Olsen, Timothy W.

    2008-01-01

    Purpose To establish a grading system of eye bank eyes using fundus autofluorescence (FAF) and identify a methodology that correlates FAF to age-related macular degeneration (AMD) with clinical correlation to the Age-Related Eye Disease Study (AREDS). Methods Two hundred sixty-two eye bank eyes were evaluated using a standardized analysis of FAF. Measurements were taken with the confocal scanning laser ophthalmoscope (cSLO). First, high-resolution, digital, stereoscopic, color images were obtained and graded according to AREDS criteria. With the neurosensory retina removed, mean FAF values were obtained from cSLO images using software analysis that excludes areas of atrophy and other artifact, generating an FAF value from a grading template. Age and AMD grade were compared to FAF values. An internal fluorescence reference standard was tested. Results Standardization of the cSLO machine demonstrated that reliable data could be acquired after a 1-hour warm-up. Images obtained prior to 1 hour had falsely elevated levels of FAF. In this initial analysis, there was no statistical correlation of age to mean FAF. There was a statistically significant decrease in FAF from AREDS grade 1, 2 to 3, 4 (P < .0001). An internal fluorescent standard may serve as a quantitative reference. Conclusions The Minnesota Grading System (MGS) of FAF (MGS-FAF) establishes a standardized methodology for grading eye bank tissue to quantify FAF compounds in the retinal pigment epithelium and correlate these findings to the AREDS. Future studies could then correlate specific FAF to the aging process, histopathology AMD phenotypes, and other maculopathies, as well as to analyze the biochemistry of autofluorescent fluorophores. PMID:19277247

  17. The Minnesota Grading System using fundus autofluorescence of eye bank eyes: a correlation to age-related macular degeneration (an AOS thesis).

    PubMed

    Olsen, Timothy W

    2008-01-01

    To establish a grading system of eye bank eyes using fundus autofluorescence (FAF) and identify a methodology that correlates FAF to age-related macular degeneration (AMD) with clinical correlation to the Age-Related Eye Disease Study (AREDS). Two hundred sixty-two eye bank eyes were evaluated using a standardized analysis of FAF. Measurements were taken with the confocal scanning laser ophthalmoscope (cSLO). First, high-resolution, digital, stereoscopic, color images were obtained and graded according to AREDS criteria. With the neurosensory retina removed, mean FAF values were obtained from cSLO images using software analysis that excludes areas of atrophy and other artifact, generating an FAF value from a grading template. Age and AMD grade were compared to FAF values. An internal fluorescence reference standard was tested. Standardization of the cSLO machine demonstrated that reliable data could be acquired after a 1-hour warm-up. Images obtained prior to 1 hour had falsely elevated levels of FAF. In this initial analysis, there was no statistical correlation of age to mean FAF. There was a statistically significant decrease in FAF from AREDS grade 1, 2 to 3, 4 (P < .0001). An internal fluorescent standard may serve as a quantitative reference. The Minnesota Grading System (MGS) of FAF (MGS-FAF) establishes a standardized methodology for grading eye bank tissue to quantify FAF compounds in the retinal pigment epithelium and correlate these findings to the AREDS. Future studies could then correlate specific FAF to the aging process, histopathology AMD phenotypes, and other maculopathies, as well as to analyze the biochemistry of autofluorescent fluorophores.

  18. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1 μm swept source phase-variance optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.

    2015-06-01

    We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.

  19. Low bandwidth eye tracker for scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia

    2012-02-01

    The incorporation of adaptive optics into scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, and therefore significantly reduce the imaging time, storage requirements and post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and by using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. In addition, linear, quadratic and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
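
    Of the prediction algorithms mentioned, the Kalman filter variant can be illustrated with a one-dimensional constant-velocity model per axis, as sketched below; the state model, noise parameters and class name are assumptions for illustration, not the implementation evaluated in the paper.

    ```python
    import numpy as np

    class ConstantVelocityKalman:
        """Minimal 1-D constant-velocity Kalman filter for predicting the next eye-position
        sample from noisy frame-to-frame motion estimates (one filter per axis)."""

        def __init__(self, dt, process_var=1.0, meas_var=4.0):
            self.x = np.zeros(2)                            # state: [position, velocity]
            self.P = np.eye(2) * 100.0                      # state covariance
            self.F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
            self.Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                             [dt**3 / 2, dt**2]])
            self.H = np.array([[1.0, 0.0]])                 # only position is observed
            self.R = np.array([[meas_var]])

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[0]                                # predicted position for the tip-tilt mirror

        def update(self, measured_position):
            y = measured_position - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P
    ```

    At each frame the tracker would call predict() to drive the tip-tilt mirror and update() once the motion estimate from the registered frame becomes available.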

  20. Optimized phase mask to realize retro-reflection reduction for optical systems

    NASA Astrophysics Data System (ADS)

    He, Sifeng; Gong, Mali

    2017-10-01

    To counter the threat that active laser detection systems pose to electro-optical devices through the cat-eye effect, this paper puts forward a novel solution for retro-reflection reduction. To meet the dual demands of cat-eye effect reduction and image quality maintenance in electro-optical devices, a symmetric phase mask is obtained from a stationary phase method and a fast Fourier transform algorithm. Then, based on a comparison of the peak normalized cross-correlation (PNCC) between different defocus parameters, the optimal imaging position can be obtained. After modification with the designed phase mask, the cat-eye effect peak intensity can be reduced by two orders of magnitude while maintaining good image quality and a high modulation transfer function (MTF). Furthermore, a practical design example is introduced to demonstrate the feasibility of the proposed approach.

  1. The image-forming mirror in the eye of the scallop

    NASA Astrophysics Data System (ADS)

    Palmer, Benjamin A.; Taylor, Gavin J.; Brumfeld, Vlad; Gur, Dvir; Shemesh, Michal; Elad, Nadav; Osherov, Aya; Oron, Dan; Weiner, Steve; Addadi, Lia

    2017-12-01

    Scallops possess a visual system comprising up to 200 eyes, each containing a concave mirror rather than a lens to focus light. The hierarchical organization of the multilayered mirror is controlled for image formation, from the component guanine crystals at the nanoscale to the complex three-dimensional morphology at the millimeter level. The layered structure of the mirror is tuned to reflect the wavelengths of light penetrating the scallop’s habitat and is tiled with a mosaic of square guanine crystals, which reduces optical aberrations. The mirror forms images on a double-layered retina used for separately imaging the peripheral and central fields of view. The tiled, off-axis mirror of the scallop eye bears a striking resemblance to the segmented mirrors of reflecting telescopes.

  2. Realization of the ergonomics design and automatic control of the fundus cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    The guiding ergonomic principle in fundus camera design is to improve patient comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects the patient's fundus images automatically whether or not the eyes are ametropic. Finally, a moving visual target is developed for expanding the fields of the fundus images.

  3. A compact and lightweight off-axis lightguide prism in near to eye display

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhenfeng; Cheng, Qijia; Surman, Phil; Zheng, Yuanjin; Sun, Xiao Wei

    2017-06-01

    We propose a method to improve the design of an off-axis lightguide configuration for near-to-eye displays (NED) using freeform optics technology. The advantage of this modified optical system, which includes an organic light-emitting diode (OLED) microdisplay, a doublet lens, an imaging lightguide prism and a compensation prism, is that it increases the optical path length, offers a smaller size, avoids obstructed views, and matches the shape of the user's head. In this system, the light emitted from the OLED passes through the doublet lens and is refracted/reflected by the imaging lightguide prism, which magnifies the image from the microdisplay, while the compensation prism corrects the light-ray shift so that a low-distortion image can be observed in a real-world setting. A NED with a 4 mm diameter exit pupil, 21.5° diagonal full field of view (FoV), 23 mm eye relief, and a size of 33 mm by 9.3 mm by 16 mm is designed. The developed system is compact, lightweight and suitable for entertainment and education applications.

  4. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
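
    The abstract does not state which focus score is computed on the NVC eye image; a widely used sharpness metric is the variance of the Laplacian, sketched below as an assumed stand-in for the auto-focus feedback signal.

    ```python
    import numpy as np
    from scipy import ndimage

    def focus_score(eye_image):
        """Variance-of-Laplacian sharpness metric: higher values indicate a better-focused
        eye image and can serve as feedback for an auto-focus loop."""
        lap = ndimage.laplace(eye_image.astype(float))
        return lap.var()
    ```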

  5. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  6. Application of TrackEye in equine locomotion research.

    PubMed

    Drevemo, S; Roepstorff, L; Kallings, P; Johnston, C J

    1993-01-01

    TrackEye is an analysis system, which is applicable for equine biokinematic studies. It covers the whole process from digitizing of images, automatic target tracking and analysis. Key components in the system are an image work station for processing of video images and a high-resolution film-to-video scanner for 16-mm film. A recording module controls the input device and handles the capture of image sequences into a videodisc system, and a tracking module is able to follow reference markers automatically. The system offers a flexible analysis including calculations of markers displacements, distances and joint angles, velocities and accelerations. TrackEye was used to study effects of phenylbutazone on the fetlock and carpal joint angle movements in a horse with a mild lameness caused by osteo-arthritis in the fetlock joint of a forelimb. Significant differences, most evident before treatment, were observed in the minimum fetlock and carpal joint angles when contralateral limbs were compared (p < 0.001). The minimum fetlock angle and the minimum carpal joint angle were significantly greater in the lame limb before treatment compared to those 6, 37 and 49 h after the last treatment (p < 0.001).

  7. ASSOCIATION BETWEEN VISUAL FUNCTION AND SUBRETINAL DRUSENOID DEPOSITS IN NORMAL AND EARLY AGE-RELATED MACULAR DEGENERATION EYES.

    PubMed

    Neely, David; Zarubina, Anna V; Clark, Mark E; Huisingh, Carrie E; Jackson, Gregory R; Zhang, Yuhua; McGwin, Gerald; Curcio, Christine A; Owsley, Cynthia

    2017-07-01

    To examine the association between subretinal drusenoid deposits (SDDs) identified by multimodal retinal imaging and visual function in older eyes with normal macular health or in the earliest phases of age-related macular degeneration (AMD). Age-related macular degeneration status for each eye was defined according to the Age-Related Eye Disease Study (AREDS) 9-step classification system (normal = Step 1, early AMD = Steps 2-4) based on color fundus photographs. Visual functions measured were best-corrected photopic visual acuity, contrast and light sensitivity, mesopic visual acuity, low-luminance deficit, and rod-mediated dark adaptation. Subretinal drusenoid deposits were identified through multimodal imaging (color fundus photographs, infrared reflectance and fundus autofluorescence images, and spectral domain optical coherence tomography). The sample included 1,202 eyes (958 eyes with normal health and 244 eyes with early AMD). In normal eyes, SDDs were not associated with any visual function evaluated. In eyes with early AMD, dark adaptation was markedly delayed in eyes with SDDs versus no SDD (a 4-minute delay on average), P = 0.0213. However, this association diminished after age adjustment, P = 0.2645. Other visual functions in early AMD eyes were not associated with SDDs. In a study specifically focused on eyes in normal macular health and in the earliest phases of AMD, early AMD eyes with SDDs have slower dark adaptation, largely attributable to the older ages of eyes with SDD; they did not exhibit deficits in other visual functions. Subretinal drusenoid deposits in older eyes in normal macular health are not associated with any visual functions evaluated.

  8. DLP™-based dichoptic vision test system

    NASA Astrophysics Data System (ADS)

    Woods, Russell L.; Apfelbaum, Henry L.; Peli, Eli

    2010-01-01

    It can be useful to present a different image to each of the two eyes while they cooperatively view the world. Such dichoptic presentation can occur in investigations of stereoscopic and binocular vision (e.g., strabismus, amblyopia) and vision rehabilitation in clinical and research settings. Various techniques have been used to construct dichoptic displays. The most common and most flexible modern technique uses liquid-crystal (LC) shutters. When used in combination with cathode ray tube (CRT) displays, there is often leakage of light from the image intended for one eye into the view of the other eye. Such interocular crosstalk is 14% even in our state of the art CRT-based dichoptic system. While such crosstalk may have minimal impact on stereo movie or video game experiences, it can defeat clinical and research investigations. We use micromirror digital light processing (DLP™) technology to create a novel dichoptic visual display system with substantially lower interocular crosstalk (0.3% remaining crosstalk comes from the LC shutters). The DLP system normally uses a color wheel to display color images. Our approach is to disable the color wheel, synchronize the display directly to the computer's sync signal, allocate each of the three (former) color presentations to one or both eyes, and open and close the LC shutters in synchrony with those color events.

  9. Pediatric Eye Screening Instrumentation

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Ling; Lewis, J. W. L.

    2001-11-01

    Computational evaluations are presented for binocular eye screening using the off-axis digital retinascope. The retinascope, such as the iScreen digital screening system, has been employed to perform pediatric binocular screening using a flash lamp and single-shot camera recording. The digital images are transferred electronically to a reading center for analysis. The method has been shown to detect refractive error, amblyopia, anisocoria, and ptosis. This computational work improves the performance of the system and forms the basis for automated data analysis. For this purpose, various published eye models are evaluated with simulated retinascope images. Two to ten million rays are traced in each image calculation. The poster presents the simulation results for a range of eye conditions: refractive error of -20 to +20 diopters with 0.5- to 1-diopter resolution, pupil size of 3 to 8 mm diameter (1-mm increments), and staring angle of 2 to 12 degrees (2-degree increments). The variation of the results with system conditions, such as the off-axis distance of the light source and the shutter size of the camera, is also evaluated. A quantitative analysis for each eye and system condition is then performed to obtain parameters for automatic reading. A summary of the system performance is given and performance-enhancing design modifications are presented.

  10. Adaptive optics optical coherence tomography with dynamic retinal tracking

    PubMed Central

    Kocaoglu, Omer P.; Ferguson, R. Daniel; Jonnal, Ravi S.; Liu, Zhuolin; Wang, Qiang; Hammer, Daniel X.; Miller, Donald T.

    2014-01-01

    Adaptive optics optical coherence tomography (AO-OCT) is a highly sensitive and noninvasive method for three-dimensional imaging of the microscopic retina. Like all in vivo retinal imaging techniques, however, it suffers the effects of involuntary eye movements that occur even under normal fixation. In this study we investigated dynamic retinal tracking to measure and correct eye motion at kHz rates for AO-OCT imaging. A customized retina tracking module was integrated into the sample arm of the 2nd-generation Indiana AO-OCT system and images were acquired on three subjects. Analyses were developed based on temporal amplitude and spatial power spectra in conjunction with strip-wise registration to independently measure AO-OCT tracking performance. After optimization of the tracker parameters, the system was found to correct eye movements up to 100 Hz and reduce residual motion to 10 µm root mean square. Between-session precision was 33 µm. Performance was limited by tracker-generated noise at high temporal frequencies. PMID:25071963
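
    Tracking-performance figures such as the 10 µm RMS residual motion can be derived from a residual-displacement trace as in this sketch (illustrative only; the study's full analysis combined temporal amplitude and spatial power spectra with strip-wise registration).

    ```python
    import numpy as np

    def residual_motion_stats(displacements_um, dt):
        """Summarize tracking performance from a residual-displacement trace (micrometers):
        root-mean-square residual motion plus the one-sided temporal amplitude spectrum."""
        d = np.asarray(displacements_um, dtype=float)
        d = d - d.mean()
        rms = np.sqrt(np.mean(d ** 2))
        spectrum = np.abs(np.fft.rfft(d)) / len(d)
        freqs = np.fft.rfftfreq(len(d), dt)
        return rms, freqs, spectrum
    ```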

  11. Anterior segment and retinal OCT imaging with simplified sample arm using focus tunable lens technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Grulkowski, Ireneusz; Karnowski, Karol; Ruminski, Daniel; Wojtkowski, Maciej

    2016-03-01

    The availability of long-depth-range OCT systems enables comprehensive structural imaging of the eye and extraction of biometric parameters characterizing the entire eye. Several approaches have been developed to perform OCT imaging with extended depth ranges. In particular, current SS-OCT technology seems well suited to visualizing both the anterior and posterior eye in a single measurement. The aim of this study is to demonstrate integrated anterior segment and retinal SS-OCT imaging using a single instrument in which the sample arm is equipped with an electrically tunable lens (ETL). The ETL is composed of an optical liquid confined by an elastic polymer membrane. The shape of the membrane, electrically controlled by a specific ring, defines the radius of curvature of the lens surface and thus regulates the power of the lens. The ETL can also be equipped with an additional offset lens to adjust the tuning range of the optical power. We characterize the operation of the tunable lens using wavefront sensing. We develop an optimized optical set-up with two adaptive operational states of the ETL in order to focus the light either on the retina or on the anterior segment of the eye. We test the performance of the set-up using a whole-eye phantom as the object. Finally, we perform in vivo human eye imaging using the SS-OCT instrument, whose versatile imaging functionality accounts for the optics of the eye and enables dynamic control of the optical beam focus.

  12. Matching between the light spots and lenslets of an artificial compound eye system

    NASA Astrophysics Data System (ADS)

    He, Jianzheng; Jian, Huijie; Zhu, Qitao; Ma, Mengchao; Wang, Keyi

    2017-10-01

    As the visual organ of many arthropods, the compound eye has attracted a great deal of attention for its advantages of a wide field of view, multi-channel imaging ability and high agility. Extending this concept, a new kind of artificial compound eye device has been developed. It has 141 lenslets, distributed evenly on a curved surface, that share one image sensor, which makes it difficult to determine which lenslet a given light spot belongs to during the calibration and positioning processes. Therefore, a matching algorithm is proposed based on the device structure and the principles of calibration and positioning. Region partitioning of the lenslet array is performed first: each lenslet and its adjacent lenslets are defined as a cluster eye and organized into an index table. In the calibration process, a polar coordinate system is established, and matching is accomplished by comparing the rotary-table position in the polar coordinate system with the angle of the central light spot in the image. In the positioning process, the spot is first paired to the correct region according to the spot distribution, and the final result is determined by the dispersion of the distances from the target point to the incident rays during region-traversal matching. Finally, the experimental results show that the presented algorithms provide a feasible and efficient way to match spots to lenslets and fully meet the needs of practical applications of the compound eye system.
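
    A much-simplified version of the angle-comparison step could look like the following, assuming each lenslet has a calibrated polar angle and each detected spot an angle measured about the image center; the function name and nearest-angle assignment are illustrative simplifications of the region-based algorithm described above.

    ```python
    import numpy as np

    def match_spots_to_lenslets(spot_angles_deg, lenslet_angles_deg):
        """Pair each detected light spot with the lenslet whose calibrated polar angle is
        closest, returning one lenslet index per spot (nearest-angle assignment)."""
        spots = np.asarray(spot_angles_deg)[:, None]
        lenslets = np.asarray(lenslet_angles_deg)[None, :]
        diff = np.abs((spots - lenslets + 180.0) % 360.0 - 180.0)   # wrap-around angular distance
        return np.argmin(diff, axis=1)
    ```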

  13. Improved high-resolution ultrasonic imaging of the eye.

    PubMed

    Silverman, Ronald H; Ketterling, Jeffrey A; Mamou, Jonathan; Coleman, D Jackson

    2008-01-01

    Currently, virtually all clinical diagnostic ultrasound systems used in ophthalmology are based on fixed-focus, single-element transducers. High-frequency (> or = 20-MHz) transducers introduced to ophthalmology during the last decade have led to improved resolution and diagnostic capabilities for assessment of the anterior segment and the retina. However, single-element transducers are restricted to a small depth of field, limiting their capacity to image the eye as a whole. We fabricated a 20-MHz annular array probe prototype consisting of 5 concentric transducer elements and scanned an ex vivo human eye. Synthetically focused images of the bank eye showed improved depth of field and sensitivity, allowing simultaneous display of the anterior and posterior segments and the full lens contour. This capability may be useful in assessment of vitreoretinal pathologies and investigation of the accommodative mechanism.

  14. DETECTION OF MICROVASCULAR CHANGES IN EYES OF PATIENTS WITH DIABETES BUT NOT CLINICAL DIABETIC RETINOPATHY USING OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY.

    PubMed

    de Carlo, Talisa E; Chin, Adam T; Bonini Filho, Marco A; Adhi, Mehreen; Branchini, Lauren; Salz, David A; Baumal, Caroline R; Crawford, Courtney; Reichel, Elias; Witkin, Andre J; Duker, Jay S; Waheed, Nadia K

    2015-11-01

    To evaluate the ability of optical coherence tomography angiography to detect early microvascular changes in eyes of diabetic individuals without clinical retinopathy. Prospective observational study of 61 eyes of 39 patients with diabetes mellitus and 28 control eyes of 22 age-matched healthy subjects that received imaging using optical coherence tomography angiography between August 2014 and March 2015. Eyes with concomitant retinal, optic nerve, and vitreoretinal interface diseases and/or poor-quality images were excluded. Foveal avascular zone size and irregularity, vessel beading and tortuosity, capillary nonperfusion, and microaneurysm were evaluated. Foveal avascular zone size measured 0.348 mm² (0.1085-0.671) in diabetic eyes and 0.288 mm² (0.07-0.434) in control eyes (P = 0.04). Foveal avascular zone remodeling was seen more often in diabetic than control eyes (36% and 11%, respectively; P = 0.01). Capillary nonperfusion was noted in 21% of diabetic eyes and 4% of control eyes (P = 0.03). Microaneurysms and venous beading were noted in less than 10% of both diabetic and control eyes. Both diabetic and healthy control eyes demonstrated tortuous vessels in 21% and 25% of eyes, respectively. Optical coherence tomography angiography was able to image foveal microvascular changes that were not detected by clinical examination in diabetic eyes. Changes to the foveal avascular zone and capillary nonperfusion were more prevalent in diabetic eyes, whereas vessel tortuosity was observed with a similar frequency in normal and diabetic eyes. Optical coherence tomography angiography may be able to detect diabetic eyes at risk of developing retinopathy and to screen for diabetes quickly and noninvasively before the systemic diagnosis is made.

  15. Remote vs. head-mounted eye-tracking: a comparison using radiologists reading mammograms

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia; Gur, David

    2007-03-01

    Eye position monitoring has been used for decades in Radiology in order to determine how radiologists interpret medical images. Using these devices several discoveries about the perception/decision making process have been made, such as the importance of comparisons of perceived abnormalities with selected areas of the background, the likelihood that a true lesion will attract visual attention early in the reading process, and the finding that most misses attract prolonged visual dwell, often comparable to dwell in the location of reported lesions. However, eye position tracking is a cumbersome process, which often requires the observer to wear a helmet gear which contains the eye tracker per se and a magnetic head tracker, which allows for the computation of head position. Observers tend to complain of fatigue after wearing the gear for a prolonged time. Recently, with the advances made to remote eye-tracking, the use of head-mounted systems seemed destined to become a thing of the past. In this study we evaluated a remote eye tracking system, and compared it to a head-mounted system, as radiologists read a case set of one-view mammograms on a high-resolution display. We compared visual search parameters between the two systems, such as time to hit the location of the lesion for the first time, amount of dwell time in the location of the lesion, total time analyzing the image, etc. We also evaluated the observers' impressions of both systems, and what their perceptions were of the restrictions of each system.

  16. Anterior segment angiography of the normal canine eye: a comparison between indocyanine green and sodium fluorescein.

    PubMed

    Pirie, C G; Alario, A

    2014-03-01

    The objective of this study was to assess and compare indocyanine green (IG) and sodium fluorescein (SF) angiographic findings in the normal canine anterior segment using a digital single lens reflex (dSLR) camera adaptor. Images were obtained from 10 brown-eyed Beagles, free of ocular and systemic disease. All animals received butorphanol (0.2 mg/kg IM), maropitant citrate (1.0 mg/kg SC) and diphenhydramine (2.0 mg/kg SC) 20 min prior to propofol (4 mg/kg IV bolus, 0.2 mg/kg/min continuous rate infusion). Standard color imaging was performed prior to the administration of 0.25% IG (1 mg/kg IV). Imaging was performed using a full spectrum dSLR camera, dSLR camera adaptor, camera lens (Canon 60 mm f/2.8 Macro) and an accessory flash. Images were obtained at a rate of 1/s immediately following IG bolus for 30 s, then at 1, 2, 3, 4 and 5 min. Ten minutes later, 10% SF (20 mg/kg IV) was administered. Imaging was repeated using the same adaptor system and imaging sequence protocol. Arterial, capillary and venous phases were identified during anterior segment IG angiography (ASIGA) and their time sequences were recorded. ASIGA offered improved visualization of the iris vasculature in heavily pigmented eyes compared to anterior segment SF angiography (ASSFA), since visualization of the vascular pattern during ASSFA was not possible due to pigment masking. Leakage of SF was noted in a total of six eyes. The use of IG and SF was not associated with any observed adverse events. The adaptor described here provides a cost-effective alternative to existing imaging systems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Clinical ophthalmic ultrasound improvements

    NASA Technical Reports Server (NTRS)

    Garrison, J. B.; Piro, P. A.

    1981-01-01

    The use of digital synthetic aperture techniques to obtain high-resolution ultrasound images of the eye and orbit was proposed. The parameters of a switched-array configuration that reduces data collection time to a few milliseconds, avoiding problems caused by motion of the eye itself, were established. An assessment of the effects of eye motion on the performance of the system was obtained. The principles of synthetic aperture techniques are discussed and likely applications are considered.

  18. En face projection imaging of the human choroidal layers with tracking SLO and swept source OCT angiography methods

    NASA Astrophysics Data System (ADS)

    Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.

    2015-07-01

    We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
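
    Of the motion-contrast methods compared, speckle variance is the simplest to sketch: the per-pixel intensity variance over repeated B-scans acquired at the same location highlights moving blood. The code below is a generic illustration (without the split-spectrum step), not the authors' processing chain.

    ```python
    import numpy as np

    def speckle_variance(bscans):
        """Speckle-variance angiography: per-pixel intensity variance over N repeated
        B-scans of the same location; `bscans` has shape (N, depth, width)."""
        stack = np.asarray(bscans, dtype=float)
        return stack.var(axis=0)                 # high values indicate moving scatterers (flow)
    ```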

  19. Broadly Applicable Nanowafer Drug Delivery System for Treating Eye Injuries

    DTIC Science & Technology

    2014-09-01

    Intravital laser confocal fluorescence imaging of the live mouse cornea demonstrated drug molecular transport into the cornea and in vivo drug release from the nanowafers after instillation on the mouse eye. Scanning electron microscopy (SEM) images of nanowafers with feature sizes from 500 nm to 3 µm confirmed feature integrity and uniformity.

  20. Identification of cataract and post-cataract surgery optical images using artificial intelligence techniques.

    PubMed

    Acharya, Rajendra Udyavara; Yu, Wenwei; Zhu, Kuanyi; Nayak, Jagadish; Lim, Teik-Cheng; Chan, Joey Yiptong

    2010-08-01

    The human eye is a highly sophisticated organ, with perfect and interrelated subsystems such as the retina, pupil, iris, cornea, lens and optic nerve. Cataract is a major eye disorder and health problem in old age. A cataract forms through clouding of the lens, which is painless and develops slowly over a long period. Cataract slowly diminishes vision, eventually leading to blindness. It is most common around the age of 65, when one third of people of this age worldwide have a cataract in one or both eyes. A system is proposed, using artificial intelligence techniques, for detecting cataract and testing the efficacy of post-cataract surgery from optical images. Image processing and a fuzzy K-means clustering algorithm are applied to the raw optical images to extract features specific to the three classes to be classified. The backpropagation algorithm (BPA) is then used for classification. In this work, we used 140 optical images belonging to the three classes. The ANN classifier showed an average classification rate of 93.3% in detecting normal, cataract and post-cataract optical images. The proposed system exhibited 98% sensitivity and 100% specificity, which indicates that the results are clinically significant. This system can also be used to test the efficacy of cataract surgery by evaluating post-cataract surgery optical images.
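
    A minimal NumPy sketch of fuzzy K-means (fuzzy c-means) clustering of the kind used for feature extraction is given below; the cluster count, fuzzifier m and iteration budget are illustrative assumptions, and the resulting soft membership matrix could then feed a backpropagation classifier.

    ```python
    import numpy as np

    def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy c-means: returns cluster centers and the soft membership matrix U
        (n_samples x n_clusters) for feature matrix X (n_samples x n_features)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), n_clusters))
        U /= U.sum(axis=1, keepdims=True)                      # memberships sum to 1 per sample
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]     # weighted cluster centers
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            U = 1.0 / (dist ** (2.0 / (m - 1.0)))              # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U
    ```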

  1. Optical Coherence Tomography Angiography of Pigmented Paravenous Retinochoroidal Atrophy.

    PubMed

    Cicinelli, Maria Vittoria; Giuffrè, Chiara; Rabiolo, Alessandro; Parodi, Maurizio Battaglia; Bandello, Francesco

    2018-05-01

    A 58-year-old man with bilateral pigmented paravenous retinochoroidal atrophy (PPRCA) associated with macular coloboma in the right eye underwent color fundus photography and fundus autofluorescence with the California ultra-widefield retinal imaging system (Optos, Dunfermline, UK), spectral-domain optical coherence tomography (SD-OCT) (Heidelberg Spectralis HRA + OCT; Heidelberg Engineering, Heidelberg, Germany), and en face OCT angiography (OCTA) (AngioPlex, Cirrus HD-OCT 5000; Carl Zeiss Meditec, Dublin, CA). The patient presented with a visual acuity of counting fingers in the right eye and 20/32 in the left eye. Fundus examination and SD-OCT showed typical PPRCA alterations in both eyes and a macular coloboma in the right eye. The OCTA showed relative sparing of the retinal capillary plexuses, with diffuse defects in the choriocapillaris. The authors concluded that OCTA imaging of PPRCA offers further insight into the pathogenesis of this disease, showing that it primarily affects the choroidal vascular network, with relative sparing of the retinal vasculature. [Ophthalmic Surg Lasers Imaging Retina. 2018;49:381-383.]. Copyright 2018, SLACK Incorporated.

  2. On-demand stereoscopic 3D displays for avionic and military applications

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri; Lu, Kanghua; Larson, Brent; Schmidt, John; Cupero, Frank

    2010-04-01

    High speed AM LCD flat panels are evaluated for use in Field Sequential Stereoscopic (FSS) 3D displays for military and avionic applications. A 120 Hz AM LCD is used in field-sequential mode for constructing eyewear-based as well as autostereoscopic 3D display demonstrators for test and evaluation. The COTS eyewear-based system uses shutter glasses to control left-eye/right-eye images. The autostereoscopic system uses a custom backlight to generate illuminating pupils for left and right eyes. It is driven in synchronization with the images on the LCD. Both displays provide 3D effect in full-color and full-resolution in the AM LCD flat panel. We have realized luminance greater than 200 fL in 3D mode with the autostereoscopic system for sunlight readability. The characterization results and performance attributes of both systems are described.

  3. VisualEyes: a modular software system for oculomotor experimentation.

    PubMed

    Guo, Yi; Kim, Eun H; Kim, Eun; Alvarez, Tara; Alvarez, Tara L

    2011-03-25

    Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, development of a platform to present stimuli and store eye movement responses can require substantial programming, time and cost. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System is discussed in three parts: 1) the oculomotor recording device to acquire eye movement responses, 2) the VisualEyes software, written in LabVIEW, to generate an array of stimuli and store responses as text files, and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli such as saccadic steps, vergence ramps and vergence steps, with the corresponding responses, are shown. In this video report, we demonstrate the flexibility of a system to create numerous visual stimuli and record eye movements that can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.

  4. Analysis of the Origin of Atypical Scanning Laser Polarimetry Patterns by Polarization-Sensitive Optical Coherence Tomography

    PubMed Central

    Götzinger, Erich; Pircher, Michael; Baumann, Bernhard; Hirn, Cornelia; Vass, Clemens; Hitzenberger, Christoph K.

    2010-01-01

    Purpose To analyze the physical origin of atypical scanning laser polarimetry (SLP) patterns. To compare polarization-sensitive optical coherence tomography (PS-OCT) scans to SLP images. To present a method to obtain pseudo-SLP images by PS-OCT that are free of atypical artifacts. Methods Forty-one eyes of healthy subjects, subjects with suspected glaucoma, and patients with glaucoma were imaged by SLP (GDx VCC) and a prototype spectral domain PS-OCT system. The PS-OCT system acquires three-dimensional (3D) datasets of intensity, retardation, and optic axis orientation simultaneously within 3 seconds. B-scans of intensity and retardation and en face maps of retinal nerve fiber layer (RNFL) retardation were derived from the 3D PS-OCT datasets. Results were compared with those obtained by SLP. Results Twenty-two eyes showed atypical retardation patterns, and 19 eyes showed normal patterns. From the 22 atypical eyes, 15 showed atypical patterns in both imaging modalities, five were atypical only in SLP images, and two were atypical only in PS-OCT images. In most (15 of 22) atypical cases, an increased penetration of the probing beam into the birefringent sclera was identified as the source of atypical patterns. In such cases, the artifacts could be eliminated in PS-OCT images by depth segmentation and exclusion of scleral signals. Conclusions PS-OCT provides deeper insight into the contribution of different fundus layers to SLP images. Increased light penetration into the sclera can distort SLP retardation patterns of the RNFL. PMID:19036999

  5. Integrated photoacoustic microscopy, optical coherence tomography, and fluorescence microscopy for multimodal chorioretinal imaging

    NASA Astrophysics Data System (ADS)

    Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.

    2018-02-01

    Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results in rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual blood vessels of the eye using a laser exposure dose of 80 nJ, well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. This novel multimodal imaging platform holds great promise for ophthalmic imaging.

  6. Analysis and design of a refractive virtual image system

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M.

    1977-01-01

    The optical performance of a virtual image display system is evaluated. Observation of a two-element (unachromatized doublet) refractive system led to the conclusion that the major source of image degradation was lateral chromatic aberration. This conclusion was verified by computer analysis of the system. The lateral chromatic aberration is given in terms of the resolution of the phosphor dots on a standard shadow mask color cathode ray tube. Single wavelength considerations include: astigmatism, apparent image distance from the observer, binocular disparities and differences of angular magnification of the images presented to each of the observer's eyes. Where practical, these results are related to the performance of the human eye. All these techniques are applied to the previously mentioned doublet and a triplet refractive system. The triplet provides a 50-percent reduction in lateral chromatic aberration which was the design goal. Distortion was also reduced to a minimum over the field of view. The methods used in the design of the triplet are presented along with a method of relating classical aberration curves to image distance and binocular disparity.

  7. An optimized adaptive optics experimental setup for in vivo retinal imaging

    NASA Astrophysics Data System (ADS)

    Balderas-Mata, S. E.; Valdivieso González, L. G.; Ramírez Zavaleta, G.; López Olazagasti, E.; Tepichin Rodriguez, E.

    2012-10-01

    The use of Adaptive Optics (AO) in ophthalmologic instruments to image human retinas has been proven to improve the lateral imaging resolution by correcting both static and dynamic aberrations inherent in human eyes. Typically, the configuration of the AO arm uses an infrared beam from a superluminescent diode (SLD), which is focused on the retina and acts as a point source. The back-reflected light emerges through the eye's optical system, carrying with it the aberrations of the cornea. The aberrated wavefront is measured with a Shack-Hartmann wavefront sensor (SHWFS). However, the aberrations in the optical imaging system can reduce the performance of the wavefront correction. The aim of this work is to present an optimized first-stage AO experimental setup for in vivo retinal imaging. In our proposal, the imaging optical system has been designed to reduce the spherical aberrations due to the lenses. The ANSI standard is followed, ensuring safe power levels. The performance of the system will be compared with a commercial aberrometer. This system will be used as the AO arm of a flood-illuminated fundus camera system for retinal imaging. We present preliminary experimental results showing the enhancement.

  8. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  9. A Novel, Real-Time, In Vivo Mouse Retinal Imaging System.

    PubMed

    Butler, Mark C; Sullivan, Jack M

    2015-11-01

    To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination, due to off-axis illumination and poor optimization of the optical train. Therefore, we designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control the spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although the field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated by eye positioning in order to observe the entire retina. The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. A novel device is established for real-time viewing and image capture of the small-animal retina during subretinal injections for preclinical gene therapy studies.

  10. In Vivo Imaging of the Human Retinal Pigment Epithelial Mosaic Using Adaptive Optics Enhanced Indocyanine Green Ophthalmoscopy

    PubMed Central

    Tam, Johnny; Liu, Jianfei; Dubra, Alfredo; Fariss, Robert

    2016-01-01

    Purpose The purpose of this study was to establish that retinal pigment epithelial (RPE) cells take up indocyanine green (ICG) dye following systemic injection and that adaptive optics enhanced indocyanine green ophthalmoscopy (AO-ICG) enables direct visualization of the RPE mosaic in the living human eye. Methods A customized adaptive optics scanning light ophthalmoscope (AOSLO) was used to acquire high-resolution retinal fluorescence images of residual ICG dye in human subjects after intravenous injection at the standard clinical dose. Simultaneously, multimodal AOSLO images were also acquired, which included confocal reflectance, nonconfocal split detection, and darkfield. Imaging was performed in 6 eyes of three healthy subjects with no history of ocular or systemic diseases. In addition, histologic studies in mice were carried out. Results The AO-ICG channel successfully resolved individual RPE cells in human subjects at various time points, including 20 minutes and 2 hours after dye administration. Adaptive optics-ICG images of RPE revealed detail which could be correlated with AO dark-field images of the same cells. Interestingly, there was a marked heterogeneity in the fluorescence of individual RPE cells. Confirmatory histologic studies in mice corroborated the specific uptake of ICG by the RPE layer at a late time point after systemic ICG injection. Conclusions Adaptive optics-enhanced imaging of ICG dye provides a novel way to visualize and assess the RPE mosaic in the living human eye alongside images of the overlying photoreceptors and other cells. PMID:27564519

  11. In Vivo Imaging of the Human Retinal Pigment Epithelial Mosaic Using Adaptive Optics Enhanced Indocyanine Green Ophthalmoscopy.

    PubMed

    Tam, Johnny; Liu, Jianfei; Dubra, Alfredo; Fariss, Robert

    2016-08-01

    The purpose of this study was to establish that retinal pigment epithelial (RPE) cells take up indocyanine green (ICG) dye following systemic injection and that adaptive optics enhanced indocyanine green ophthalmoscopy (AO-ICG) enables direct visualization of the RPE mosaic in the living human eye. A customized adaptive optics scanning light ophthalmoscope (AOSLO) was used to acquire high-resolution retinal fluorescence images of residual ICG dye in human subjects after intravenous injection at the standard clinical dose. Simultaneously, multimodal AOSLO images were also acquired, which included confocal reflectance, nonconfocal split detection, and darkfield. Imaging was performed in 6 eyes of three healthy subjects with no history of ocular or systemic diseases. In addition, histologic studies in mice were carried out. The AO-ICG channel successfully resolved individual RPE cells in human subjects at various time points, including 20 minutes and 2 hours after dye administration. Adaptive optics-ICG images of RPE revealed detail which could be correlated with AO dark-field images of the same cells. Interestingly, there was a marked heterogeneity in the fluorescence of individual RPE cells. Confirmatory histologic studies in mice corroborated the specific uptake of ICG by the RPE layer at a late time point after systemic ICG injection. Adaptive optics-enhanced imaging of ICG dye provides a novel way to visualize and assess the RPE mosaic in the living human eye alongside images of the overlying photoreceptors and other cells.

  12. Course for undergraduate students: analysis of the retinal image quality of a human eye model

    NASA Astrophysics Data System (ADS)

    del Mar Pérez, Maria; Yebra, Ana; Fernández-Oliveras, Alicia; Ghinea, Razvan; Ionescu, Ana M.; Cardona, Juan C.

    2014-07-01

    In the teaching of Vision Physics or Physiological Optics, knowledge and analysis of the aberrations that the human eye presents are of great interest, since this information allows a proper evaluation of the quality of the retinal image. The objective of the present work is for students to acquire the competencies required to evaluate the optical quality of the human visual system for emmetropic and ametropic eyes, both with and without optical compensation. For this purpose, an optical system corresponding to the Navarro-Escudero eye model, which allows calculating and evaluating the aberrations of this eye model under different ametropic conditions, was developed employing the OSLO LT software. The optical quality of the visual system will be assessed through determination of the third- and fifth-order aberration coefficients, the spot diagram, wavefront analysis, and calculation of the Point Spread Function and the Modulation Transfer Function for ametropic individuals with myopia or hyperopia, both with and without optical compensation. This course is expected to be of great interest for students of Optics and Optometry, of the final years of Physics, or of medical sciences related to human vision.

  13. High quality optical microangiography of ocular microcirculation and measurement of total retinal blood flow in mouse eye

    NASA Astrophysics Data System (ADS)

    Zhi, Zhongwei; Yin, Xin; Dziennis, Suzan; Alpers, Charles E.; Wang, Ruikang K.

    2013-03-01

    Visualization and measurement of retinal blood flow (RBF) is important to the diagnosis and management of different eye diseases, including diabetic retinopathy. Optical microangiography (OMAG) was developed for generating 3D images of dynamic microcirculation and was later refined into ultra-high-sensitivity OMAG (UHS-OMAG) for true capillary-level imaging. Here, we present the application of the OMAG imaging technique for visualization of the depth-resolved vascular network within the retina and choroid, as well as measurement of total retinal blood flow in mice. A high-speed spectral-domain OCT imaging system at 820 nm with a line scan rate of 140 kHz was developed to image the mouse posterior eye. By applying the UHS-OMAG scanning protocol and processing algorithm, we achieved true capillary-level imaging of the retinal and choroidal vasculature in the mouse eye. The vascular pattern within different retinal layers and the choroid is presented. An en face Doppler OCT approach [1], which does not require knowledge of the Doppler angle, was adopted for the measurement of total retinal blood flow: the axial blood flow velocity is measured in an en face plane by raster scanning, and the flow is calculated by integrating over the vessel area of the central retinal artery.
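
    For readers unfamiliar with the en face Doppler approach mentioned above, the sketch below gives the basic angle-independent relation it relies on; the notation is ours and is not taken from the cited paper.

```latex
% Angle-independent total flow from en face Doppler OCT (our notation):
%   F   total volumetric flow through the en face plane
%   v_z axial (Doppler-measured) velocity component
%   A   vessel area cut by the en face plane
\[
  F \;=\; \iint_{A} v_z(x,y)\,\mathrm{d}x\,\mathrm{d}y .
\]
% The unknown Doppler angle cancels: v_z = |v|\cos\theta, while the en face
% cut of the vessel has area A = A_\perp / \cos\theta, so
\[
  |v|\cos\theta \cdot \frac{A_{\perp}}{\cos\theta} \;=\; |v|\,A_{\perp},
\]
% i.e. the integral equals the true flow regardless of vessel orientation.
```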

  14. Quality of image of grating target placed in model of human eye with corneal aberrations as observed through multifocal intraocular lenses.

    PubMed

    Inoue, Makoto; Noda, Toru; Mihashi, Toshifumi; Ohnuma, Kazuhiko; Bissen-Miyajima, Hiroko; Hirakata, Akito

    2011-04-01

    To evaluate the quality of the image of a grating target placed in a model eye viewed through multifocal intraocular lenses (IOLs). Laboratory investigation. Refractive (NXG1 or PY60MV) or diffractive (ZM900 or SA60D3) multifocal IOLs were placed in a fluid-filled model eye with human corneal aberrations. A United States Air Force resolution target was placed on the posterior surface of the model eye. A flat contact lens or a wide-field contact lens was placed on the cornea. The contrasts of the gratings were evaluated under endoillumination and compared to those obtained through a monofocal IOL. The grating images were clear when viewed through the flat contact lens and through the central far-vision zone of the NXG1 and PY60MV, although those through the near-vision zone were blurred and doubled. The images observed through the central area of the ZM900 with the flat contact lens were slightly defocused, but the images in the periphery were very blurred. The contrast decreased significantly at low spatial frequencies (P<.001). The images observed through the central diffractive zone of the SA60D3 were slightly blurred, although the images in the periphery were clearer than those of the ZM900. The images were less blurred in all of the refractive and diffractive IOLs with the wide-field contact lens. Refractive and diffractive multifocal IOLs blur the grating target, but less so with the wide-angle viewing system. The peripheral multifocal optical zone may have a greater influence on image quality with the contact lens system. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Towards eye-safe standoff Raman imaging systems

    NASA Astrophysics Data System (ADS)

    Glimtoft, Martin; Bââth, Petra; Saari, Heikki; Mäkynen, Jussi; Näsilä, Antti; Östmark, Henric

    2014-05-01

    Standoff Raman imaging systems have shown the ability to detect single explosives particles. However, in many cases the laser intensities needed restrict the applications where they can be safely used. A new-generation imaging Raman system has been developed based on a 355 nm UV laser that, in addition to eye safety, allows discrete and invisible measurements. Non-dangerous exposure levels for the eye are several orders of magnitude higher in the UVA than in the visible range that was previously used. The UV Raman system has been built around a UV Fabry-Perot interferometer (UV-FPI) developed by VTT. The design allows for precise selection of Raman shifts in combination with high out-of-band blocking. Stable operation of the UV-FPI module under varying environmental conditions is arranged by controlling the temperature of the module and using closed-loop control of the FPI air gap based on capacitive measurement. The system presented consists of a 3rd-harmonic Nd:YAG laser with 1.5 W average output at 1000 Hz, a 200 mm Schmidt-Cassegrain telescope, the UV-FPI filter, and an ICCD camera for signal gating and detection. The design principle yields a Raman spectrum in each image pixel. The system is designed for field use and easy manoeuvring. Preliminary results show that in measurements of <60 s at a 10 m distance, single AN particles of <300 μm diameter can be identified.
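
    As background to the Raman-shift selection performed by the UV-FPI, the standard conversion between excitation and scattered wavelengths and Raman shift is sketched below; the numerical example is illustrative and not taken from the paper.

```latex
% Wavelength-to-Raman-shift conversion (standard relation, wavelengths in nm,
% shift in cm^{-1}); not specific to the system described above.
\[
  \Delta\tilde{\nu} \;=\; 10^{7}\!\left(\frac{1}{\lambda_{\mathrm{ex}}}
                          - \frac{1}{\lambda_{\mathrm{s}}}\right)
\]
% Example: with 355 nm excitation, a scattered wavelength of about 368 nm
% corresponds to a Raman shift of roughly 1000 cm^{-1}.
```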

  16. Simultaneous SLO/OCT imaging of the human retina with axial eye motion correction.

    PubMed

    Pircher, Michael; Baumann, Bernhard; Götzinger, Erich; Sattmann, Harald; Hitzenberger, Christoph K

    2007-12-10

    It has been shown that transversal scanning (or en-face) optical coherence tomography (TS-OCT) is an imaging modality capable of recording high isotropic resolution images of the human retina in vivo. However, axial eye motion remains a challenging problem for this technique. In this paper we introduce a novel method to compensate for this eye motion. An auxiliary spectral domain partial coherence interferometer (SD-PCI) was integrated into an existing TS-OCT system and used to accurately measure the position of the cornea. A light source emitting at 1310 nm was used in the additional interferometer, which enabled nearly loss-free coupling of the two measurement beams via a dichroic mirror. The recorded corneal position was used to drive an additional voice coil translation stage in the reference arm of the TS-OCT system to correct for axial eye motion. Currently, the correction can be performed with an update rate of ~200 Hz. The TS-OCT instrument is operated with a line scan rate of 4000 transversal lines per second, which enables simultaneous SLO/OCT imaging at a frame rate of 40 fps. 3D data of the human retina with high isotropic resolution, sufficient to visualize the human cone mosaic in vivo, are presented.
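
    The closed-loop correction described above can be pictured with the following minimal sketch, in which an auxiliary position measurement drives a reference-arm stage at roughly the reported 200 Hz update rate; the hardware-access callables are hypothetical placeholders, not the authors' software.

```python
# Minimal sketch of the kind of closed-loop axial correction described above:
# an auxiliary interferometric measurement of corneal position drives a
# reference-arm stage at roughly the reported ~200 Hz update rate.  The
# hardware-access callables are hypothetical placeholders, not the authors'
# software.
import math
import time

UPDATE_RATE_HZ = 200.0          # update rate quoted in the abstract
GAIN = 1.0                      # 1.0 = cancel the measured displacement exactly

def correction_loop(read_corneal_position, move_stage_to, duration_s=1.0):
    """Continuously counter-move the reference arm against axial eye motion."""
    baseline = read_corneal_position()                     # starting position (mm)
    t_end = time.time() + duration_s
    while time.time() < t_end:
        displacement = read_corneal_position() - baseline  # axial eye motion (mm)
        move_stage_to(GAIN * displacement)                 # compensate in reference arm
        time.sleep(1.0 / UPDATE_RATE_HZ)

# Example with simulated hardware: a slow sinusoidal axial drift of +/-0.05 mm
t0 = time.time()
fake_sensor = lambda: 0.05 * math.sin(2 * math.pi * (time.time() - t0))
commands = []
correction_loop(fake_sensor, commands.append, duration_s=0.1)
print(f"{len(commands)} corrections issued")
```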

  17. Versatile optical coherence tomography for imaging the human eye

    PubMed Central

    Tao, Aizhu; Shao, Yilei; Zhong, Jianguang; Jiang, Hong; Shen, Meixiao; Wang, Jianhua

    2013-01-01

    We demonstrated the feasibility of a CMOS-based spectral domain OCT (SD-OCT) for versatile ophthalmic applications: imaging the corneal epithelium, limbus, ocular surface, contact lens, crystalline lens, retina, and full eye in vivo. The system was based on a single spectrometer and an alternating reference arm with four mirrors. A galvanometer scanner was used to switch the reference beam among the four mirrors, depending on the imaging application. An axial resolution of 7.7 μm in air, a scan depth of up to 37.7 mm in air, and a scan speed of up to 70,000 A-lines per second were achieved. The approach has the capability to provide high-resolution imaging of the corneal epithelium, contact lens, ocular surface, and tear meniscus. Using two reference mirrors, the zero delay lines were alternately placed on the front cornea or on the back lens. The entire ocular anterior segment was imaged by registering and overlapping the two images. The full eye through the pupil was measured when the reference arm was switched among the four reference mirrors. After mounting a 60 D lens in the sample arm, this SD-OCT was used to image the retina, including the macula and optic nerve head. This system demonstrates versatility and simplicity for multi-purpose ophthalmic applications. PMID:23847729

  18. Retina imaging system with adaptive optics for the eye with or without myopia

    NASA Astrophysics Data System (ADS)

    Li, Chao; Xia, Mingliang; Jiang, Baoguang; Mu, Quanquan; Chen, Shaoyuan; Xuan, Li

    2009-04-01

    An adaptive optics system for retinal imaging is introduced in this paper. It can be applied to eyes with myopia from 0 to 6 diopters without any adjustment of the system. A high-resolution liquid crystal on silicon (LCOS) device is used as the wave-front corrector. The aberration is detected by a Shack-Hartmann wave-front sensor (HASO) that has a root mean square (RMS) measurement accuracy of λ/100 (λ = 0.633 μm). An equivalent-scale model eye is constructed with a short focal length lens (~18 mm) and a diffuse reflection object (paper screen) as the retina. By changing the distance between the paper screen and the lens, we simulate eyes with myopia larger than 5 diopters and the depth of field. The RMS value both before and after correction is obtained by the wave-front sensor. After correction, the system reaches the diffraction-limited resolution of approximately 230 cycles/mm at the object space. It is shown that if the myopia is smaller than 6 diopters and the depth of field is between -40 and +50 mm, the system can correct the aberration very well.

  19. Advanced endoscopic imaging to improve adenoma detection

    PubMed Central

    Neumann, Helmut; Nägel, Andreas; Buda, Andrea

    2015-01-01

    Advanced endoscopic imaging is revolutionizing how we diagnose and treat colorectal lesions. In recent years a variety of modern endoscopic imaging techniques has been introduced to improve adenoma detection rates. These include high-definition imaging, dye-less chromoendoscopy techniques and novel, highly flexible endoscopes, some of them equipped with balloons or multiple lenses in order to improve adenoma detection rates. In this review we focus on the newest developments in the field of colonoscopic imaging to improve adenoma detection rates. Described techniques include high-definition imaging, optical chromoendoscopy techniques, virtual chromoendoscopy techniques, the Third Eye Retroscope and other retroviewing devices, the G-EYE endoscope and the Full Spectrum Endoscopy system. PMID:25789092

  20. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    NASA Astrophysics Data System (ADS)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is essential to avoid severe outcomes. Image processing of retinal images has emerged as a feasible tool for this early diagnosis. Image classification is a significant digital image processing technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are a widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed against a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of these performance measures.
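
    As an illustration of the kind of bi-level RBF classifier described, the following sketch builds a minimal RBF network (k-means centers, Gaussian hidden units, least-squares output weights) on synthetic features; it is not the authors' implementation, and the feature vectors are placeholders for real retinal-image features.

```python
# Illustrative bi-level RBF-network classifier of the general kind described
# above (not the authors' implementation): k-means picks the RBF centers,
# Gaussian hidden units compute activations, and linear output weights are
# fit by least squares.  The feature matrix X is a synthetic placeholder.
import numpy as np
from sklearn.cluster import KMeans

class RBFNetwork:
    def __init__(self, n_centers=15, sigma=0.5):
        self.n_centers, self.sigma = n_centers, sigma

    def _activations(self, X):
        # Gaussian activation of every sample with respect to every center
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
        self.centers_ = km.cluster_centers_
        H = self._activations(X)
        # least-squares output weights for the binary labels (0 = normal, 1 = DR)
        self.w_, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
        return self

    def predict(self, X):
        return (self._activations(X) @ self.w_ > 0.5).astype(int)

# Usage on synthetic data standing in for extracted retinal features
X = np.random.rand(200, 8)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
model = RBFNetwork().fit(X, y)
print("training accuracy:", (model.predict(X) == y).mean())
```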

  1. MULTIMODAL IMAGING OF CHOROIDAL LESIONS IN DISSEMINATED MYCOBACTERIUM CHIMAERA INFECTION AFTER CARDIOTHORACIC SURGERY.

    PubMed

    Böni, Christian; Al-Sheikh, Mayss; Hasse, Barbara; Eberhard, Roman; Kohler, Philipp; Hasler, Pascal; Erb, Stefan; Hoffmann, Matthias; Barthelmes, Daniel; Zweifel, Sandrine A

    2017-12-04

    To explore the morphologic characteristics of choroidal lesions in patients with disseminated Mycobacterium chimaera infection subsequent to open-heart surgery. Nine patients (18 eyes) with systemic M. chimaera infection were reviewed. Activity of choroidal lesions was evaluated using biomicroscopy, fundus autofluorescence, enhanced depth imaging optical coherence tomography, fluorescein angiography/indocyanine green angiography, and optical coherence tomography angiography. Relationships of choroidal findings to systemic disease activity were sought. All 9 male patients, aged between 49 and 66 years, were diagnosed with endocarditis and/or aortic graft infection. Mean follow-up was 17.6 months. Four patients had only inactive lesions (mild disease). In all five patients (10 eyes) with progressive ocular disease, indocyanine green angiography was superior to other tests for revealing new lesions, and active lesions correlated with hyporeflective choroidal areas on enhanced depth imaging optical coherence tomography. One eye with a large choroidal granuloma developed choroidal neovascularization. Optical coherence tomography angiography showed areas of reduced perfusion in the inner choroid. All 5 patients with progressive ocular disease had evidence of systemic disease activity within ±6 weeks. Choroidal manifestation of disseminated M. chimaera infection indicates systemic disease activity. Multimodal imaging is suitable for recognizing progressive ocular disease. We propose ophthalmologic screening examinations for patients with M. chimaera infection.

  2. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of the study is to test both customer image quality ratings (subjective image quality) and physical measurements of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. A methodological aim is to find out whether eye movements could be used quantitatively in image quality preference studies. In general, we want to map objective, physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality; for example, saccade duration increased with increasing blur. Results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. Results also show that eye movements would help in mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.

  3. Microfabricated ommatidia using a laser induced self-writing process for high resolution artificial compound eye optical systems.

    PubMed

    Jung, Hyukjin; Jeong, Ki-Hun

    2009-08-17

    A microfabricated compound eye, comparable to a natural compound eye, shows a spherical arrangement of integrated optical units called artificial ommatidia, each consisting of a self-aligned microlens and waveguide. Increasing the waveguide length is imperative to obtain high resolution images through an artificial compound eye for wide field-of-view imaging as well as fast motion detection. This work presents an effective method for increasing the waveguide length of an artificial ommatidium using a laser-induced self-writing process in a photosensitive polymer resin. The numerical and experimental results show the uniform formation of waveguides and an increase in waveguide length to over 850 μm. (c) 2009 Optical Society of America

  4. Influence of changes in an eye's optical system on refraction

    NASA Astrophysics Data System (ADS)

    Bartkowska, Janina

    1998-10-01

    The optical system of the eye is composed of the cornea, lens, anterior chamber, and vitreous body. In the standard schematic eye there are 6 refracting surfaces. Changes in the radii of curvature, in the distances between the surfaces, and in the refractive indices influence the ametropia, the refractive power of the eye, and the retinal image size. The influence of these changes can be appreciated by ray tracing or by an analytical method. Simplified formulae are presented for the differentials of ametropia and of the refractive power of the eye with respect to the surface curvatures, the refracting powers of the cornea and lens, and the refractive indices. The relations are also valid for larger changes if ametropia is measured at the cornea vertex. The formulae for the differentials with respect to distances, lens translation, and eye axis length are valid if ametropia is measured at the object focus of the eye.
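
    As a hedged illustration of such differential relations, the reduced-eye (single-surface) case is worked below; this is a simplification of, not a substitute for, the six-surface schematic-eye formulae discussed in the paper.

```latex
% Reduced-eye illustration (single refracting surface), not the six-surface
% schematic eye of the paper.  With eye power F, vitreous index n' and axial
% length L, the ametropia A referred to the corneal vertex is
\[
  A \;=\; \frac{n'}{L} - F ,
  \qquad
  \frac{\partial A}{\partial L} \;=\; -\frac{n'}{L^{2}} .
\]
% With n' = 1.336 and L = 22.2 mm this gives |dA/dL| \approx 2.7 D per mm of
% axial elongation (a myopic shift), the familiar order of magnitude.
```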

  5. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837

  6. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes.

    PubMed

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-10-22

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  7. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual cues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  8. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.

    PubMed

    Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina

    2016-12-01

    Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection system for diabetic retinopathy and maculopathy in eye fundus images employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including localisation of four retinal structures, feature extraction, and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods is implemented in the proposed system. The paper also presents a novel technique for localising the macula region in order to detect maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset, presenting the dataset collection, the expert diagnosis process, and the advantages of this online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
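
    Since the abstract names the Circular Hough Transform as one of its building blocks, a generic OpenCV sketch of circular-structure localisation (for example, of the optic disc) is given below; it is illustrative only, and the file name and radius range are placeholder assumptions rather than the authors' settings.

```python
# Generic Circular Hough Transform sketch (OpenCV) for localising a roughly
# circular retinal structure such as the optic disc.  Not the authors'
# pipeline; "fundus.png" and the radius range are placeholder assumptions.
import cv2
import numpy as np

img = cv2.imread("fundus.png")                       # hypothetical fundus image
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=30, minRadius=40, maxRadius=90)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)    # strongest detected circle
    cv2.circle(img, (x, y), r, (0, 255, 0), 2)       # mark the candidate disc
    cv2.imwrite("fundus_disc.png", img)
```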

  9. Simultaneous hand-held contact color fundus and SD-OCT imaging for pediatric retinal diseases (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ruggeri, Marco; Hernandez, Victor; De Freitas, Carolina; Relhan, Nidhi; Silgado, Juan; Manns, Fabrice; Parel, Jean-Marie

    2016-03-01

    Hand-held wide-field contact color fundus photography is currently the standard method for acquiring diagnostic images of children during examination under anesthesia and in the neonatal intensive care unit. The recent development of portable non-contact hand-held OCT retinal imaging systems has proved that OCT is of tremendous help in complementing fundus photography in the management of pediatric patients. Currently, there is no commercial or research system that combines color wide-field digital fundus and OCT imaging in a contact fashion. Contact between the probe and the cornea has the advantages of reducing motion experienced by the photographer during imaging and providing fundus and OCT images with a wider field of view that includes the periphery of the retina. In this study we provide proof of concept for a contact-type hand-held unit for simultaneous color fundus and OCT live view of the retina of pediatric patients. The front piece of the hand-held unit consists of a contact ophthalmoscopy lens integrating a circular light guide that was recovered from a digital fundus camera for pediatric imaging. The custom-made rear piece consists of the optics to: 1) fold the visible aerial image of the fundus generated by the ophthalmoscopy lens onto a miniaturized board-level digital color camera; 2) conjugate the eye pupil to the galvanometric scanning mirrors of an OCT delivery system. Wide-field color fundus and OCT images were simultaneously obtained in an eye model and sequentially obtained on the eye of a conscious 25-year-old human subject with a healthy retina.

  10. The evolution of lenses.

    PubMed

    Land, Michael F

    2012-11-01

    Structures which bend light and so form images are present in all the major phyla. Lenses with a graded refractive index, and hence reduced spherical aberration, evolved in the vertebrates, arthropods, annelid worms, and several times in the molluscs. Even cubozoan jellyfish have lens eyes. In some vertebrate eyes, multiple focal lengths allow some correction for chromatic aberration. In land vertebrates the cornea took over the main ray-bending task, leaving accommodation as the main function of the lens. The spiders are the only other group to make use of a single cornea as the optical system in their main eyes, and some of these - the salticids - have evolved a remarkable system based on image scanning. Similar scanning arrangements are found in some crustaceans, sea-snails and insect larvae. © 2012 The College of Optometrists.

  11. Forward light scatter analysis of the eye in a spatially-resolved double-pass optical system.

    PubMed

    Nam, Jayoung; Thibos, Larry N; Bradley, Arthur; Himebaugh, Nikole; Liu, Haixia

    2011-04-11

    An optical analysis is developed to separate forward light scatter of the human eye from the conventional wavefront aberrations in a double pass optical system. To quantify the separate contributions made by these micro- and macro-aberrations, respectively, to the spot image blur in the Shack-Hartmann aberrometer, we develop a metric called radial variance for spot blur. We prove an additivity property for radial variance that allows us to distinguish between spot blur from macro-aberrations and from micro-aberrations. When the method is applied to tear break-up in the human eye, we find that micro-aberrations in the second pass account for about 87% of the double pass image blur in the Shack-Hartmann wavefront aberrometer under our experimental conditions. © 2011 Optical Society of America
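
    One plausible form of such a radial-variance metric, written in our own notation and not necessarily identical to the paper's definition, is sketched below.

```latex
% A plausible form of a radial-variance spot-blur metric consistent with the
% description above (our notation; the paper's exact definition may differ).
% For a Shack-Hartmann spot with intensity I(x,y) and centroid (x_0, y_0):
\[
  \sigma_r^{2} \;=\;
  \frac{\iint I(x,y)\,\bigl[(x-x_0)^2+(y-y_0)^2\bigr]\,\mathrm{d}x\,\mathrm{d}y}
       {\iint I(x,y)\,\mathrm{d}x\,\mathrm{d}y} .
\]
% The additivity property then takes the form
\[
  \sigma_{r,\mathrm{total}}^{2} \;\approx\;
  \sigma_{r,\mathrm{macro}}^{2} + \sigma_{r,\mathrm{micro}}^{2},
\]
% allowing the scatter (micro-aberration) contribution to be separated from the
% macro-aberration contribution to double-pass spot blur.
```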

  12. NASA Sees Cyclone Chapala Approaching Landfall in Yemen

    NASA Image and Video Library

    2017-12-08

    On Nov. 2, 2015 at 09:40 UTC (4:40 p.m. EDT) the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite captured an image of Tropical Cyclone Chapala as the eye of the storm was approaching the Yemen coast. Chapala maintained an eye, although it appeared cloud-covered. Animated multispectral satellite imagery shows the system has maintained a 15-nautical-mile-wide eye and structure. The image was created by the MODIS Rapid Response Team at NASA's Goddard Space Flight Center, Greenbelt, Maryland. Chapala weakened from category four intensity a couple days ago while maintaining a course that steers it toward Yemen. Credit: NASA Goddard MODIS Rapid Response Team

  13. Retina Image Analysis and Ocular Telehealth: The Oak Ridge National Laboratory-Hamilton Eye Institute Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin

    2013-01-01

    Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, both using data from the telemedicine network and other public databases.

  14. Comparison of EyeCam and anterior segment optical coherence tomography in detecting angle closure.

    PubMed

    Baskaran, Mani; Aung, Tin; Friedman, David S; Tun, Tin A; Perera, Shamira A

    2012-12-01

    To compare the diagnostic performance of EyeCam (Clarity Medical Systems, Pleasanton, CA, USA) and anterior segment optical coherence tomography (ASOCT, Visante; Carl Zeiss Meditec, Dublin, CA, USA) in detecting angle closure, using gonioscopy as the reference standard. Ninety-eight phakic patients, recruited from a glaucoma clinic, underwent gonioscopy by a single examiner, and EyeCam and ASOCT imaging by another examiner. Another observer, masked to gonioscopy findings, graded the EyeCam and ASOCT images. For both gonioscopy and EyeCam, a quadrant was classified as closed if the posterior trabecular meshwork was not visible. For ASOCT, angle closure was defined as any contact between the iris and the angle anterior to the scleral spur. An eye was diagnosed as having angle closure if ≥2 quadrants were closed. Agreement and areas under the receiver operating characteristic curves (AUC) were evaluated. The majority of subjects were Chinese (69/98, 70.4%) with a mean age of 60.6 years. Angle closure was diagnosed in 39/98 (39.8%) eyes with gonioscopy, 40/98 (40.8%) with EyeCam and 56/97 (57.7%) with ASOCT. The agreement (kappa statistic) for angle closure diagnosis for gonioscopy versus EyeCam was 0.89; gonioscopy versus ASOCT and EyeCam versus ASOCT were both 0.56. The AUC for detecting eyes with gonioscopic angle closure was 0.978 (95% CI: 0.93-1.0) for EyeCam and 0.847 (95% CI: 0.76-0.92, p < 0.01) for ASOCT. The diagnostic performance of EyeCam was better than that of ASOCT in detecting angle closure when gonioscopic grading was used as the reference standard. The agreement between the two imaging modalities was moderate. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  15. Automated analysis of angle closure from anterior chamber angle images.

    PubMed

    Baskaran, Mani; Cheng, Jun; Perera, Shamira A; Tun, Tin A; Liu, Jiang; Aung, Tin

    2014-10-21

    To evaluate a novel software capable of automatically grading angle closure on EyeCam angle images in comparison with manual grading of images, with gonioscopy as the reference standard. In this hospital-based, prospective study, subjects underwent gonioscopy by a single observer, and EyeCam imaging by a different operator. The anterior chamber angle in a quadrant was classified as closed if the posterior trabecular meshwork could not be seen. An eye was classified as having angle closure if there were two or more quadrants of closure. Automated grading of the angle images was performed using customized software. Agreement between the methods was ascertained by κ statistic and comparison of area under receiver operating characteristic curves (AUC). One hundred forty subjects (140 eyes) were included, most of whom were Chinese (102/140, 72.9%) and women (72/140, 51.5%). Angle closure was detected in 61 eyes (43.6%) with gonioscopy in comparison with 59 eyes (42.1%, P = 0.73) using manual grading, and 67 eyes (47.9%, P = 0.24) with automated grading of EyeCam images. The agreement for angle closure diagnosis between gonioscopy and manual grading of EyeCam images was good (κ = 0.88; 95% confidence interval [CI], 0.81-0.96), as was that between gonioscopy and automated grading (κ = 0.74; 95% CI, 0.63-0.85). The AUC for detecting eyes with gonioscopic angle closure was comparable for manual and automated grading (AUC 0.974 vs. 0.954, P = 0.31) of EyeCam images. Customized software for automated grading of EyeCam angle images was found to have good agreement with gonioscopy. Human observation of the EyeCam images may still be needed to avoid gross misclassification, especially in eyes with extensive angle closure. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
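
    The agreement and discrimination statistics quoted above (Cohen's kappa and AUC) can be reproduced on placeholder data with a few lines of scikit-learn, as sketched below; the arrays are synthetic stand-ins, not the study data.

```python
# Illustrative computation of the agreement (Cohen's kappa) and AUC statistics
# of the kind reported above, using scikit-learn on placeholder per-eye labels
# (1 = angle closure, 0 = open angle).  The arrays are synthetic.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(0)
gonioscopy = rng.integers(0, 2, size=140)                 # reference standard
manual = np.where(rng.random(140) < 0.9, gonioscopy,      # grader that mostly agrees
                  1 - gonioscopy)
automated_score = gonioscopy + rng.normal(0, 0.4, 140)    # continuous grader output

kappa = cohen_kappa_score(gonioscopy, manual)             # chance-corrected agreement
auc = roc_auc_score(gonioscopy, automated_score)          # discrimination vs. gonioscopy
print(f"kappa = {kappa:.2f}, AUC = {auc:.2f}")
```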

  16. Fourier transform digital holographic adaptive optics imaging system

    PubMed Central

    Liu, Changgeng; Yu, Xiao; Kim, Myung K.

    2013-01-01

    A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is placed at the exact Fourier transform plane of the pupil of the eye lens. The spherical curvature introduced by the optics, except for the eye lens itself, is eliminated. The CCD is also at the image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. Also, the light signal is stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture does not limit the resolution, and the possibility of using low-coherence or incoherent illumination is opened up. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541

  17. [Eye lens radiation exposure during ureteroscopy with and without a face protection shield: Investigations on a phantom model].

    PubMed

    Zöller, G; Figel, M; Denk, J; Schulz, K; Sabo, A

    2016-03-01

    Eye lens radiation exposure during radiologically guided endoscopic procedures may result in radiation-induced cataracts; therefore, we investigated the ocular radiation exposure during ureteroscopy in a phantom model. Using an Alderson phantom and eye lens dosimeters, we measured the ocular radiation exposure as a function of the number of X-ray images and the duration of fluoroscopic imaging. The measurements were performed with and without a face protection shield. We demonstrated that significant ocular radiation exposure can occur, depending on the number of X-ray images and the duration of fluoroscopy. Eye lens doses up to 0.025 mSv were recorded even with modern digital X-ray systems. Using a face protection shield, this ocular radiation exposure can be reduced to a minimum. The International Commission on Radiological Protection (ICRP) recommendation of a mean eye lens dose of 20 mSv/year may be exceeded during repeated ureteroscopy by a high-volume surgeon. Using a face protection shield, the eye lens dose during ureteroscopy could be reduced to a minimum in a phantom model. Further investigations will show whether these results can be transferred to real-life ureteroscopic procedures.

  18. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, for which the stereo algorithms must conform to the spherical model. Epipolar geometry is the theory that describes the relationship between the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. However, the epipolar locus in an uncorrected fish-eye image is not a line but an arc that intersects at the poles; it is a polar curve. In this paper, the theory of nonlinear epipolar geometry is explored and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. Maximally Stable Extremal Region (MSER) detection uses grayscale as the independent variable and takes the local extrema of the area variation as the detection results. It has been demonstrated in the literature that MSER depends only on the gray-level variations of the image and is not related to local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper. The intersection of the rectified epipolar curves and the corresponding MSER regions is determined as the feature set of the spherical stereo vision system. Experiments show that this study achieved the expected results.
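
    For readers who want to experiment with the MSER component described above, a generic OpenCV sketch is given below; it is illustrative only (the image path is a placeholder) and does not reproduce the authors' rectification or feature-set construction.

```python
# Illustrative MSER region extraction with OpenCV, of the kind that could feed
# the feature-set construction described above; a generic sketch, not the
# authors' implementation, and "fisheye_left.png" is a placeholder path.
import cv2

gray = cv2.imread("fisheye_left.png", cv2.IMREAD_GRAYSCALE)

mser = cv2.MSER_create()                      # default MSER parameters
regions, bboxes = mser.detectRegions(gray)

# Each region is an array of (x, y) pixel coordinates of a maximally stable
# extremal region; bboxes gives the corresponding bounding boxes.
print(f"detected {len(regions)} MSER regions")
```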

  19. Assessment of Perfused Foveal Microvascular Density and Identification of Nonperfused Capillaries in Healthy and Vasculopathic Eyes

    PubMed Central

    Pinhas, Alexander; Razeen, Moataz; Dubow, Michael; Gan, Alexander; Chui, Toco Y.; Shah, Nishit; Mehta, Mitul; Gentile, Ronald C.; Weitz, Rishard; Walsh, Joseph B.; Sulai, Yusufu N.; Carroll, Joseph; Dubra, Alfredo; Rosen, Richard B.

    2014-01-01

    Purpose. To analyze the foveal microvasculature of young healthy eyes and older vasculopathic eyes, imaged using in vivo adaptive optics scanning light ophthalmoscope fluorescein angiography (AOSLO FA). Methods. AOSLO FA imaging of the superficial retinal microvasculature within an 800-μm radius from the foveal center was performed using simultaneous confocal infrared (IR) reflectance (790 nm) and fluorescence (488 nm) channels. Corresponding IR structural and FA perfusion maps were compared with each other to identify nonperfused capillaries adjacent to the foveal avascular zone. Microvascular densities were calculated from skeletonized FA perfusion maps. Results. Sixteen healthy adults (26 eyes; mean age 25 years, range, 21–29) and six patients with a retinal vasculopathy (six eyes; mean age 55 years, range, 44–70) were imaged. At least one nonperfused capillary was observed in five of the 16 healthy nonfellow eyes and in four of the six vasculopathic eyes. Compared with healthy eyes, capillary nonperfusion in the vasculopathic eyes was more extensive. Microvascular density of the 16 healthy nonfellow eyes was 42.0 ± 4.2 mm−1 (range, 33–50 mm−1). All six vasculopathic eyes had decreased microvascular densities. Conclusions. AOSLO FA provides an in vivo method for estimating foveal microvascular density and reveals occult nonperfused retinal capillaries. Nonperfused capillaries in healthy young adults may represent a normal variation and/or an early sign of pathology. Although limited, the normative data presented here is a step toward developing clinically useful microvascular parameters for ocular and/or systemic diseases. PMID:25414179
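
    A minimal sketch of the skeleton-based density computation described in the Methods is given below; the binary perfusion map and pixel scale are synthetic placeholders, and pixel counting is used only as a rough approximation of centerline length.

```python
# Illustrative computation of microvascular density (vessel length per unit
# area, mm^-1) from a skeletonized perfusion map, following the general recipe
# in the abstract; the binary map and pixel scale are placeholders.
import numpy as np
from skimage.morphology import skeletonize

perfusion_map = np.zeros((800, 800), dtype=bool)   # stand-in binary vessel map
perfusion_map[400, 100:700] = True                 # one synthetic vessel

UM_PER_PIXEL = 1.0                                 # assumed image scale

skeleton = skeletonize(perfusion_map)
# pixel count approximates centerline length (diagonal steps ignored here)
vessel_length_mm = skeleton.sum() * UM_PER_PIXEL / 1000.0
area_mm2 = perfusion_map.size * (UM_PER_PIXEL / 1000.0) ** 2
density_per_mm = vessel_length_mm / area_mm2       # mm of vessel per mm^2 = mm^-1
print(f"microvascular density ~ {density_per_mm:.1f} mm^-1")
```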

  20. Surprising characteristics of visual systems of invertebrates.

    PubMed

    González-Martín-Moro, J; Hernández-Verdejo, J L; Jiménez-Gahete, A E

    2017-01-01

    To communicate relevant and striking aspects of the visual systems of some invertebrates close to us. Review of the related literature. The capacity of snails to regenerate a complete eye, the benefit of the oval shape of the compound eye of many flying insects as a way of stabilising the image during flight, the potential advantages related to the extreme refractive error that characterises the ocelli of many insects, and the ability to detect polarised light as a navigation system are some of the surprising capabilities of the small invertebrate eyes described in this work. Invertebrate eyes have capabilities and sensory modalities that are not present in the human eye. The study of the eyes of these animals can help us to improve our understanding of our own visual system, and inspire the development of optical devices. Copyright © 2016 Sociedad Española de Oftalmología. Publicado por Elsevier España, S.L.U. All rights reserved.

  1. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out from the ROI that contains the cornea and pupil (but not from the rest of the image). One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate.
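
    A simplified sketch of the row-slice centroid idea described above is given below; the threshold, ROI, and synthetic test image are assumptions for illustration, and the real algorithm additionally handles the corneal reflection and slice validation.

```python
# Simplified sketch of the row-slice idea described above: on each horizontal
# slice, find the run of dark (pupil) pixels, record its endpoints, and combine
# the slice midpoints into a centroid estimate.  Thresholds and the synthetic
# test image are illustrative assumptions, not the flight algorithm.
import numpy as np

def pupil_centroid(roi, dark_threshold=50, min_run=5):
    """Estimate pupil centroid from horizontal slices of a grayscale ROI."""
    xs, ys = [], []
    for row_idx, row in enumerate(roi):
        dark = np.flatnonzero(row < dark_threshold)   # candidate pupil pixels
        if dark.size >= min_run:
            left, right = dark[0], dark[-1]           # slice end coordinates
            xs.append((left + right) / 2.0)           # midpoint of this slice
            ys.append(row_idx)
    if not xs:
        return None
    # combine information from all valid slices
    return float(np.mean(xs)), float(np.mean(ys))

# Synthetic ROI: bright background with a dark disc standing in for the pupil
roi = np.full((120, 160), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:120, 0:160]
roi[(xx - 80) ** 2 + (yy - 60) ** 2 < 30 ** 2] = 20
print(pupil_centroid(roi))   # approximately (80.0, 60.0)
```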

  2. A Novel, Real-Time, In Vivo Mouse Retinal Imaging System

    PubMed Central

    Butler, Mark C.; Sullivan, Jack M.

    2015-01-01

    Purpose: To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and assess system capabilities by evaluating various animal models. Methods: Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination due to off-axis illumination and poor optical train optimization. Therefore, we designed a paraxial illumination system for a Greenough-type stereo dissecting microscope incorporating an optimized optical launch and an efficiently coupled fiber optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by eye positioning in order to observe the entire retina. Results: The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time, intravenous fluorescein angiography for the live mouse has been achieved. Conclusions: A novel device is established for real-time viewing and image capture of the small animal retina during subretinal injections for preclinical gene therapy studies. PMID:26551329

  3. NanoRacks Kestrel Eye (KE2M) Satellite Deployment

    NASA Image and Video Library

    2017-10-24

    iss053e130305 (Oct. 24, 2017) --- The Kestrel Eye IIM (KE2M) CubeSat is deployed from the tip of the Dextre attached to the Mobile Servicing System. The KE2M is carrying an optical imaging system payload that is being used to validate the concept of using microsatellites in low-Earth orbit to support critical operations.

  4. NanoRacks Kestrel Eye (KE2M) Satellite Deployment

    NASA Image and Video Library

    2017-10-24

    iss053e130267 (Oct. 24, 2017) --- The Kestrel Eye IIM (KE2M) CubeSat is deployed from the tip of the Dextre attached to the Mobile Servicing System. The KE2M is carrying an optical imaging system payload that is being used to validate the concept of using microsatellites in low-Earth orbit to support critical operations.

  5. Combining Image and Non-Image Data for Automatic Detection of Retina Disease in a Telemedicine Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykac, Deniz; Chaum, Edward; Fox, Karen

    A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening for diabetic retinopathy (DR) and other eye diseases. In the process of a routine eye-screening examination, other non-image data is often available which may be useful in automated diagnosis of disease. In this work, we report on the results of combining this non-image data with image data, using the protocol and processing steps of a prototype system for automated disease diagnosis of retina examinations from a telemedicine network. The system includes quality assessments, automated physiology detection, and automated lesion detection to create an archive of known cases. Non-image data such as diabetes onset date and hemoglobin A1c (HgA1c) for each patient examination are included as well, and the system is used to create a content-based image retrieval engine capable of automated diagnosis of disease into 'normal' and 'abnormal' categories. The system achieves a sensitivity and specificity of 91.2% and 71.6% using hold-one-out validation testing.
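
    The hold-one-out evaluation and the sensitivity/specificity figures quoted above can be illustrated with the following scikit-learn sketch; the classifier and synthetic features are placeholders for the actual content-based image retrieval engine.

```python
# Illustrative hold-one-out (leave-one-out) evaluation producing the kind of
# sensitivity/specificity figures quoted above.  The classifier and synthetic
# features are placeholders for the actual content-based retrieval engine.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.random((60, 6))                            # stand-in image + non-image features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)    # 1 = abnormal, 0 = normal

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

tp = np.sum((preds == 1) & (y == 1)); fn = np.sum((preds == 0) & (y == 1))
tn = np.sum((preds == 0) & (y == 0)); fp = np.sum((preds == 1) & (y == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```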

  6. Biotechnology

    NASA Image and Video Library

    2003-01-22

    ProVision Technologies, a NASA research partnership center at Stennis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. HSI may be useful to ophthalmologists to study and diagnose eye health, both on Earth and in space, by examining the back of the eye to determine oxygen and blood flow quickly and without any invasion. ProVision's hyperspectral imaging system can scan the human eye and produce a graph showing optical density or light absorption, which can then be compared to a graph from a normal eye. Scans of the macula, optic disk or optic nerve head, and blood vessels can be used to detect anomalies and identify diseases in this delicate and important organ. ProVision has already developed a relationship with the University of Alabama at Birmingham, but is still on the lookout for a commercial partner in this application.

  7. Biotechnology

    NASA Image and Video Library

    2003-01-22

    ProVision Technologies, a NASA commercial space center at Stennis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. HSI may be useful to ophthalmologists to study and diagnose eye health, both on Earth and in space, by examining the back of the eye to determine oxygen and blood flow quickly and without any invasion. ProVision's hyperspectral imaging system can scan the human eye and produce a graph showing optical density or light absorption, which can then be compared to a graph from a normal eye. Scans of the macula, optic disk or optic nerve head, and blood vessels can be used to detect anomalies and identify diseases in this delicate and important organ. ProVision has already developed a relationship with the University of Alabama at Birmingham, but is still on the lookout for a commercial partner in this application.

  8. Unprocessed real-time imaging of vitreoretinal surgical maneuvers using a microscope-integrated spectral-domain optical coherence tomography system.

    PubMed

    Hahn, Paul; Migacz, Justin; O'Connell, Rachelle; Izatt, Joseph A; Toth, Cynthia A

    2013-01-01

    We have recently developed a microscope-integrated spectral-domain optical coherence tomography (MIOCT) device for intrasurgical cross-sectional imaging of surgical maneuvers. In this report, we explore the capability of MIOCT to acquire real-time video imaging of vitreoretinal surgical maneuvers without post-processing modifications. Standard 3-port vitrectomy was performed in humans during scheduled surgery as well as in cadaveric porcine eyes. MIOCT imaging of human subjects was performed in healthy normal volunteers and intraoperatively at a normal pause immediately following surgical manipulations, under an Institutional Review Board-approved protocol, with informed consent from all subjects. Video MIOCT imaging of live surgical manipulations was performed in cadaveric porcine eyes by carefully aligning B-scans with instrument orientation and movement. Inverted imaging was performed by lengthening the reference arm to a position beyond the choroid. Unprocessed MIOCT imaging was successfully obtained in healthy human volunteers and in human patients undergoing surgery, with visualization of post-surgical changes in unprocessed single B-scans. Real-time, unprocessed MIOCT video imaging was successfully obtained in cadaveric porcine eyes during brushing of the retina with the Tano scraper, peeling of superficial retinal tissue with intraocular forceps, and separation of the posterior hyaloid face. Real-time inverted imaging enabled imaging without complex conjugate artifacts. MIOCT is capable of unprocessed imaging of the macula in human patients undergoing surgery and of unprocessed, real-time video imaging of surgical maneuvers in model eyes. These capabilities represent an important step towards the development of MIOCT for efficient, real-time imaging of manipulations during human surgery.

  9. Relevance of wide-field autofluorescence imaging in Birdshot retinochoroidopathy: descriptive analysis of 76 eyes.

    PubMed

    Piffer, Anne-Laure Le; Boissonnot, Michèle; Gobert, Frédéric; Zenger, Anita; Wolf, Sebastian; Wolf, Ute; Korobelnik, Jean-François; Rougier, Marie-Bénédicte

    2014-09-01

    To study and classify retinal lesions in patients with birdshot disease using wide-field autofluorescence imaging and to correlate them with patients' visual status. A multicentre study was carried out on 76 eyes of 39 patients with birdshot disease, analysing colour and autofluorescence images obtained with the wide-field Optomap® imaging system. This was combined with a complete clinical exam and analysis of the macula with OCT. In over 80% of the eyes, a chorioretinal lesion was observed on autofluorescence, with a direct correlation between the extent of the lesion and visual status. The presence of macular hypo-autofluorescence was correlated with decreased visual acuity, due to the presence of macular oedema, active clinical inflammation or an epiretinal membrane. The hypo-autofluorescence observed correlated with the duration of the disease and the degree of inflammation in the affected eye, indicating a secondary lesion of the pigment epithelium in relation to the choroid. The pigment epithelium was affected in a diffuse manner, as in almost 50% of the eyes the wider peripheral retina was affected. Wide-field autofluorescence imaging appears to be a useful examination when monitoring patients, to look for areas of macular hypo-autofluorescence responsible for an irreversible loss of vision. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  10. Tracking scanning laser ophthalmoscope (TSLO)

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Ferguson, R. Daniel; Magill, John C.; White, Michael A.; Elsner, Ann E.; Webb, Robert H.

    2003-07-01

    The effectiveness of image stabilization with a retinal tracker in a multi-function, compact scanning laser ophthalmoscope (TSLO) was demonstrated in initial human subject tests. The retinal tracking system uses a confocal reflectometer with a closed loop optical servo system to lock onto features in the fundus. The system is modular to allow configuration for many research and clinical applications, including hyperspectral imaging, multifocal electroretinography (MFERG), perimetry, quantification of macular and photo-pigmentation, imaging of neovascularization and other subretinal structures (drusen, hyper-, and hypo-pigmentation), and endogenous fluorescence imaging. Optical hardware features include dual wavelength imaging and detection, integrated monochromator, higher-order motion control, and a stimulus source. The system software consists of a real-time feedback control algorithm and a user interface. Software enhancements include automatic bias correction, asymmetric feature tracking, image averaging, automatic track re-lock, and acquisition and logging of uncompressed images and video files. Normal adult subjects were tested without mydriasis to optimize the tracking instrumentation and to characterize imaging performance. The retinal tracking system achieves a bandwidth of greater than 1 kHz, which permits tracking at rates that greatly exceed the maximum rate of motion of the human eye. The TSLO stabilized images in all test subjects during ordinary saccades up to 500 deg/sec with an inter-frame accuracy better than 0.05 deg. Feature lock was maintained for minutes despite subject eye blinking. Successful frame averaging allowed image acquisition with decreased noise in low-light applications. The retinal tracking system significantly enhances the imaging capabilities of the scanning laser ophthalmoscope.

  11. Wide-field fundus imaging with trans-palpebral illumination.

    PubMed

    Toslak, Devrim; Thapa, Damber; Chen, Yanjun; Erol, Muhammet Kazim; Paul Chan, R V; Yao, Xincheng

    2017-01-28

    In conventional fundus imaging devices, transpupillary illumination is used for illuminating the inside of the eye. In this method, the illumination light is directed into the posterior segment of the eye through the cornea and passes through the pupillary area. As a result of sharing the pupillary area between the illumination beam and the observation path, pupil dilation is typically necessary for wide-angle fundus examination, and the field of view is inherently limited. An alternative approach is to deliver light through the sclera. It is possible to image a wider retinal area with trans-scleral illumination. However, the requirement of physical contact between the illumination probe and the sclera is a drawback of this method. We report here trans-palpebral illumination as a new method to deliver the light through the upper eyelid (palpebra). For this study, we used a 1.5 mm diameter fiber with a warm white LED light source. To illuminate the inside of the eye, the fiber illuminator was placed at the location corresponding to the pars plana region. A custom-designed optical system was attached to a digital camera for retinal imaging. The optical system contained a 90 diopter ophthalmic lens and a 25 diopter relay lens. The ophthalmic lens collected light coming from the posterior of the eye and formed an aerial image between the ophthalmic and relay lenses. The aerial image was captured by the camera through the relay lens. An adequate illumination level was obtained to capture wide-angle fundus images within ocular safety limits, defined by the ISO 15004-2:2007 standard. This novel trans-palpebral illumination approach enables wide-angle fundus photography without eyeball contact or pupil dilation.

  12. Microscope-Integrated OCT Feasibility and Utility With the EnFocus System in the DISCOVER Study.

    PubMed

    Runkle, Anne; Srivastava, Sunil K; Ehlers, Justis P

    2017-03-01

    To evaluate the feasibility and utility of a novel microscope-integrated intraoperative optical coherence tomography (OCT) system. The DISCOVER study is an investigational device study evaluating microscope-integrated intraoperative OCT systems for ophthalmic surgery. This report focuses on subjects imaged with the EnFocus prototype system (Leica Microsystems/Bioptigen, Morrisville, NC). OCT was performed at surgeon-directed milestones. Surgeons completed a questionnaire after each case to evaluate the impact of OCT on intraoperative management. Fifty eyes underwent imaging with the EnFocus system. Successful imaging was obtained in 46 of 50 eyes (92%). In eight cases (16%), surgical management was changed based on intraoperative OCT findings. In membrane peeling procedures, intraoperative OCT findings were discordant from the surgeon's initial impression in seven of 20 cases (35%). This study demonstrates the feasibility of microscope-integrated intraoperative OCT using the Bioptigen EnFocus system. Intraoperative OCT may provide surgeons with additional information that may influence surgical decision-making. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:216-222.]. Copyright 2017, SLACK Incorporated.

  13. The Nine-Step Minnesota Grading System for Eyebank Eyes With Age Related Macular Degeneration: A Systematic Approach to Study Disease Stages.

    PubMed

    Olsen, Timothy W; Liao, Albert; Robinson, Hershonna S; Palejwala, Neal V; Sprehe, Nicholas

    2017-10-01

    To refine the Minnesota Grading System (MGS) using definitions from the Age-Related Eye Disease Studies (AREDS) into a nine-step grading scale (MGS-9). A nine-step grading scale descriptive analysis using three key phenotypic features (total drusen area, increased pigmentation, and decreased pigmentation) of human eyebank eyes that were graded according to definitions from the AREDS criteria in order to harmonize studies of disease progression for research involving human tissue. From 2005 through February 2017, we analyzed 1159 human eyes procured from two eyebanks. Each macula was imaged using high-resolution, stereoscopic color fundus photography with both direct illumination and transillumination. Fundus images were digitally overlaid with a grading template and triangulated for foveal centration. We documented and stratified risk for each globe by applying the AREDS nine-step grading scale to the key clinical features from the MGS-9. We found a good distribution within the MGS categories (1-9), with few level-eight globes. Eyes were processed within 12.1 ± 6.3 hours of the time of death through imaging, dissection, and freezing or fixation. Applying the MGS-9 to 331 pairs (662 eyes graded simultaneously), 84% were within one grading step and 93% within two steps of the fellow eye. We also documented reticular pseudodrusen, basal laminar drusen, and pattern dystrophy. The MGS nine-step grading scale enables researchers using human tissue to refine the risk assessment of donor tissue. This analysis will harmonize results among researchers when grading human tissue using MGS criteria. Most importantly, the MGS-9 links directly to the known risk for progression from the AREDS.
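
    The fellow-eye agreement figures reported above (84% within one grading step, 93% within two) amount to a simple within-k-step comparison of paired grades. A minimal sketch of that computation in Python, using hypothetical MGS-9 grades rather than the study data:

```python
import numpy as np

def fellow_eye_agreement(right_grades, left_grades, max_step):
    """Fraction of fellow-eye pairs whose MGS-9 grades differ by at most max_step."""
    right = np.asarray(right_grades)
    left = np.asarray(left_grades)
    return float(np.mean(np.abs(right - left) <= max_step))

# Hypothetical MGS-9 grades (1-9) for a handful of donor pairs, for illustration only.
right = [3, 5, 7, 2, 9, 4]
left = [3, 6, 7, 4, 8, 4]
print(fellow_eye_agreement(right, left, max_step=1))  # within one grading step
print(fellow_eye_agreement(right, left, max_step=2))  # within two grading steps
```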

  14. Eye movement related brain responses to emotional scenes during free viewing

    PubMed Central

    Simola, Jaana; Torniainen, Jari; Moisala, Mona; Kivikangas, Markus; Krause, Christina M.

    2013-01-01

    Emotional stimuli are preferentially processed over neutral stimuli. Previous studies, however, disagree on whether emotional stimuli capture attention preattentively or whether the processing advantage is dependent on allocation of attention. The present study investigated attention and emotion processes by measuring brain responses related to eye movement events while 11 participants viewed images selected from the International Affective Picture System (IAPS). Brain responses to emotional stimuli were compared between serial and parallel presentation. An “emotional” set included one image with high positive or negative valence among neutral images. A “neutral” set comprised four neutral images. The participants were asked to indicate which picture—if any—was emotional and to rate that picture on valence and arousal. In the serial condition, the event-related potentials (ERPs) were time-locked to the stimulus onset. In the parallel condition, the ERPs were time-locked to the first eye entry on an image. The eye movement results showed facilitated processing of emotional, especially unpleasant information. The EEG results in both presentation conditions showed that the LPP (“late positive potential”) amplitudes at 400–500 ms were enlarged for the unpleasant and pleasant pictures as compared to neutral pictures. Moreover, the unpleasant scenes elicited stronger responses than pleasant scenes. The ERP results did not support parafoveal emotional processing, although the eye movement results suggested faster attention capture by emotional stimuli. Our findings, thus, suggested that emotional processing depends on overt attentional resources engaged in the processing of emotional content. The results also indicate that brain responses to emotional images can be analyzed time-locked to eye movement events, although the response amplitudes were larger during serial presentation. PMID:23970856

  15. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, working from video sequence images, dedicated to the identification of persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and a Multi-Layer Perceptron classifier was then used. Compared to the whole face, the simulation results favour the facial parts in terms of memory capacity and recognition rate (99.41% for the eye region, 98.16% for the nose region and 97.25% for the whole face).
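
    The ACPDL2D pipeline combines two-dimensional PCA and two-dimensional LDA with a neural-network classifier; the sketch below illustrates only the 2DPCA stage on image matrices, assuming grayscale eye-region crops stored as 2-D NumPy arrays (the data, crop size, and number of components are hypothetical, and the 2DLDA and MLP stages are omitted):

```python
import numpy as np

def two_d_pca(images, n_components):
    """2DPCA: project each image matrix onto the leading eigenvectors of the
    image covariance matrix G = mean((A - mean)^T (A - mean))."""
    stack = np.stack([img.astype(float) for img in images])        # (n, h, w)
    mean_img = stack.mean(axis=0)
    centered = stack - mean_img
    G = np.einsum('nij,nik->jk', centered, centered) / len(stack)  # (w, w)
    eigvals, eigvecs = np.linalg.eigh(G)
    W = eigvecs[:, ::-1][:, :n_components]          # leading eigenvectors
    features = stack @ W                            # each image -> (h, n_components)
    return features, W, mean_img

# Hypothetical 32x32 eye-region crops, for illustration only.
rng = np.random.default_rng(0)
eye_crops = [rng.random((32, 32)) for _ in range(20)]
feats, W, mu = two_d_pca(eye_crops, n_components=5)
print(feats.shape)  # (20, 32, 5); flatten each for a downstream classifier
```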

  16. Analysis of Retinal Thinning Using Spectral-domain Optical Coherence Tomography Imaging of Sickle Cell Retinopathy Eyes Compared to Age- and Race-Matched Control Eyes.

    PubMed

    Lim, Jennifer I; Cao, Dingcai

    2018-03-17

    To determine whether the retina is thinner in sickle cell patients than in race- and age-matched controls, and, if it is thinner, whether there is any association with systemic diseases. Sickle cell and control (age- and race-matched) patients were prospectively enrolled from a university retina clinic into this observational study. Participants underwent visual acuity testing, slit-lamp biomicroscopy, dilated ophthalmoscopy, and spectral-domain optical coherence tomography imaging. Sickle cell retinal lesions, degree of vascular tortuosity, caliber of arteriovenous anastomosis, and stage of retinopathy were noted. Early Treatment Diabetic Retinopathy Study (ETDRS) subfield measurements were compared between sickle cell and control subjects and also among sickle cell hemoglobin subtypes. Associations between ETDRS subfield measurements and hemoglobin subtype, retinopathy stage, and systemic diseases were assessed. A total of 513 sickle cell eyes (260 patients) and 75 control eyes (39 patients) had median visual acuities of 20/20. ETDRS central (P = .002), inner (nasal P = .009, superior P = .021, temporal P < .001, inferior P = .017), and temporal outer (P = .012) subfield measurements were thinner in sickle cell eyes compared to control eyes. Hemoglobin SS eyes had significantly thinner inner ETDRS subfield measurements compared to SC and SThal eyes. Retinal thinning in all subfields was associated with age (P = .017) for sickle cell and control eyes. No association was found between retinal thinning and hydroxyurea use or arteriovenous anastomosis caliber. The macula is thinner in sickle cell eyes compared to control eyes; retinal thickness decreases with increasing age and sickle cell retinopathy stage and is most severe in hemoglobin SS subtypes. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Clinical-Radiologic Correlation of Extraocular Eye Movement Disorders: Seeing beneath the Surface.

    PubMed

    Thatcher, Joshua; Chang, Yu-Ming; Chapman, Margaret N; Hovis, Keegan; Fujita, Akifumi; Sobel, Rachel; Sakai, Osamu

    2016-01-01

    Extraocular eye movement disorders are relatively common and may be a significant source of discomfort and morbidity for patients. The presence of restricted eye movement can be detected clinically with quick, easily performed, noninvasive maneuvers that assess medial, lateral, upward, and downward gaze. However, detecting the presence of ocular dysmotility may not be sufficient to pinpoint the exact cause of eye restriction. Imaging plays an important role in excluding, in some cases, and detecting, in others, a specific cause responsible for the clinical presentation. However, the radiologist should be aware that the imaging findings in many of these conditions when taken in isolation from the clinical history and symptoms are often nonspecific. Normal eye movements are directly controlled by the ocular motor cranial nerves (CN III, IV, and VI) in coordination with indirect input or sensory stimuli derived from other cranial nerves. Specific causes of ocular dysmotility can be localized to the cranial nerve nuclei in the brainstem, the cranial nerve pathways in the peripheral nervous system, and the extraocular muscles in the orbit, with disease at any of these sites manifesting clinically as an eye movement disorder. A thorough understanding of central nervous system anatomy, cranial nerve pathways, and orbital anatomy, as well as familiarity with patterns of eye movement restriction, are necessary for accurate detection of radiologic abnormalities that support a diagnostic source of the suspected extraocular movement disorder. © RSNA, 2016.

  18. Demonstration of angle widening using EyeCam after laser peripheral iridotomy in eyes with angle closure.

    PubMed

    Perera, Shamira A; Quek, Desmond T; Baskaran, Mani; Tun, Tin A; Kumar, Rajesh S; Friedman, David S; Aung, Tin

    2010-06-01

    To evaluate EyeCam in detecting changes in angle configuration after laser peripheral iridotomy (LPI) in comparison to gonioscopy, the reference standard. Prospective comparative study. Twenty-four subjects (24 eyes) with primary angle-closure glaucoma (PACG) were recruited. Gonioscopy and EyeCam (Clarity Medical Systems) imaging of all 4 angle quadrants were performed before and 2 weeks after LPI. Images were graded, in random order, according to the angle structures visible, by an observer masked to clinical data and LPI status. Angle closure in a quadrant was defined as the inability to visualize the posterior trabecular meshwork. We determined the number of quadrants with closed angles and the mean number of clock hours of angle closure before and after LPI in comparison to gonioscopy. Using EyeCam, all 24 eyes showed at least 1 quadrant of angle widening after LPI. The mean number of clock hours of angle closure decreased significantly, from 8.15 +/- 3.47 clock hours before LPI to 1.75 +/- 2.27 clock hours after LPI (P < .0001, Wilcoxon signed rank test). Overall, gonioscopy showed 1.0 +/- 1.41 (95% CI, 0.43-1.57) quadrants opening from closed to open after LPI, compared to 2.0 +/- 1.28 (95% CI, 1.49-2.51, P = .009) quadrants with EyeCam. Intra-observer reproducibility of grading the extent of angle closure in clock hours from EyeCam images was moderate to good (intraclass correlation coefficient 0.831). EyeCam may be used to document changes in angle configuration after LPI in eyes with PACG. Copyright 2010 Elsevier Inc. All rights reserved.
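
    The pre/post-LPI comparison of clock hours of angle closure is a paired, non-parametric test. A minimal sketch of the Wilcoxon signed-rank computation with SciPy, using made-up paired values rather than the study data:

```python
from scipy.stats import wilcoxon

# Hypothetical clock hours of angle closure per eye before and after LPI
# (illustrative values only, not the study data).
before = [12, 9, 6, 10, 8, 11, 7, 5, 12, 9]
after = [2, 1, 0, 3, 2, 4, 1, 0, 5, 2]

stat, p = wilcoxon(before, after)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")
```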

  19. Intraocular lens based on double-liquid variable-focus lens.

    PubMed

    Peng, Runling; Li, Yifan; Hu, Shuilan; Wei, Maowei; Chen, Jiabi

    2014-01-10

    In this work, the crystalline lens in the Gullstrand-Le Grand human eye model is replaced by a double-liquid variable-focus lens, the structural data of which are based on theoretical analysis and experimental results. When the pseudophakic eye is built in Zemax, aspherical surfaces are introduced to the double-liquid variable-focus lens to reduce the axial spherical aberration present in the system. After optimization, the zoom range of the pseudophakic eye greatly exceeds that of normal human eyes, and the spot size on the image plane essentially reaches the normal human eye's limit of resolution.

  20. Accuracy of Diagnostic Imaging Modalities for Classifying Pediatric Eyes as Papilledema Versus Pseudopapilledema.

    PubMed

    Chang, Melinda Y; Velez, Federico G; Demer, Joseph L; Bonelli, Laura; Quiros, Peter A; Arnold, Anthony C; Sadun, Alfredo A; Pineles, Stacy L

    2017-12-01

    To identify the most accurate diagnostic imaging modality for classifying pediatric eyes as papilledema (PE) or pseudopapilledema (PPE). Prospective observational study. Nineteen children between the ages of 5 and 18 years were recruited. Five children (10 eyes) with PE, 11 children (19 eyes) with PPE owing to suspected buried optic disc drusen (ODD), and 3 children (6 eyes) with PPE owing to superficial ODD were included. All subjects underwent imaging with B-scan ultrasonography, fundus photography, autofluorescence, fluorescein angiography (FA), optical coherence tomography (OCT) of the retinal nerve fiber layer (RNFL), and volumetric OCT scans through the optic nerve head with standard spectral-domain (SD OCT) and enhanced depth imaging (EDI OCT) settings. Images were read by 3 masked neuro-ophthalmologists, and the final image interpretation was based on 2 of 3 reads. Image interpretations were compared with clinical diagnosis to calculate accuracy and misinterpretation rates of each imaging modality. Accuracy of each imaging technique for classifying eyes as PE or PPE, and misinterpretation rates of each imaging modality for PE and PPE. Fluorescein angiography had the highest accuracy (97%, 34 of 35 eyes, 95% confidence interval 92%-100%) for classifying an eye as PE or PPE. FA of eyes with PE showed leakage of the optic nerve, whereas eyes with suspected buried ODD demonstrated no hyperfluorescence, and eyes with superficial ODD showed nodular staining. Other modalities had substantial likelihood (30%-70%) of misinterpretation of PE as PPE. The best imaging technique for correctly classifying pediatric eyes as PPE or PE is FA. Other imaging modalities, if used in isolation, are more likely to lead to misinterpretation of PE as PPE, which could potentially result in failure to identify a life-threatening disorder causing elevated intracranial pressure and papilledema. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  1. Light and portable novel device for diabetic retinopathy screening.

    PubMed

    Ting, Daniel S W; Tay-Kearney, Mei Ling; Kanagasingam, Yogesan

    2012-01-01

    To validate the use of an economical portable multipurpose ophthalmic imaging device, EyeScan (Ophthalmic Imaging System, Sacramento, CA, USA), for diabetic retinopathy screening. Evaluation of a diagnostic device. One hundred thirty-six patients (272 eyes) were recruited from the diabetic retinopathy screening clinic of Royal Perth Hospital, Western Australia, Australia. All patients underwent three-field (optic disc, macular and temporal view) mydriatic retinal digital still photography captured by EyeScan and FF450 plus (Carl Zeiss Meditec, North America) and were subsequently examined by a senior consultant ophthalmologist using slit-lamp biomicroscopy (reference standard). All retinal images were interpreted by a consultant ophthalmologist and a medical officer. The main outcome measures were the sensitivity, specificity and kappa statistics of EyeScan and FF450 plus with reference to the slit-lamp examination findings of a senior consultant ophthalmologist. For detection of any grade of diabetic retinopathy, EyeScan had a sensitivity and specificity of 93 and 98%, respectively (ophthalmologist), and 92 and 95%, respectively (medical officer). In contrast, FF450 plus images had a sensitivity and specificity of 95 and 99%, respectively (ophthalmologist), and 92 and 96%, respectively (medical officer). The overall kappa statistics for diabetic retinopathy grading for EyeScan and FF450 plus were 0.93 and 0.95 for the ophthalmologist and 0.88 and 0.90 for the medical officer, respectively. Given that the EyeScan requires minimal training to use and has excellent diagnostic accuracy in screening for diabetic retinopathy, it could potentially be utilized by primary eye care providers to widely screen for diabetic retinopathy in the community. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
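
    Sensitivity, specificity and Cohen's kappa against the slit-lamp reference can be computed per eye from binary and categorical gradings. A minimal sketch, assuming hypothetical per-eye calls (not the study data):

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Sensitivity and specificity of binary calls against a binary reference."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical gradings."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in np.union1d(a, b))
    return (po - pe) / (1 - pe)

# Hypothetical per-eye calls (1 = any diabetic retinopathy), illustration only.
reference = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
eyescan = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
sens, spec = sensitivity_specificity(eyescan, reference)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, "
      f"kappa={cohens_kappa(eyescan, reference):.2f}")
```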

  2. New trends in intraocular lens imaging

    NASA Astrophysics Data System (ADS)

    Millán, María S.; Alba-Bueno, Francisco; Vega, Fidel

    2011-08-01

    As a result of modern technological advances, cataract surgery can be seen not only as a rehabilitative operation, but as a customized procedure to compensate for important sources of image degradation in the visual system of a patient, such as defocus and some aberrations. With the development of new materials, instruments and surgical techniques in ophthalmology, great progress has been achieved in the imaging capability of a pseudophakic eye implanted with an intraocular lens (IOL). From the very beginning, optical design has played an essential role in this progress. New IOL designs need, on the one hand, theoretical eye models able to predict optical imaging performance and, on the other hand, testing methods, verification through in vitro and in vivo measurements, and clinical validation. The implantation of an IOL requires a precise biometry of the eye, a prior calculation from physiological data, and an accurate position inside the eye. Otherwise, the effects of IOL calculation errors or misplacements degrade the image very quickly. The incorporation of wavefront aberrometry into clinical ophthalmology practice has motivated new designs of IOLs to compensate for high-order aberrations to some extent. Thus, for instance, IOLs with an aspheric design have the potential to improve optical performance and contrast sensitivity by reducing the positive spherical aberration of the human cornea. Monofocal IOLs cause a complete loss of accommodation that requires further correction for either distance or near vision. Multifocal IOLs address this limitation using the principle of simultaneous vision. Some multifocal IOLs include a diffractive zone that covers the aperture partly or totally. Reduced image contrast and undesired visual phenomena, such as halos and glare, have been associated with the performance of multifocal IOLs. Based on a different principle, accommodating IOLs rely on the effort of the ciliary body to increase the effective power of the optical system of the eye in near vision. Finally, we present a theoretical approach that considers the modification of less conventional ocular parameters to compensate for possible refractive errors after the IOL implant.

  3. The Minnesota Grading System of eye bank eyes for age-related macular degeneration.

    PubMed

    Olsen, Timothy W; Feng, Xiao

    2004-12-01

    The Minnesota Grading System (MGS) is a method to evaluate human eye bank eyes and determine the level of age-related macular degeneration (AMD), by using criteria and definitions from the Age-Related Eye Disease Study (AREDS). Donor eyes (108 pairs) from the Minnesota Lions Eye Bank were cut circumferentially at the pars plana to remove the anterior segment. A 1000 +/- 2.5-microm ruby sphere was placed on the optic nerve as a size reference. A digital, high-resolution, color macular photograph was taken through a dissecting microscope. The neurosensory retina was removed from one globe of the pair. The underlying retinal pigment epithelium was rephotographed, localizing the fovea with a proportional triangle. A grid was superimposed in the macular photographs and images were graded according to AREDS criteria. Twenty pairs were dissected bilaterally and graded for symmetry. Eighty-eight globes were graded into one of four MGS categories. Nineteen (95%) of 20 globes had symmetric grades. The MGS provides a methodology to grade donor tissue from eye bank eyes to correspond to the AREDS classification system. Donor tissue may be used for subsequent molecular analysis, including genomics and proteomics.

  4. In utero mouse embryonic imaging with OCT for ophthalmologic research

    NASA Astrophysics Data System (ADS)

    Syed, Saba H.; Larina, Irina V.; Dickinson, Mary E.; Larin, Kirill V.

    2011-03-01

    Live imaging of the eye during embryonic development in a mammalian model is important for understanding dynamic aspects of normal and abnormal eye morphogenesis. In this study, we used Swept Source Optical Coherence Tomography (SS-OCT) for live structural imaging of the mouse embryonic eye through the uterine wall. The eye structure was reconstructed in mouse embryos at 13.5 to 17.5 days post coitus (dpc). Despite the limited imaging depth of OCT in turbid tissues, we were able to visualize the whole eye globe at these stages. These results suggest that live in utero OCT imaging is a useful tool for studying embryonic eye development in the mouse model.

  5. Identification of the optic nerve head with genetic algorithms.

    PubMed

    Carmona, Enrique J; Rincón, Mariano; García-Feijoó, Julián; Martínez-de-la-Casa, José M

    2008-07-01

    This work proposes creating an automatic system to locate and segment the optic nerve head (ONH) in eye fundus photographic images using genetic algorithms. Domain knowledge is used to create a set of heuristics that guide the various steps involved in the process. Initially, using an eye fundus colour image as input, a set of hypothesis points is obtained that exhibit geometric properties and intensity levels similar to those of ONH contour pixels. Next, a genetic algorithm is used to find an ellipse containing the maximum number of hypothesis points in an offset of its perimeter, subject to some constraints. The ellipse thus obtained is the approximation to the ONH. The segmentation method is tested on a sample of 110 eye fundus images belonging to 55 patients with glaucoma (23.1%) or ocular hypertension (76.9%), randomly selected from an eye fundus image base belonging to the Ophthalmology Service at Miguel Servet Hospital, Saragossa (Spain). The results obtained are competitive with those in the literature. The method's generalization capability is reinforced when it is applied to an image base different from the one used in our study, yielding a discrepancy curve very similar to the one obtained on our image base. In addition, the robustness of the proposed method can be seen in the high percentage of images obtained with a discrepancy delta<5 (96% and 99% in our image base and the other image base, respectively). The results also confirm the hypothesis that the ONH contour can be properly approximated with a non-deformable ellipse. Another important aspect of the method is that it directly provides the parameters characterising the shape of the papilla: the lengths of its major and minor axes, its centre, and its orientation with regard to the horizontal position.
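
    The core of the approach is a genetic algorithm that searches for the ellipse whose perimeter band captures the most hypothesis points. The sketch below is a much-simplified version of that idea (an implicit-equation distance approximation, truncation selection, Gaussian mutation, and synthetic points); it is not the authors' implementation, and the parameter ranges are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def band_count(params, pts, tol=3.0):
    """Count points lying within roughly tol pixels of the ellipse perimeter."""
    cx, cy, a, b, theta = params
    c, s = np.cos(theta), np.sin(theta)
    u = (pts[:, 0] - cx) * c + (pts[:, 1] - cy) * s
    v = -(pts[:, 0] - cx) * s + (pts[:, 1] - cy) * c
    r = np.sqrt((u / a) ** 2 + (v / b) ** 2)             # ~1 on the perimeter
    return int(np.sum(np.abs(r - 1.0) * min(a, b) < tol))

def ga_fit_ellipse(pts, pop=60, gens=200):
    """Toy GA: chromosomes are (cx, cy, a, b, theta); fitness is band_count."""
    lo = np.array([pts[:, 0].min(), pts[:, 1].min(), 20.0, 20.0, 0.0])
    hi = np.array([pts[:, 0].max(), pts[:, 1].max(), 120.0, 120.0, np.pi])
    popn = rng.uniform(lo, hi, size=(pop, 5))
    for _ in range(gens):
        fitness = np.array([band_count(ind, pts) for ind in popn])
        parents = popn[np.argsort(fitness)[-pop // 2:]]             # keep fittest half
        children = parents + rng.normal(0.0, 2.0, parents.shape)    # Gaussian mutation
        popn = np.vstack([parents, np.clip(children, lo, hi)])
    fitness = np.array([band_count(ind, pts) for ind in popn])
    return popn[np.argmax(fitness)]

# Hypothesis points scattered around an ONH-like ellipse, for illustration only.
t = rng.uniform(0, 2 * np.pi, 150)
pts = np.column_stack([300 + 60 * np.cos(t), 250 + 45 * np.sin(t)])
pts += rng.normal(0, 2, pts.shape)
print(ga_fit_ellipse(pts))   # approximate (cx, cy, a, b, theta)
```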

  6. Quantitative phase imaging of retinal cells (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    LaForest, Timothé; Carpentras, Dino; Kowalczuk, Laura; Behar-Cohen, Francine; Moser, Christophe

    2017-02-01

    The vision process is governed by several cell layers of the retina. Before reaching the photoreceptors, light entering the eye has to pass through a layer of ganglion cells and neurons a few hundred micrometers thick. Macular degeneration is a non-curable disease of the macula occurring with age. This disease can be diagnosed at an early stage by imaging neuronal cells in the retina and observing their death chronically. These cells are phase objects located on a background that presents an absorption pattern, and they are therefore difficult to see in vivo with standard imaging techniques. Phase imaging methods usually need the illumination system to be on the opposite side of the sample with respect to the imaging system. This is a constraint and a challenge for phase imaging in vivo. Recently, the possibility of performing phase-contrast imaging from one side using properties of scattering media has been shown. This phase-contrast imaging is based on the back-illumination generated by the sample itself. Here, we present a reflection phase imaging technique based on oblique back-illumination. The oblique back-illumination creates a dark-field image of the sample. Generating asymmetric oblique illumination allows a differential phase contrast image to be obtained, which in turn can be processed to recover a quantitative phase image. In the case of the eye, trans-scleral illumination can generate oblique incident light on the retina and the choroidal layer. The back-reflected light is then collected by the eye lens to produce a dark-field image. We show experimental results of retinal phase images in ex vivo samples of human and pig retina.
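
    The differential phase contrast step described here is, in essence, a normalized difference of two images acquired under opposite oblique back-illumination; quantitative phase recovery then requires deconvolution with a phase transfer function, which is omitted. A minimal sketch of the DPC step, with hypothetical image data:

```python
import numpy as np

def differential_phase_contrast(i_left, i_right, eps=1e-6):
    """Normalized difference of two oppositely oblique-illuminated images.
    The result is proportional to the phase gradient along the illumination axis."""
    i_left = i_left.astype(float)
    i_right = i_right.astype(float)
    return (i_left - i_right) / (i_left + i_right + eps)

# Hypothetical pair of back-illuminated retinal frames, illustration only.
rng = np.random.default_rng(0)
i_left = 100 + rng.normal(0, 1, (256, 256))
i_right = 100 + rng.normal(0, 1, (256, 256))
dpc = differential_phase_contrast(i_left, i_right)
print(dpc.shape, dpc.mean())
```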

  7. High-Frequency Ultrasonic Imaging of the Anterior Segment Using an Annular Array Transducer

    PubMed Central

    Silverman, Ronald H.; Ketterling, Jeffrey A.; Coleman, D. Jackson

    2006-01-01

    Objective: Very-high-frequency (>35 MHz) ultrasound (VHFU) allows imaging of anterior segment structures of the eye with a resolution of less than 40 μm. The low focal ratio of VHFU transducers, however, results in a depth of field (DOF) of less than 1 mm. Our aim was to develop a high-frequency annular array transducer for ocular imaging with improved DOF, sensitivity and resolution compared to conventional transducers. Design: Experimental study. Participants: Cadaver eyes, ex vivo cow eyes, in vivo rabbit eyes. Methods: A spherically curved annular array ultrasound transducer was fabricated. The array consisted of five concentric rings of equal area, had an overall aperture of 6 mm and a geometric focus of 12 mm. The nominal center frequency of all array elements was 40 MHz. An experimental system was designed in which a single array element was pulsed and echo data recorded from all elements. By sequentially pulsing each element, echo data were acquired for all 25 transmit/receive annuli combinations. The echo data were then synthetically focused and composite images produced. Transducer operation was tested by scanning a test object consisting of a series of 25-μm diameter wires spaced at increasing range from the transducer. Imaging capabilities of the annular array were demonstrated in ex vivo bovine, in vivo rabbit and human cadaver eyes. Main Outcome Measures: Depth of field, resolution and sensitivity. Results: The wire scans verified the operation of the array and demonstrated a 6.0 mm DOF compared to the 1.0 mm DOF of a conventional single-element transducer of comparable frequency, aperture and focal length. B-mode images of ex vivo bovine, in vivo rabbit and cadaver eyes showed that while the single-element transducer had high sensitivity and resolution within 1–2 mm of its focus, the array with synthetic focusing maintained this quality over a 6 mm DOF. Conclusion: An annular array for high-resolution ocular imaging has been demonstrated. This technology offers improved depth of field, sensitivity and lateral resolution compared to single-element fixed-focus transducers currently used for VHFU imaging of the eye. PMID:17141314
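
    Synthetic focusing of the 25 transmit/receive annulus combinations amounts to delaying each recorded echo by the two-way time of flight to the point being reconstructed and summing. The sketch below is a heavily simplified, single-A-line version of that idea, treating each annulus by its mean radius and ignoring the shell curvature, apodization and interpolation; the array geometry, sampling rate and echo data are hypothetical:

```python
import numpy as np

C = 1540.0  # assumed speed of sound in tissue, m/s

def synthetic_focus(rf, ring_radii, fs, depths):
    """Delay-and-sum one axial line over all transmit/receive annulus pairs.

    rf[t, r, n]   : echo sample n for transmit ring t and receive ring r
    ring_radii[k] : mean radius of ring k (m), shell curvature ignored
    fs            : sampling rate (Hz); depths : axial positions to reconstruct (m)
    """
    n_rings, _, n_samp = rf.shape
    line = np.zeros(len(depths))
    for i, z in enumerate(depths):
        path = np.sqrt(ring_radii ** 2 + z ** 2)                 # one-way path per ring
        for t in range(n_rings):
            for r in range(n_rings):
                idx = int(round((path[t] + path[r]) / C * fs))   # two-way delay in samples
                if idx < n_samp:
                    line[i] += rf[t, r, idx]
    return line

# Hypothetical 5-ring array and random echo data, for illustration only.
rings = np.linspace(0.5e-3, 2.8e-3, 5)
rf = np.random.default_rng(0).normal(size=(5, 5, 4096))
line = synthetic_focus(rf, rings, fs=200e6, depths=np.linspace(8e-3, 16e-3, 200))
print(line.shape)
```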

  8. High-frequency ultrasonic imaging of the anterior segment using an annular array transducer.

    PubMed

    Silverman, Ronald H; Ketterling, Jeffrey A; Coleman, D Jackson

    2007-04-01

    Very high-frequency ultrasound (VHFU; >35 megahertz [MHz]) allows imaging of anterior segment structures of the eye with a resolution of less than 40 microm. The low focal ratio of VHFU transducers, however, results in a depth of field (DOF) of less than 1 mm. The aim was to develop a high-frequency annular array transducer for ocular imaging with improved DOF, sensitivity, and resolution compared with conventional transducers. Experimental study. Cadaver eyes, ex vivo cow eyes, in vivo rabbit eyes. A spherically curved annular array ultrasound transducer was fabricated. The array consisted of 5 concentric rings of equal area, had an overall aperture of 6 mm, and a geometric focus of 12 mm. The nominal center frequency of all array elements was 40 MHz. An experimental system was designed in which a single array element was pulsed and echo data were recorded from all elements. By sequentially pulsing each element, echo data were acquired for all 25 transmit-and-receive annuli combinations. The echo data then were focused synthetically and composite images were produced. Transducer operation was tested by scanning a test object consisting of a series of 25-microm diameter wires spaced at increasing range from the transducer. Imaging capabilities of the annular array were demonstrated in ex vivo bovine, in vivo rabbit, and human cadaver eyes. Depth of field, resolution, and sensitivity. The wire scans verified the operation of the array and demonstrated a 6.0-mm DOF, compared with the 1.0-mm DOF of a conventional single-element transducer of comparable frequency, aperture, and focal length. B-mode images of ex vivo bovine, in vivo rabbit, and cadaver eyes showed that although the single-element transducer had high sensitivity and resolution within 1 to 2 mm of its focus, the array with synthetic focusing maintained this quality over a 6-mm DOF. An annular array for high-resolution ocular imaging has been demonstrated. This technology offers improved DOF, sensitivity, and lateral resolution compared with single-element fixed focus transducers currently used for VHFU imaging of the eye.

  9. Qualitative Assessment of Ultrasound Biomicroscopic Images Using Standard Photographs: The Liwan Eye Study

    PubMed Central

    Jiang, Yuzhen; Huang, Wenyong; Huang, Qunxiao; Zhang, Jian; Foster, Paul J.

    2010-01-01

    Objective. To classify anatomic features related to anterior chamber angles by a qualitative assessment system based on ultrasound biomicroscopy (UBM) images. Methods. Cases of primary angle-closure suspect (PACS), defined by pigmented trabecular meshwork that is not visible in two or more quadrants on static gonioscopy (cases) and systematically selected subjects (1 of every 10) who did not meet this criterion (controls) were enrolled during a population-based survey in Guangzhou, China. All subjects underwent UBM examination. A set of standard UBM images was used to qualitatively classify anatomic features related to the angle configuration, including iris thickness, iris convexity, iris angulation, ciliary body size, and ciliary process position. All analysis was conducted on right eye images. Results. Based on the qualitative grades, the difference in overall iris thickness between gonioscopically narrow eyes (n = 117) and control eyes (n = 57) was not statistically significant. The peripheral one third of the iris tended to be thicker in all quadrants of the PACS eyes, although the difference was statistically significant only in the superior quadrant (P = 0.008). No significant differences were found in the qualitative classifications of iris insertion, iris angulation, ciliary body size, and ciliary process position. The findings were similar when compared with the control group of eyes with wide angles in all quadrants. Conclusions. Basal iris thickness seems to be more relevant to narrow angle configuration than to overall iris thickness. Otherwise, the anterior rotation and size of the ciliary body, the iris insertion, and the overall iris thickness are comparable in narrow- and wide-angle eyes. PMID:19834039

  10. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The point correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system by assuming that the contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
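
    After the projection matrices are recovered with the normalized eight-point algorithm, each matched vessel point is reconstructed by linear (DLT) triangulation. A minimal sketch of that last step, with hypothetical projection matrices and a single correspondence (the eight-point estimation itself is omitted):

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one correspondence.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]        # inhomogeneous 3-D point

# Hypothetical projection matrices and a matched vessel point, illustration only.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X = triangulate_point(P1, P2, x1=(0.21, 0.13), x2=(0.19, 0.13))
print(X)   # recovered 3-D point for this toy geometry
```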

  11. Anisotropic responses to motion toward and away from the eye

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1986-01-01

    When a rigid object moves toward the eye, it is usually perceived as being rigid. However, in the case of motion away from the eye, the motion and structure of the object are perceived nonveridically, with the percept tending to reflect the nonrigid transformations that are present in the retinal image. This difference in response to motion to and from the observer was quantified in an experiment using wire-frame computer-generated boxes which moved toward and away from the eye. Two theoretical systems are developed by which uniform three-dimensional velocity can be recovered from an expansion pattern of nonuniform velocity vectors. It is proposed that the human visual system uses two similar systems for processing motion in depth. The mechanism used for motion away from the eye produces perceptual errors because it is not suited to objects with a depth component.

  12. Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics

    NASA Astrophysics Data System (ADS)

    Tsai, Meng-Che; Lee, Tsung-Xian

    2017-02-01

    Owing to worldwide trends in portable devices and illumination technology, research interest in laser diode applications has boomed in recent years. One popular and promising LD application is the near-eye display used in VR/AR. An ideal near-eye display needs to provide high-resolution, wide-FOV imagery with compact magnifying optics, and long battery life for prolonged use. However, previous designs have not reached high light-utilization efficiency in the illumination and imaging optical systems, which should be raised as much as possible to increase wearing comfort. To meet these needs, a waveguide illumination system for a near-eye display is presented in this paper. We focus on proposing a high-efficiency RGB LD light engine which reduces power consumption and increases the flexibility of the mechanical design by using freeform TIR reflectors instead of beam splitters. With these structures, the total system efficiency of the near-eye display is successfully increased, and the improved results in efficiency and fabrication tolerance of near-eye displays are shown in this paper.

  13. Holocamera for 3-D micrography of the alert human eye

    NASA Astrophysics Data System (ADS)

    Tokuda, A. R.; Auth, D. C.; Bruckner, A. P.

    1980-07-01

    A holocamera that safely records holograms of the full depth of the alert human eye with a spatial resolution of about 20 microns is described. A single-mode argon-ion laser generating 2 W at 5145 A serves as the illuminating source. Holographic exposure times of 0.3 msec are achieved by means of a fail-safe electromechanical shutter system. Integrated retinal irradiance levels are well under the American National Standards Institute safety standards. Reconstructed real images are projected directly onto the vidicon faceplate of a closed-circuit TV system, enabling convenient scanning in the x-y-z dimensions of the reconstructed eyeball. Serially reconstructed holograms of cataractous rabbit eyes and normal human eyes are presented.

  14. Automated on-line fecal detection - digital eye guards against fecal contamination

    USDA-ARS?s Scientific Manuscript database

    Agricultural Research Service scientists in Athens, GA., have been granted a patent on a method to detect contaminants on food surfaces with imaging systems. Using a real-time imaging system in the processing plant, researchers Bob Windham, Kurt Lawrence, Bosoon Park, and Doug Smith in the ARS Poul...

  15. Molecular Evolution of Spider Vision: New Opportunities, Familiar Players.

    PubMed

    Morehouse, Nathan I; Buschbeck, Elke K; Zurek, Daniel B; Steck, Mireille; Porter, Megan L

    2017-08-01

    Spiders are among the world's most species-rich animal lineages, and their visual systems are likewise highly diverse. These modular visual systems, composed of four pairs of image-forming "camera" eyes, have taken on a huge variety of forms, exhibiting variation in eye size, eye placement, image resolution, and field of view, as well as sensitivity to color, polarization, light levels, and motion cues. However, despite this conspicuous diversity, our understanding of the genetic underpinnings of these visual systems remains shallow. Here, we review the current literature, analyze publicly available transcriptomic data, and discuss hypotheses about the origins and development of spider eyes. Our efforts highlight that there are many new things to discover from spider eyes, and yet these opportunities are set against a backdrop of deep homology with other arthropod lineages. For example, many (but not all) of the genes that appear important for early eye development in spiders are familiar players known from the developmental networks of other model systems (e.g., Drosophila). Similarly, our analyses of opsins and related phototransduction genes suggest that spider photoreceptors employ many of the same genes and molecular mechanisms known from other arthropods, with a hypothesized ancestral spider set of four visual and four nonvisual opsins. This deep homology provides a number of useful footholds into new work on spider vision and the molecular basis of its extant variety. We therefore discuss what some of these first steps might be in the hopes of convincing others to join us in studying the vision of these fascinating creatures.

  16. DUSTER: demonstration of an integrated LWIR-VNIR-SAR imaging system

    NASA Astrophysics Data System (ADS)

    Wilson, Michael L.; Linne von Berg, Dale; Kruer, Melvin; Holt, Niel; Anderson, Scott A.; Long, David G.; Margulis, Yuly

    2008-04-01

    The Naval Research Laboratory (NRL) and Space Dynamics Laboratory (SDL) are executing a joint effort, DUSTER (Deployable Unmanned System for Targeting, Exploitation, and Reconnaissance), to develop and test a new tactical sensor system specifically designed for Tier II UAVs. The system is composed of two coupled near-real-time sensors: EyePod (VNIR/LWIR ball gimbal) and NuSAR (L-band synthetic aperture radar). EyePod consists of a jitter-stabilized LWIR sensor coupled with a dual focal-length optical system and a bore-sighted high-resolution VNIR sensor. The dual focal-length design, coupled with precision pointing and step-stare capabilities, enables EyePod to conduct wide-area survey and high-resolution inspection missions from a single flight pass. NuSAR is being developed with partners Brigham Young University (BYU) and Artemis, Inc. and consists of a wideband L-band SAR capable of large-area survey and embedded real-time image formation. Both sensors employ standard Ethernet interfaces and provide geo-registered NITFS output imagery. In the fall of 2007, field tests were conducted with both sensors, the results of which will be presented.

  17. Lightweight helmet-mounted eye movement measurement system

    NASA Technical Reports Server (NTRS)

    Barnes, J. A.

    1978-01-01

    The helmet-mounted eye movement measuring system weighs 1,530 grams; the weight of the present aviators' helmet in standard form with the visor is 1,545 grams. The optical head is a standard NAC Eye-Mark. This optical head was mounted on a magnesium yoke, which in turn was attached to a slide cam mounted on the flight helmet. The slide cam allows one to adjust the eye-to-optics distance quite easily and to secure it so that the system remains in calibration. The design of the yoke and slide cam is such that the subject can, in an emergency, move the optical head forward and upward to the stowed and locked position atop the helmet. This feature was necessary for flight safety. The television camera used in the system is a solid-state General Electric TN-2000 with a charge injection device imager used as the vidicon.

  18. Intensity Changes in Typhoon Sinlaku and Typhoon Jangmi in Response to Varying Ocean and Atmospheric Conditions

    DTIC Science & Technology

    2011-03-01

    [Front-matter excerpt: list of figures and acronym list.] Figure 1: Radar image of the eye of Typhoon Cobra on 18 December 1944 from a ship located at the center of the area shown (from the NOAA Library). Acronyms include T-PARC: THORPEX-Pacific Asian Regional Campaign; TS: Tropical Storm; TUTT: Tropical Upper...

  19. High-resolution ultrasound imaging of the eye – a review

    PubMed Central

    Silverman, Ronald H

    2009-01-01

    This report summarizes the physics, technology and clinical application of ultrasound biomicroscopy (UBM) of the eye, in which frequencies of 35 MHz and above provide over a threefold improvement in resolution compared with conventional ophthalmic ultrasound systems. UBM allows imaging of anatomy and pathology involving the anterior segment, including regions obscured by overlying optically opaque anatomic or pathologic structures. UBM provides diagnostically significant information in conditions such as glaucoma, cysts and neoplasms, trauma and foreign bodies. UBM also can provide crucial biometric information regarding anterior segment structures, including the cornea and its constituent layers and the anterior and posterior chambers. Although UBM has now been in use for over 15 years, new technologies, including transducer arrays, pulse encoding and combination of ultrasound with light, offer the potential for significant advances in high-resolution diagnostic imaging of the eye. PMID:19138310

  20. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    NASA Astrophysics Data System (ADS)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children, and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes but with the features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and correspondingly good compliance.
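
    Dichoptic stimulation of this kind presents the full scene to the amblyopic eye while the fellow eye receives the same background with the feature of interest blanked out. A minimal sketch of that frame-splitting idea, with a hypothetical frame, region of interest, and a crude mean-fill (a real system would blend or inpaint the removed feature):

```python
import numpy as np

def dichoptic_pair(frame, roi):
    """Split one game/video frame into two eye channels: the amblyopic eye sees
    the full frame; the fellow (normal) eye sees the same background with the
    feature of interest blanked to the mean background level."""
    y0, y1, x0, x1 = roi
    fellow = frame.copy()
    fellow[y0:y1, x0:x1] = int(frame.mean())   # crude fill; real systems blend/inpaint
    return frame, fellow

# Hypothetical grayscale frame and region containing the feature of interest.
frame = np.random.default_rng(0).integers(0, 255, (480, 640), dtype=np.uint8)
amblyopic_view, fellow_view = dichoptic_pair(frame, roi=(200, 280, 300, 380))
print(amblyopic_view.shape, fellow_view[200:280, 300:380].std())  # blanked region is flat
```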

  1. Characteristics of the retinal images of the eye optical systems with implanted intraocular lenses

    NASA Astrophysics Data System (ADS)

    Siedlecki, Damian; Zając, Marek; Nowak, Jerzy

    2007-04-01

    Cataract, or opacity of the crystalline lens of the human eye, is one of the most frequent causes of blindness nowadays. Removing the pathologically altered crystalline lens and replacing it with an artificial implantable intraocular lens (IOL) is practically the only therapy for this condition. A wide variety of artificial IOL types exists on the medical market, differing in material and design (shape). In this paper six exemplary models of IOLs made of PMMA, acrylic and silicone are considered. The retinal image quality is analyzed numerically on the basis of the Liou-Brennan eye model with these IOLs inserted. Chromatic aberration as well as the polychromatic Point Spread Function and Modulation Transfer Function are calculated as the most adequate image quality measures. The calculations, made with Zemax software, show the importance of chromatic aberration correction.
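
    The MTF used here as an image-quality measure is, for a sampled PSF, the normalized modulus of its Fourier transform. A minimal sketch of that calculation, with a hypothetical Gaussian PSF standing in for a ray-traced one:

```python
import numpy as np

def mtf_from_psf(psf, pixel_pitch_mm):
    """1-D cut of the MTF: modulus of the 2-D FFT of a PSF, normalized to 1 at DC."""
    otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))
    mtf2d = np.abs(otf)
    n = psf.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_pitch_mm))  # cycles/mm
    return freqs[n // 2:], mtf2d[n // 2, n // 2:]   # cut along the +u axis

# Hypothetical Gaussian PSF sampled on a 256x256 grid with 1 um (0.001 mm) pixels.
n, pitch = 256, 0.001
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * pitch
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 0.005 ** 2))
f, mtf = mtf_from_psf(psf, pitch)
print(f[:5], mtf[:5])
```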

  2. High-resolution ultrasound imaging of the eye - a review.

    PubMed

    Silverman, Ronald H

    2009-01-01

    This report summarizes the physics, technology and clinical application of ultrasound biomicroscopy (UBM) of the eye, in which frequencies of 35 MHz and above provide over a threefold improvement in resolution compared with conventional ophthalmic ultrasound systems. UBM allows imaging of anatomy and pathology involving the anterior segment, including regions obscured by overlying optically opaque anatomic or pathologic structures. UBM provides diagnostically significant information in conditions such as glaucoma, cysts and neoplasms, trauma and foreign bodies. UBM also can provide crucial biometric information regarding anterior segment structures, including the cornea and its constituent layers and the anterior and posterior chambers. Although UBM has now been in use for over 15 years, new technologies, including transducer arrays, pulse encoding and combination of ultrasound with light, offer the potential for significant advances in high-resolution diagnostic imaging of the eye.

  3. Aqueous Angiography–Mediated Guidance of Trabecular Bypass Improves Angiographic Outflow in Human Enucleated Eyes

    PubMed Central

    Huang, Alex S.; Saraswathy, Sindhu; Dastiridou, Anna; Begian, Alan; Mohindroo, Chirayu; Tan, James C. H.; Francis, Brian A.; Hinton, David R.; Weinreb, Robert N.

    2016-01-01

    Purpose: To assess the ability of trabecular micro-bypass stents to improve aqueous humor outflow (AHO) in regions initially devoid of AHO as assessed by aqueous angiography. Methods: Enucleated human eyes (14 total from 7 males and 3 females [ages 52–84]) were obtained from an eye bank within 48 hours of death. Eyes were oriented by inferior oblique insertion, and aqueous angiography was performed with indocyanine green (ICG; 0.4%) or fluorescein (2.5%) at 10 mm Hg. With an angiographer, infrared and fluorescent images were acquired. Concurrent anterior segment optical coherence tomography (OCT) was performed, and fixable fluorescent dextrans were introduced into the eye for histologic analysis of angiographically positive and negative areas. Experimentally, some eyes (n = 11) first received ICG aqueous angiography to determine angiographic patterns. These eyes then underwent trabecular micro-bypass sham or stent placement in regions initially devoid of angiographic signal. This was followed by fluorescein aqueous angiography to query the effects. Results: Aqueous angiography in human eyes yielded high-quality images with segmental patterns. Distally, angiographically positive but not negative areas demonstrated intrascleral lumens on OCT images. Aqueous angiography with fluorescent dextrans led to their trapping in AHO pathways. Trabecular bypass but not sham in regions initially devoid of ICG aqueous angiography led to increased aqueous angiography as assessed by fluorescein (P = 0.043). Conclusions: Using sequential aqueous angiography in an enucleated human eye model system, regions initially without angiographic flow or signal could be recruited for AHO using a trabecular bypass stent. PMID:27588614

  4. SU-C-BRB-06: Utilizing 3D Scanner and Printer for Dummy Eye-Shield: Artifact-Free CT Images of Tungsten Eye-Shield for Accurate Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J; Lee, J; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul

    Purpose: To evaluate the effect of a tungsten eye-shield on the dose distribution of a patient. Methods: A 3D scanner was used to extract the dimension and shape of a tungsten eye-shield in the STL format. Scanned data was transferred into a 3D printer. A dummy eye shield was then produced using bio-resin (3D systems, VisiJet M3 Proplast). For a patient with mucinous carcinoma, the planning CT was obtained with the dummy eye-shield placed on the patient's right eye. Field shaping of 6 MeV was performed using a patient-specific cerrobend block on the 15 × 15 cm² applicator. The gantry angle was 330° to cover the planning target volume near the lens. EGS4/BEAMnrc was commissioned from our measurement data from a Varian 21EX. For the CT-based dose calculation using EGS4/DOSXYZnrc, the CT images were converted to a phantom file through the ctcreate program. The phantom file had the same resolution as the planning CT images. By assigning the CT numbers of the dummy eye-shield region to 17000, the real dose distributions below the tungsten eye-shield were calculated in EGS4/DOSXYZnrc. In the TPS, the CT number of the dummy eye-shield region was assigned to the maximum allowable CT number (3000). Results: As compared to the maximum dose, the MC dose on the right lens or below the eye shield area was less than 2%, while the corresponding RTP calculated dose was an unrealistic value of approximately 50%. Conclusion: Utilizing a 3D scanner and a 3D printer, a dummy eye-shield for electron treatment can be easily produced. The artifact-free CT images were successfully incorporated into the CT-based Monte Carlo simulations. The developed method was useful in predicting the realistic dose distributions around the lens blocked with the tungsten shield.
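
    The key step is overriding the CT numbers of the dummy eye-shield voxels before the phantom is built, so the Monte Carlo phantom sees tungsten-like material where the bio-resin dummy sits. A minimal sketch of that voxel override, with a hypothetical CT volume and mask (the actual EGS4/ctcreate phantom file handling is not shown):

```python
import numpy as np

def override_region(ct_volume, mask, new_hu):
    """Return a copy of the CT volume with the masked region set to new_hu.
    For EGSnrc-style dose calculation, the overridden CT numbers are later
    mapped to material and density by the phantom-creation step (e.g. ctcreate)."""
    out = ct_volume.copy()
    out[mask] = new_hu
    return out

# Hypothetical CT volume and eye-shield mask, illustration only.
ct = np.zeros((64, 128, 128), dtype=np.int16)
shield_mask = np.zeros_like(ct, dtype=bool)
shield_mask[30:34, 60:80, 60:80] = True
ct_for_mc = override_region(ct, shield_mask, new_hu=17000)   # tungsten stand-in, as in the report
ct_for_tps = override_region(ct, shield_mask, new_hu=3000)   # TPS maximum allowable CT number
```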

  5. Elimination of coherent noise in a coherent light imaging system

    NASA Technical Reports Server (NTRS)

    Grebowsky, G. J.; Hermann, R. L.; Paull, H. B.; Shulman, A. R.

    1970-01-01

    Optical imaging systems using coherent light introduce objectionable noise into the output image plane. Dust and bubbles on and in lenses cause most of the noise in the output image. This noise usually appears as bull's-eye diffraction patterns in the image. By rotating the lens about the optical axis these diffraction patterns can be essentially eliminated. The technique does not destroy the spatial coherence of the light and permits spatial filtering of the input plane.

  6. A novel biometric X-ray backscatter inspection of dangerous materials based on a lobster-eye objective

    NASA Astrophysics Data System (ADS)

    Xu, Jie; Wang, Xin; Mu, Baozhong; Zhan, Qi; Xie, Qing; Li, Yaran; Chen, Yifan; He, Yanan

    2016-10-01

    In order to counter drug-related crimes effectively, and to safeguard homeland security as well as public safety, it is important to inspect for drugs, explosives and other contraband quickly and accurately in the express mail system, luggage, vehicles and other objects. In this paper, we discuss an X-ray backscatter inspection system based on a novel lobster-eye X-ray objective, which is an effective inspection technology for drugs, explosives and other contraband. Low atomic number materials, such as drugs and explosives, produce strong Compton scattering when irradiated by X-rays, much stronger than that from high atomic number materials such as common metals. By detecting the intensity of the scattered signal, it is possible to distinguish between organic and inorganic materials. The lobster-eye X-ray optical system imitates the reflective eyes of lobsters; its field of view can be made as large as desired, and it is practical to achieve a spatial resolution of several millimeters for finite-distance detection. A novel lobster-eye X-ray objective is designed based on modifying the Schmidt geometry with a multi-lens structure, so as to reduce the difference in resolution between the horizontal and vertical directions. Demonstration experiments of X-ray backscatter imaging were carried out. A suitcase, a wooden box and a tire, each with several typical samples hidden inside, were imaged by the X-ray backscatter inspection system based on a lobster-eye X-ray objective. The results show that this X-ray backscatter inspection system achieves a resolution of better than five millimeters over a field of view of more than two hundred millimeters at a 0.5 m object distance, and this can still be improved.

  7. Methodological Aspects of Cognitive Rehabilitation with Eye Movement Desensitization and Reprocessing (EMDR)

    PubMed Central

    Zarghi, Afsaneh; Zali, Alireza; Tehranidost, Mehdi

    2013-01-01

    A variety of nervous system components, such as the medulla, pons, midbrain, cerebellum, basal ganglia, and the parietal, frontal and occipital lobes, have a role in Eye Movement Desensitization and Reprocessing (EMDR) processes. Eye movement is performed to draw the client's attention to an external stimulus while the client concentrates on a certain internal subject. Eye movement guided by the therapist is the most common attentional stimulus. The role of eye movement in relation to cognitive processing mechanisms has been documented previously. A series of systematic experiments has shown that spontaneous eye movement is associated with emotional and cognitive changes and results in decreased excitement, flexibility in attention, memory processing, and enhanced semantic recall. Eye movement also decreases the clarity of memory images and the accompanying excitement. By using EMDR, we can reach parts of memory that were previously inaccessible or emotionally intolerable. Various studies emphasize the effectiveness of EMDR in treating phobias, pain, and dependent personality disorders. Consequently, owing to the involvement of multiple neural system components, this palliative method of treatment can also help to rehabilitate the neurocognitive system. PMID:25337334

  8. Range imaging pulsed laser sensor with two-dimensional scanning of transmitted beam and scanless receiver using high-aspect avalanche photodiode array for eye-safe wavelength

    NASA Astrophysics Data System (ADS)

    Tsuji, Hidenobu; Imaki, Masaharu; Kotake, Nobuki; Hirai, Akihito; Nakaji, Masaharu; Kameyama, Shumpei

    2017-03-01

    We demonstrate a range imaging pulsed laser sensor with two-dimensional scanning of a transmitted beam and a scanless receiver using a high-aspect avalanche photodiode (APD) array for the eye-safe wavelength. The system achieves a high frame rate and long-range imaging with a relatively simple sensor configuration. We developed a high-aspect APD array for the wavelength of 1.5 μm, a receiver integrated circuit, and a range and intensity detector. By combining these devices, we realized 160×120 pixels range imaging with a frame rate of 8 Hz at a distance of about 50 m.

  9. Real-time eye motion correction in phase-resolved OCT angiography with tracking SLO

    PubMed Central

    Braaf, Boy; Vienola, Kari V.; Sheehy, Christy K.; Yang, Qiang; Vermeer, Koenraad A.; Tiruveedhula, Pavan; Arathorn, David W.; Roorda, Austin; de Boer, Johannes F.

    2012-01-01

    In phase-resolved OCT angiography blood flow is detected from phase changes in between A-scans that are obtained from the same location. In ophthalmology, this technique is vulnerable to eye motion. We address this problem by combining inter-B-scan phase-resolved OCT angiography with real-time eye tracking. A tracking scanning laser ophthalmoscope (TSLO) at 840 nm provided eye tracking functionality and was combined with a phase-stabilized optical frequency domain imaging (OFDI) system at 1040 nm. Real-time eye tracking corrected eye drift and prevented discontinuity artifacts from (micro)saccadic eye motion in OCT angiograms. This improved the OCT spot stability on the retina and consequently reduced the phase-noise, thereby enabling the detection of slower blood flows by extending the inter-B-scan time interval. In addition, eye tracking enabled the easy compounding of multiple data sets from the fovea of a healthy volunteer to create high-quality eye motion artifact-free angiograms. High-quality images are presented of two distinct layers of vasculature in the retina and the dense vasculature of the choroid. Additionally we present, for the first time, a phase-resolved OCT angiogram of the mesh-like network of the choriocapillaris containing typical pore openings. PMID:23304647
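
    The core of the phase-resolved flow computation mentioned above is a phase difference between repeated B-scans acquired at the same location. The snippet below is a minimal, illustrative sketch with hypothetical array names, assuming complex-valued OCT B-scans are already available; it is not the authors' processing chain, which additionally involves phase stabilization, bulk-motion correction and eye tracking.

        import numpy as np

        def phase_resolved_flow(bscan_a, bscan_b):
            """Estimate flow-induced phase changes between two complex OCT
            B-scans taken at the same retinal location.

            bscan_a, bscan_b : complex ndarrays of shape (depth, a_scans).
            Returns the phase difference in radians, wrapped to [-pi, pi].
            """
            # Conjugate multiplication yields the phase difference,
            # implicitly weighted by the signal amplitude of both scans.
            product = bscan_b * np.conj(bscan_a)
            return np.angle(product)

        # Toy example: static background plus a small "vessel" region whose
        # phase shifts between the two acquisitions.
        rng = np.random.default_rng(0)
        a = np.exp(1j * rng.uniform(-np.pi, np.pi, (256, 512)))
        b = a.copy()
        b[100:120, 200:220] *= np.exp(1j * 0.8)   # simulated flow-induced phase shift
        dphi = phase_resolved_flow(a, b)
        print(dphi[110, 210], dphi[0, 0])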

  10. In vivo photothermal optical coherence tomography of gold nanorods in the mouse eye (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lapierre-Landry, Maryse; Gordon, Andrew Y.; Penn, John S.; Skala, Melissa C.

    2017-02-01

    Optical coherence tomography (OCT) has become standard in retinal imaging at the pre-clinical and clinical level by allowing non-invasive, three-dimensional imaging of the tissue structure. However, OCT lacks specificity to contrast agents that could be used for in vivo molecular imaging. We have performed in vivo photothermal optical coherence tomography (PT-OCT) of targeted gold nanorods in the mouse retina after the mice were injected systemically with the contrast agent. To our knowledge, we are the first to perform PT-OCT in the eye and image targeted gold nanorods with this technology. As a model of age-related wet macular degeneration, lesions were induced by laser photocoagulation in each mouse retina (n=12 eyes). Untargeted and targeted (anti-mouse CD102 antibody, labeling neovasculature) gold nanorods (peak absorption λ=750nm) were injected intravenously by tail-vein injection five days after lesion induction, and imaged the same day with PT-OCT. Our instrument is a spectral domain OCT system (λ=860nm) with a Titanium:Sapphire laser (λ=750nm) added to the beam path using a 50:50 coupler to heat the gold nanorods. We acquired PT-OCT volumes of one lesion per mouse eye. There was a significant increase in photothermal intensity per unit area of the lesion in the targeted gold nanorods group versus the saline control group and the untargeted gold nanorods group. This experiment demonstrates the feasibility of PT-OCT to image the distribution of molecular contrast agents in the mouse retina, including in highly scattering lesions. In the future we will use this method to identify new biomarkers linked with retinal disease.

  11. Anterior Eye Imaging with Optical Coherence Tomography

    NASA Astrophysics Data System (ADS)

    Huang, David; Li, Yan; Tang, Maolong

    The development of corneal and anterior segment optical coherence tomography (OCT) technology has advanced rapidly in recent years. The scan geometry and imaging wavelength are both important choices in designing anterior segment OCT systems. Rectangular scan geometry offers the least image distortion and is now used in most anterior OCT systems. The wavelength of the OCT light source affects resolution and penetration; the optimal choice of imaging wavelength (840, 1,050, or 1,310 nm) depends on the application of interest. Newer-generation Fourier-domain OCT technology can provide scan speeds 100-1000 times faster than time-domain technology. Various commercial anterior OCT systems are available on the market. A wide spectrum of diagnostic and surgical applications of anterior segment OCT has been investigated, including mapping of corneal and epithelial thicknesses, keratoconus screening, measurement of corneal refractive power, corneal surgery planning and evaluation in LASIK, intracorneal ring implantation, assessment of angle-closure glaucoma, anterior chamber biometry and intraocular lens implants, intraocular lens power calculation, and eye bank donor cornea screening.

  12. Effect that Smell Presentation Has on an Individual in Regards to Eye Catching and Memory

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Kanda, Koyori; Otake, Syunya

    If a person's eyes are more strongly attracted to target objects when a smell is matched to an important scene of a movie or commercial, the value of the image content will rise. In this paper, we describe an image system that can also present smells and explain, from gaze-point analysis, why the sense of presence improves when a smell is matched to the image. The relationship between eye-catching ability and the position of the viewed object was examined using footage of a scene in which someone eats three kinds of fruit. These objects were gazed at for a long time once their smells were released. When no smell was released, the gaze moved actively, trying to take in information from the entire screen. On the other hand, when a smell was presented, the subject became interested in the corresponding object and the gaze tended to stay within the narrow area surrounding it. Moreover, we investigated the effect on memory by attaching smells to flowers in a virtual flower shop using an immersive virtual reality system (HoloStageTM). The scented flowers were memorized more easily than in the scentless case. It seems that viewers obtain information actively by reacting to smell.

  13. Concept for image-guided vitreo-retinal fs-laser surgery: adaptive optics and optical coherence tomography for laser beam shaping and positioning

    NASA Astrophysics Data System (ADS)

    Matthias, Ben; Brockmann, Dorothee; Hansen, Anja; Horke, Konstanze; Knoop, Gesche; Gewohn, Timo; Zabic, Miroslav; Krüger, Alexander; Ripken, Tammo

    2015-03-01

    Fs-lasers are well established in ophthalmic surgery as high precision tools for corneal flap cutting during laser in situ keratomileusis (LASIK) and increasingly utilized for cutting the crystalline lens, e.g. in assisting cataract surgery. For addressing eye structures beyond the cornea, an intraoperative depth resolved imaging is crucial to the safety and success of the surgical procedure due to interindividual anatomical disparities. Extending the field of application even deeper to the posterior eye segment, individual eye aberrations cannot be neglected anymore and surgery with fs-laser is impaired by focus degradation. Our demonstrated concept for image-guided vitreo-retinal fs-laser surgery combines adaptive optics (AO) for spatial beam shaping and optical coherence tomography (OCT) for focus positioning guidance. The laboratory setup comprises an adaptive optics assisted 800 nm fs-laser system and is extended by a Fourier domain optical coherence tomography system. Phantom structures are targeted, which mimic tractional epiretinal membranes in front of excised porcine retina within an eye model. AO and OCT are set up to share the same scanning and focusing optics. A Hartmann-Shack sensor is employed for aberration measurement and a deformable mirror for aberration correction. By means of adaptive optics the threshold energy for laser induced optical breakdown is lowered and cutting precision is increased. 3D OCT imaging of typical ocular tissue structures is achieved with sufficient resolution and the images can be used for orientation of the fs-laser beam. We present targeted dissection of the phantom structures and its evaluation regarding retinal damage.

  14. Photorefraction Screens Millions for Vision Disorders

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Who would have thought that stargazing in the 1980s would lead to hundreds of thousands of schoolchildren seeing more clearly today? Collaborating with research ophthalmologists and optometrists, Marshall Space Flight Center scientists Joe Kerr and the late John Richardson adapted optics technology for eye screening methods using a process called photorefraction. Photorefraction consists of delivering a light beam into the eyes where it bends in the ocular media, hits the retina, and then reflects as an image back to a camera. A series of refinements and formal clinical studies followed their highly successful initial tests in the 1980s. Evaluating over 5,000 subjects in field tests, Kerr and Richardson used a camera system prototype with a specifically angled telephoto lens and flash to photograph a subject's eye. They then analyzed the image, the cornea and pupil in particular, for irregular reflective patterns. Early tests of the system with 1,657 Alabama children revealed that, while only 111 failed the traditional chart test, Kerr and Richardson's screening system found 507 abnormalities.

  15. Angle assessment by EyeCam, goniophotography, and gonioscopy.

    PubMed

    Baskaran, Mani; Perera, Shamira A; Nongpiur, Monisha E; Tun, Tin A; Park, Judy; Kumar, Rajesh S; Friedman, David S; Aung, Tin

    2012-09-01

    To compare EyeCam (Clarity Medical Systems, Pleasanton, CA) and goniophotography in detecting angle closure, using gonioscopy as the reference standard. In this hospital-based, prospective, cross-sectional study, participants underwent gonioscopy by a single observer, and EyeCam imaging and goniophotography by different operators. The anterior chamber angle in a quadrant was classified as closed if the posterior trabecular meshwork could not be seen. A masked observer categorized the eyes as per the number of closed quadrants, and an eye was classified as having angle closure if there were 2 or more quadrants of closure. Agreement between the methods was analyzed by κ statistic and comparison of area under receiver operating characteristic curves (AUC). Eighty-five participants (85 eyes) were included, the majority of whom were Chinese. Angle closure was detected in 38 eyes (45%) with gonioscopy, 40 eyes (47%) using EyeCam, and 40 eyes (47%) with goniophotography (P=0.69 in both comparisons, McNemar test). The agreement for angle closure diagnosis (by eye) between gonioscopy and the 2 imaging modalities was high (κ=0.86; 95% Confidence Interval (CI), 0.75-0.97), whereas the agreement between EyeCam and goniophotography was not as good (κ=0.72; 95% CI, 0.57-0.87); largely due to lack of agreement in the nasal and temporal quadrants (κ=0.55 to 0.67). The AUC for detecting eyes with gonioscopic angle closure was similar for goniophotography and EyeCam (AUC 0.93, sensitivity=94.7%, specificity=91.5%; P>0.95). EyeCam and goniophotography have similarly high sensitivity and specificity for the detection of gonioscopic angle closure.

  16. Eye movements reduce vividness and emotionality of "flashforwards".

    PubMed

    Engelhard, Iris M; van den Hout, Marcel A; Janssen, Wilco C; van der Beek, Jorinde

    2010-05-01

    Earlier studies have shown that eye movements during retrieval of disturbing images about past events reduce their vividness and emotionality, which may be due to both tasks competing for working memory resources. This study examined whether eye movements reduce vividness and emotionality of visual distressing images about feared future events: "flashforwards". A non-clinical sample was asked to select two images of feared future events, which were self-rated for vividness and emotionality. These images were retrieved while making eye movements or without a concurrent secondary task, and then vividness and emotionality were rated again. Relative to the no-dual task condition, eye movements while thinking of future-oriented images resulted in decreased ratings of image vividness and emotional intensity. Apparently, eye movements reduce vividness and emotionality of visual images about past and future feared events. This is in line with a working memory account of the beneficial effects of eye movements, which predicts that any task that taxes working memory during retrieval of disturbing mental images will be beneficial. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. Comparison of ultra-widefield fluorescein angiography with the Heidelberg Spectralis(®) noncontact ultra-widefield module versus the Optos(®) Optomap(®).

    PubMed

    Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd

    2013-01-01

    To compare ultra-widefield fluorescein angiography imaging using the Optos(®) Optomap(®) and the Heidelberg Spectralis(®) noncontact ultra-widefield module. Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos(®) panoramic P200Tx imaging system and the noncontact ultra-widefield module of the Heidelberg Spectralis(®) HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe(®) Photoshop(®) C5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Optos(®) imaging captured a total retinal area averaging 151,362 pixels (range 116,998 to 205,833 pixels), while the area captured using the Heidelberg Spectralis(®) averaged 101,786 pixels (range 73,424 to 116,319 pixels) (P = 0.0002). The average area per individual quadrant imaged by Optos(®) versus the Heidelberg Spectralis(®) was 32,373 vs 32,789 pixels superiorly (P = 0.91), 24,665 vs 26,117 pixels inferiorly (P = 0.71), 47,948 vs 20,645 pixels temporally (P = 0.0001), and 46,374 vs 22,234 pixels nasally (P = 0.0001). The Heidelberg Spectralis(®) imaged the superior and inferior retinal vasculature to a more distal point than the Optos(®) in nine of ten eyes (18 of 20 quadrants), while the Optos(®) imaged the nasal and temporal retinal vasculature to a more distal point than the Heidelberg Spectralis(®) in ten of ten eyes (20 of 20 quadrants). Both the Optos(®) and Heidelberg Spectralis(®) ultra-widefield imaging systems are excellent modalities that provide views of the peripheral retina. On a single nonsteered image, the Optos(®) Optomap(®) covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis(®) ultra-widefield module. The Optos(®) captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis(®) module imaged the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings, as well as the area imaged on steered montaged images, remains to be determined.
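
    The area comparison above reduces to counting pixels of an outlined retinal region, overall and per quadrant. A rough sketch follows, assuming the outlined region has already been converted to a binary mask; the quadrant split uses simple 90° wedges around a chosen centre, the nasal/temporal labels depend on eye laterality, and all names here are illustrative rather than the authors' Photoshop-based workflow.

        import numpy as np

        def quadrant_areas(mask, center):
            """Count mask pixels in total and in four 90-degree wedges around
            a (row, col) centre. The wedge facing image-right is labelled
            'temporal' purely for illustration; for a left eye it would be nasal."""
            rows, cols = np.indices(mask.shape)
            dy = rows - center[0]
            dx = cols - center[1]
            ang = np.degrees(np.arctan2(-dy, dx)) % 360   # 0 deg = image right, CCW
            quads = {
                "temporal": (ang < 45) | (ang >= 315),
                "superior": (ang >= 45) & (ang < 135),
                "nasal": (ang >= 135) & (ang < 225),
                "inferior": (ang >= 225) & (ang < 315),
            }
            out = {"total": int(mask.sum())}
            for name, sel in quads.items():
                out[name] = int((mask & sel).sum())
            return out

        # Toy example: an elliptical "imaged retina" mask.
        h, w = 400, 600
        rr, cc = np.indices((h, w))
        mask = ((rr - h / 2) / 180) ** 2 + ((cc - w / 2) / 280) ** 2 <= 1.0
        print(quadrant_areas(mask, (h // 2, w // 2)))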

  18. Magnetic resonance imaging study of eye congenital birth defects in mouse model

    PubMed Central

    Tucker, Zachary; Mongan, Maureen; Meng, Qinghang; Xia, Ying

    2017-01-01

    Purpose Embryonic eyelid closure is a well-documented morphogenetic episode in mammalian eye development. Detection of eyelid closure defect in humans is a major challenge because eyelid closure and reopen occur entirely in utero. As a consequence, congenital eye defects that are associated with failure of embryonic eyelid closure remain unknown. To fill the gap, we developed a mouse model of defective eyelid closure. This preliminary work demonstrates that the magnetic resonance imaging (MRI) approach can be used for the detection of extraocular muscle abnormalities in the mouse model. Methods Mice with either normal (Map3k1+/−) or defective (Map3k1−/−) embryonic eyelid closure were used in this study. Images of the extraocular muscles were obtained with a 9.4 T high resolution microimaging MRI system. The extraocular muscles were identified, segmented, and measured in each imaging slice using an in-house program. Results In agreement with histological findings, the imaging data show that mice with defective embryonic eyelid closure develop less extraocular muscle than normal mice. In addition, the size of the eyeballs was noticeably reduced in mice with defective embryonic eyelid closure. Conclusions We demonstrated that MRI can potentially be used for the study of extraocular muscle in the mouse model of the eye open-at-birth defect, despite the lack of specificity of muscle group provided by the current imaging resolution. PMID:28848319

  19. Teleretinal Imaging to Screen for Diabetic Retinopathy in the Veterans Health Administration

    PubMed Central

    Cavallerano, Anthony A.; Conlin, Paul R.

    2008-01-01

    Diabetes is the leading cause of adult vision loss in the United States and other industrialized countries. While the goal of preserving vision in patients with diabetes appears to be attainable, the process of achieving this goal poses a formidable challenge to health care systems. The large increase in the prevalence of diabetes presents practical and logistical challenges to providing quality care to all patients with diabetes. Given this challenge, the Veterans Health Administration (VHA) is increasingly using information technology as a means of improving the efficiency of its clinicians. The VHA has taken advantage of a mature computerized patient medical record system by integrating a program of digital retinal imaging with remote image interpretation (teleretinal imaging) to assist in providing eye care to the nearly 20% of VHA patients with diabetes. We describe this clinical pathway for accessing patients with diabetes in ambulatory care settings, evaluating their retinas for level of diabetic retinopathy with a teleretinal imaging system, and prioritizing their access into an eye and health care program in a timely and appropriate manner. PMID:19885175

  20. Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.

  1. Sensitivity and specificity of a new scoring system for diabetic macular oedema detection using a confocal laser imaging system

    PubMed Central

    Tong, L; Ang, A; Vernon, S; Zambarakji, H; Bhan, A; Sung, V; Page, S

    2001-01-01

    AIM—To assess the use of the Heidelberg retina tomograph (HRT) in screening for sight threatening diabetic macular oedema in a hospital diabetic clinic, using a new subjective analysis system (SCORE).
METHODS—200 eyes of 100 consecutive diabetic patients attending a diabetologist's clinic were studied; all eyes had an acuity of 6/9 or better. All patients underwent clinical examination by an ophthalmologist. Using the HRT, one good scan centred on the fovea was obtained for each eye. A System for Classification and Ordering of Retinal Edema (SCORE) was developed using subjective assessment of the colour map and the reflectivity image. The interobserver agreement of this method for detecting macular oedema was assessed by two observers (ophthalmic trainees) who were familiarised with SCORE by studying standard pictures of eyes not in the study. All scans were graded from 0-6 and test-positive cases were defined as having a SCORE value of 0-2. The sensitivity of SCORE was assessed by pooling the data with an additional 88 scans of 88 eyes in order to reduce the confidence interval of the index.
RESULTS—12 eyes in eight out of the 100 patients had macular oedema clinically. Three scans in three patients could not be analysed because of poor scan quality. In the additional group of scans 76 out of 88 eyes had macular oedema clinically. The scoring system had a specificity of 99% (95% CI 96-100) and sensitivity of 67% (95% CI 57-76). The predictive value of a negative test was 87% (95% CI 82-99), and that of a positive test was 95% (95% CI 86-99). The mean difference of the SCORE value between two observers was -0.2 (95% CI -0.5 to +0.07).
CONCLUSIONS—These data suggest that SCORE is potentially useful for detecting diabetic macular oedema in hospital diabetic patients.

 PMID:11133709

  2. Quantitative Assessment of Eye Phenotypes for Functional Genetic Studies Using Drosophila melanogaster

    PubMed Central

    Iyer, Janani; Wang, Qingyu; Le, Thanh; Pizzo, Lucilla; Grönke, Sebastian; Ambegaokar, Surendra S.; Imai, Yuzuru; Srivastava, Ashutosh; Troisí, Beatriz Llamusí; Mardon, Graeme; Artero, Ruben; Jackson, George R.; Isaacs, Adrian M.; Partridge, Linda; Lu, Bingwei; Kumar, Justin P.; Girirajan, Santhosh

    2016-01-01

    About two-thirds of the vital genes in the Drosophila genome are involved in eye development, making the fly eye an excellent genetic system to study cellular function and development, neurodevelopment/degeneration, and complex diseases such as cancer and diabetes. We developed a novel computational method, implemented as Flynotyper software (http://flynotyper.sourceforge.net), to quantitatively assess the morphological defects in the Drosophila eye resulting from genetic alterations affecting basic cellular and developmental processes. Flynotyper utilizes a series of image processing operations to automatically detect the fly eye and the individual ommatidium, and calculates a phenotypic score as a measure of the disorderliness of ommatidial arrangement in the fly eye. As a proof of principle, we tested our method by analyzing the defects due to eye-specific knockdown of Drosophila orthologs of 12 neurodevelopmental genes to accurately document differential sensitivities of these genes to dosage alteration. We also evaluated eye images from six independent studies assessing the effect of overexpression of repeats, candidates from peptide library screens, and modifiers of neurotoxicity and developmental processes on eye morphology, and show strong concordance with the original assessment. We further demonstrate the utility of this method by analyzing 16 modifiers of sine oculis obtained from two genome-wide deficiency screens of Drosophila and accurately quantifying the effect of its enhancers and suppressors during eye development. Our method will complement existing assays for eye phenotypes, and increase the accuracy of studies that use fly eyes for functional evaluation of genes and genetic interactions. PMID:26994292
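
    Flynotyper's phenotypic score itself is not reproduced here, but the idea of quantifying the disorderliness of ommatidial arrangement from detected ommatidium centres can be illustrated with a simple nearest-neighbour statistic. The snippet below is an illustrative surrogate under that assumption, not the published algorithm, and the detection step that produces the centroids is omitted.

        import numpy as np
        from scipy.spatial import cKDTree

        def disorder_score(centroids, k=6):
            """Crude disorderliness measure for a hexagonal-like lattice of
            ommatidium centres: the coefficient of variation of the distances
            to each point's k nearest neighbours. A perfect lattice gives a
            value near zero; disrupted packing gives larger values."""
            pts = np.asarray(centroids, dtype=float)
            tree = cKDTree(pts)
            # distances to the k nearest neighbours (excluding the point itself)
            dists, _ = tree.query(pts, k=k + 1)
            nn = dists[:, 1:]
            return float(nn.std() / nn.mean())

        # Toy example: a regular grid vs. the same grid with positional jitter.
        xs, ys = np.meshgrid(np.arange(20), np.arange(20))
        grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
        rng = np.random.default_rng(1)
        jittered = grid + rng.normal(scale=0.3, size=grid.shape)
        print(disorder_score(grid), disorder_score(jittered))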

  3. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    PubMed Central

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2008-01-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt’s macular dystrophy and retinitis pigmentosa. PMID:17429482

  4. Photoreceptor counting and montaging of en-face retinal images from an adaptive optics fundus camera

    NASA Astrophysics Data System (ADS)

    Xue, Bai; Choi, Stacey S.; Doble, Nathan; Werner, John S.

    2007-05-01

    A fast and efficient method for quantifying photoreceptor density in images obtained with an en-face flood-illuminated adaptive optics (AO) imaging system is described. To improve accuracy of cone counting, en-face images are analyzed over extended areas. This is achieved with two separate semiautomated algorithms: (1) a montaging algorithm that joins retinal images with overlapping common features without edge effects and (2) a cone density measurement algorithm that counts the individual cones in the montaged image. The accuracy of the cone density measurement algorithm is high, with >97% agreement for a simulated retinal image (of known density, with low contrast) and for AO images from normal eyes when compared with previously reported histological data. Our algorithms do not require spatial regularity in cone packing and are, therefore, useful for counting cones in diseased retinas, as demonstrated for eyes with Stargardt's macular dystrophy and retinitis pigmentosa.
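
    A common, simple way to obtain cone counts from an AO flood-illuminated image is local-maximum detection after smoothing; dividing the count by the analysed retinal area then gives cone density. The sketch below illustrates that generic approach only; it is not the semiautomated algorithm described above, and the filter sizes and threshold are arbitrary assumptions.

        import numpy as np
        from scipy import ndimage

        def count_cones(image, sigma=1.0, min_distance=3, threshold_rel=0.3):
            """Detect bright, roughly cone-sized spots as local maxima of a
            smoothed image and return their count and coordinates."""
            smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
            # A pixel is a local maximum if it equals the maximum of its neighbourhood.
            footprint = np.ones((2 * min_distance + 1,) * 2)
            local_max = ndimage.maximum_filter(smoothed, footprint=footprint) == smoothed
            thresh = smoothed.min() + threshold_rel * (smoothed.max() - smoothed.min())
            peaks = np.argwhere(local_max & (smoothed > thresh))
            return len(peaks), peaks

        # Toy example: synthetic image with Gaussian "cones" on a noisy background.
        rng = np.random.default_rng(2)
        img = rng.normal(0, 0.05, (200, 200))
        yy, xx = np.mgrid[0:200, 0:200]
        for cy, cx in rng.integers(10, 190, size=(150, 2)):
            img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 1.5 ** 2))
        n, coords = count_cones(img)
        print("detected cones:", n)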

  5. Femtosecond photography lessons

    NASA Astrophysics Data System (ADS)

    Fanchenko, S. D.

    1999-06-01

    Ancient scientists, sailors, warriors, physicians, etc. perceived space by means of their eye-vision system. Nowadays the same kinds of people use eyeglasses, telescopes, microscopes, and image converters. All these devices adapt the necessary magnification, intensification gain and image spectrum to the eyes. The human brain processes the image data offered to it in a format pertaining to the eyes. Hence, the cognition of images can be regarded as a direct measurement. Time-scale converters, by contrast, turned out to be harder to build than spatial-scale converters; hence, the development of high-speed photography (HSP) has continued for more than a hundred and fifty years. The recent pico- and femtosecond HSP branch sprang up in 1949 at the Kurchatov Institute, its cradle. Much about HSP has already been publicized. Instead of reprinting what is already well known, it makes sense to emphasize some instructive lessons drawn from past experience. It is also tempting to look a bit into the future of high-speed photography.

  6. Design and characterization of a tunable opto-mechatronic system to mimic the focusing and the regulation of illumination in the formation of images made by the human eye

    NASA Astrophysics Data System (ADS)

    Santiago-Alvarado, A.; Cruz-Félix, A.; Hernández Méndez, A.; Pérez-Maldonado, Y.; Domínguez-Osante, C.

    2015-05-01

    Tunable lenses have attracted much attention due to their potential applications in areas such as machine vision, laser projection, ophthalmology, etc. In this work we present the design of a tunable opto-mechatronic system capable of focusing and of regulating the entrance illumination, mimicking the functions performed by the iris and the crystalline lens of the human eye. A solid elastic lens made of PDMS is used to mimic the crystalline lens, and an automatic diaphragm is used to mimic the iris of the human eye. The system has also been characterized; standard luminosity values for the human eye were taken into account to calibrate and validate the entrance illumination levels of the overall optical system.

  7. RapidEye constellation relative radiometric accuracy measurement using lunar images

    NASA Astrophysics Data System (ADS)

    Steyn, Joe; Tyc, George; Beckett, Keith; Hashida, Yoshi

    2009-09-01

    The RapidEye constellation includes five identical satellites in Low Earth Orbit (LEO). Each satellite has a 5-band (blue, green, red, red-edge and near infrared (NIR)) multispectral imager at 6.5m GSD. A three-axis attitude control system allows pointing the imager of each satellite at the Moon during lunations. It is therefore possible to image the Moon from near-identical viewing geometry within a span of 80 minutes with each one of the imagers. Comparing the radiometrically corrected images obtained from each band and each satellite allows a near-instantaneous relative radiometric accuracy measurement and determination of relative gain changes between the five imagers. A more traditional terrestrial vicarious radiometric calibration program has also been completed by MDA on RapidEye. The two components of this program provide for spatial radiometric calibration ensuring that detector-to-detector response remains flat, while a temporal radiometric calibration approach has accumulated images of specific dry desert calibration sites. These images are used to measure the constellation relative radiometric response and make on-ground gain and offset adjustments in order to maintain the relative accuracy of the constellation within +/-2.5%. A quantitative comparison between the gain changes measured by the lunar method and the terrestrial temporal radiometric calibration method is performed and will be presented.
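
    At its simplest, the lunar cross-comparison amounts to ratioing the band-averaged lunar signal of each imager against a reference imager under matched viewing geometry. The sketch below illustrates only that ratio step with synthetic data and hypothetical names; the actual RapidEye processing, including geometric and irradiance normalisation, is considerably more involved.

        import numpy as np

        def relative_gains(lunar_images, reference_index=0):
            """Given one radiometrically corrected lunar image per satellite
            (same band, near-identical viewing geometry), return the mean lunar
            signal of each imager relative to a chosen reference imager."""
            means = np.array([img[img > 0].mean() for img in lunar_images])
            return means / means[reference_index]

        # Toy example: five imagers, one of which reads about 2% high.
        rng = np.random.default_rng(3)
        base = rng.uniform(100, 2000, size=(256, 256))
        images = [base * g + rng.normal(0, 5, base.shape) for g in (1.0, 1.0, 1.02, 0.99, 1.0)]
        print(np.round(relative_gains(images), 3))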

  8. Assessment of minimum permissible geometrical parameters of a near-to-eye display.

    PubMed

    Valyukh, Sergiy; Slobodyanyuk, Oleksandr

    2015-07-20

    Light weight and small dimensions are among the most important characteristics of near-to-eye displays (NEDs). These displays consist of two basic parts: a microdisplay for generating an image and supplementary optics for viewing that image. Nowadays, the pixel size of microdisplays may be less than 4 μm, which makes the supplementary optics the major factor defining restrictions on NED dimensions, or at least on the distance between the microdisplay and the eye. The goal of the present work is to answer two questions: how small this distance can be in principle, and what maximum microdisplay resolution remains effective when viewed through supplementary optics placed in the immediate vicinity of the eye. To explore the first question, we consider an aberration-free magnifier, which is the initial stage in the elaboration of a real optical system. In this case, the paraxial approximation and the transfer matrix method are ideal tools for simulating light propagation from the microdisplay through the magnifier and the human eye's optical system to the retina. The human eye is modelled according to the Gullstrand model. The parameters of the magnifier, its location with respect to the eye and the microdisplay, and the depth of field, which can be interpreted as the tolerance of the microdisplay position, are determined and discussed. The second question, related to the maximum microdisplay resolution, is investigated using the principles of wave optics.
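
    The paraxial transfer-matrix bookkeeping mentioned above can be sketched in a few lines by chaining 2x2 ray-transfer (ABCD) matrices for free-space propagation and thin lenses. The distances and focal lengths below are placeholders, not the Gullstrand eye parameters or the magnifier design from the paper.

        import numpy as np

        def free_space(d):
            """Ray-transfer matrix for propagation over a distance d."""
            return np.array([[1.0, d], [0.0, 1.0]])

        def thin_lens(f):
            """Ray-transfer matrix for a thin lens of focal length f."""
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        def trace(ray, *elements):
            """Propagate a paraxial ray (height, angle) through elements in order."""
            m = np.eye(2)
            for el in elements:
                m = el @ m
            return m @ ray

        # Placeholder example: microdisplay 20 mm from a 25 mm magnifier lens,
        # then 15 mm of free space to a simplified 17 mm "eye" lens.
        ray = np.array([1.0, 0.0])   # 1 mm off-axis, travelling parallel to the axis
        out = trace(ray, free_space(20), thin_lens(25), free_space(15), thin_lens(17))
        print("height, angle after the eye lens:", out)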

  9. A thermographic study on eyes affected by Age-related Macular Degeneration: Comparison among various forms of the pathology and analysis of risk factors

    NASA Astrophysics Data System (ADS)

    Matteoli, Sara; Finocchio, Lucia; Biagini, Ilaria; Giacomelli, Giovanni; Sodi, Andrea; Corvi, Andrea; Virgili, Gianni; Rizzo, Stanislao

    2016-05-01

    The aims of this study are to investigate (1) the ocular thermographic profiles of eyes affected by Age-related Macular Degeneration (AMD) and of age-matched controls, to detect possible hemodynamic abnormalities that could be involved in the pathogenesis of the disease, and (2) whether any risk factors associated with the disease could favour the development of one form of AMD rather than another. Thirty-four eyes with Age-Related Maculopathy (ARM), 41 eyes with dry AMD, 60 eyes affected by wet AMD, and 74 eyes with fibrotic AMD were included in the study. The control group consisted of 48 healthy eyes. Exclusion criteria were any ocular disease other than AMD, tear film abnormalities, systemic cardiovascular abnormalities, systemic diseases, and a body temperature higher than 37.5 °C. A total of 210 eyes, without pupil dilation, were investigated by infrared thermography (FLIR A320). The Ocular Surface Temperature (OST) of five ocular areas was calculated from the infrared images by means of an image processing technique. Two-sample t-tests, one-way ANOVA and multivariate analysis were used for the statistical analyses. The ANOVA analyses showed no significant differences among the AMD groups (P-value > 0.05); however, OST in AMD patients was significantly lower than in controls (P-value < 0.0001). Smokers showed a higher likelihood (P-value = 0.012) of developing wet AMD rather than dry AMD. Infrared thermography may be a helpful, non-invasive and not time-consuming method for use in the management of patients with this common degenerative maculopathy.
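
    The group comparisons reported above are standard two-sample and one-way ANOVA tests on per-eye OST values; a toy sketch using SciPy follows. The temperature samples below are synthetic and chosen only to make the example run; they are not study data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Synthetic OST samples (degrees Celsius), not study data: AMD eyes run
        # slightly cooler than controls in this illustration.
        controls = rng.normal(34.5, 0.4, 48)
        amd = rng.normal(34.1, 0.4, 60)

        t, p = stats.ttest_ind(controls, amd, equal_var=False)
        print("Welch t-test: t = %.2f, p = %.4f" % (t, p))

        # One-way ANOVA across several AMD subgroups (again synthetic).
        arm = rng.normal(34.1, 0.4, 34)
        dry = rng.normal(34.1, 0.4, 41)
        wet = rng.normal(34.0, 0.4, 60)
        f, p_anova = stats.f_oneway(arm, dry, wet)
        print("ANOVA: F = %.2f, p = %.4f" % (f, p_anova))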

  10. Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.

    PubMed

    Torregrosa, A J; Maestre, H; Capmany, J

    2015-11-15

    We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (eye-safe region) illuminated by an amplified spontaneous emission (ASE) fiber source with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd(3+):GdVO(4) laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of standard silicon focal plane array bi-dimensional sensors, commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. The use of ASE illumination benefits from a noticeable increase in the field of view (FOV) that can be upconverted with regard to using coherent laser illumination. The upconverted power allows us to capture real-time video in a standard nonintensified CCD camera.

  11. Eye tracking and gating system for proton therapy of orbital tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Dongho; Yoo, Seung Hoon; Moon, Sung Ho

    2012-07-15

    Purpose: A new motion-based gated proton therapy for the treatment of orbital tumors using a real-time eye-tracking system was designed and evaluated. Methods: We developed our system using image-pattern matching with a normalized cross-correlation technique in LabVIEW 8.6 and Vision Assistant 8.6 (National Instruments, Austin, TX). To measure the pixel spacing of an image consistently, four calibration modes (point detection, edge detection, line measurement, and manual measurement) were provided and used. After these methods were applied to proton therapy, gating was performed and the radiation dose distributions were evaluated. Results: Moving-phantom verification measurements resulted in errors of less than 0.1 mm over the given ranges of translation. Dosimetric evaluation of the beam-gating system versus nongated treatment delivery with a moving phantom showed that the lateral penumbra grew by only 0.83 mm for gated radiotherapy, compared with 4.95 mm for nongated exposure. Analysis of the clinical results suggests that the average eye movement is distinctly patient-specific, measuring 0.44 mm, 0.45 mm, and 0.86 mm for the three patients, respectively. Conclusions: The developed automatic eye-tracking-based beam-gating system enabled us to perform high-precision proton radiotherapy of orbital tumors.
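
    The normalized cross-correlation matching at the core of the tracker can be illustrated with standard template matching; the snippet below uses OpenCV's cv2.matchTemplate as an illustrative stand-in for the LabVIEW/Vision Assistant implementation. The frame, template, and displacement are synthetic, and the pixel-spacing calibration modes are not modelled.

        import cv2
        import numpy as np

        def track_eye(frame, template):
            """Locate a reference eye patch in a new camera frame using
            normalized cross-correlation and return the top-left corner of the
            best match together with the correlation score."""
            result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(result)
            return max_loc, max_val

        # Toy example: cut a patch from a synthetic frame, shift the frame,
        # and recover the displacement.
        rng = np.random.default_rng(4)
        frame0 = rng.integers(0, 255, (240, 320), dtype=np.uint8)
        template = frame0[100:140, 150:200].copy()
        frame1 = np.roll(frame0, shift=(5, -8), axis=(0, 1))   # simulated eye motion
        (loc_x, loc_y), score = track_eye(frame1, template)
        print("match at", (loc_x, loc_y), "score", round(score, 3))
        # Displacement relative to the original patch position (x=150, y=100):
        print("estimated motion (dx, dy):", (loc_x - 150, loc_y - 100))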

  12. Integrated Eye Tracking and Neural Monitoring for Enhanced Assessment of Mild TBI

    DTIC Science & Technology

    2016-04-01

    but these delays are nearing resolution and we anticipate the initiation of the neuroimaging portion of the study early in Year 3. The fMRI task...resonance imaging (fMRI) and diffusion tensor imaging (DTI) to characterize the extent of functional cortical recruitment and white matter injury...respectively. The inclusion of fMRI and DTI will provide an objective basis for cross-validating the EEG and eye tracking system. Both the EEG and eye

  13. Content Design and System Implementation of a Teleophthalmology System for Eye Disease Diagnosis and Treatment and Its Preliminary Practice in Guangdong, China.

    PubMed

    Xiao, Di; Vignarajan, Janardhan; Chen, Tingting; Ye, Tiantian; Xiao, Baixiang; Congdon, Nathan; Kanagasingam, Yogessan

    2017-12-01

    We have developed a new telemedicine system for comprehensive eye examination, diabetic retinopathy (DR) screening, and eye disease diagnosis and treatment. The novel points of the system include a tablet application that facilitates the doctor's examination and diagnosis process, a comprehensive eye examination component, and integrated treatment planning and recording. The system provides a new service model in which one ophthalmological center is linked with multiple remote and rural hospitals for eye care in Guangdong province, China. The early stage of the project also undertook the education of remote-area doctors and image graders in DR and glaucoma grading, and research on the effectiveness of short message service (SMS) reminders for patient revisits. Other research, such as comparison of the accuracy of the graders' DR grading against the gold standard and of the doctors' tentative diagnoses against the final diagnoses, together with related statistical reporting, has been implemented in the system. In the preliminary practice, we summarized the outcomes related to system performance and made an initial analysis. This practice has shown that the telemedicine system and its associated content have satisfied our initial goal and demonstrated their effectiveness and efficiency.

  14. Isoplanatic patch of the human eye for arbitrary wavelengths

    NASA Astrophysics Data System (ADS)

    Han, Guoqing; Cao, Zhaoliang; Mu, Quanquan; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Xu, Zihao; Wu, Daosheng; Hu, Lifa; Xuan, Li

    2018-03-01

    The isoplanatic patch of the human eye is a key parameter for adaptive optics systems (AOS) designed for retinal imaging. The field of view (FOV) is usually set to the same size as the isoplanatic patch to obtain high-resolution images. However, the patch has previously been measured only at a single wavelength. Here we investigate the wavelength dependence of this important parameter. An optical setup was designed and established in the laboratory to measure the isoplanatic patch at several wavelengths (655 nm, 730 nm and 808 nm). We also established the Navarro wide-angle eye model in Zemax software to validate our results, and the two showed high consistency. The isoplanatic patch as a function of wavelength was obtained within the range of visible to near-infrared and can be expressed as θ = 0.0028λ - 0.74. This work is beneficial for AOS design for retinal imaging.
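
    The reported linear fit can be applied directly. In the snippet below, λ is assumed to be in nanometres and the patch size in degrees, which is consistent with the measurement wavelengths quoted above, but the units are an assumption here rather than something stated in the abstract.

        def isoplanatic_patch(wavelength_nm):
            """Empirical linear fit reported above: patch size as a function of
            wavelength (units assumed: nm in, degrees out)."""
            return 0.0028 * wavelength_nm - 0.74

        for lam in (655, 730, 808):
            print(lam, "nm ->", round(isoplanatic_patch(lam), 2))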

  15. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  16. Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes

    NASA Astrophysics Data System (ADS)

    Huang, Chi-Chieh

    The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes are the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimal chromatic aberration, wide-angle field of view (FOV), high sensitivity to light and superb acuity to motion. Inspired by this remarkable visual system, we implemented its unique lens-free, reflection-based imaging mechanism in a miniaturized, large-FOV optical imaging device operating across the visible spectrum, minimizing chromatic aberration without any post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed as routes to realize life-sized artificial RSCEs. The processes exploit the differential adhesion of the microstructures between a donor and a transfer substrate to accomplish an efficient release and transfer. These techniques enable conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology, onto a wide range of surfaces with complex and curvilinear shapes. The final part of this dissertation focused on implementing the key operational features of the natural RSCEs in large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the whole visible spectrum. Our devices form real, clear images based on reflection rather than refraction, hence avoiding the chromatic aberration caused by dispersion in the optical materials. Compared to conventional refractive lenses of comparable size, our devices demonstrated minimal chromatic aberration, an exceptional FOV of up to 165° without distortion, modest spherical aberration and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our devices possess an enhanced, dynamic motion-tracking capability ideal for diverse applications in military, security, search and rescue, night navigation, medical imaging and astronomy. Because of its reflection-based operating principle, the approach can in the future be extended into the mid- and far-infrared for more demanding applications.

  17. The Project MACULA Retinal Pigment Epithelium Grading System for Histology and Optical Coherence Tomography in Age-Related Macular Degeneration

    PubMed Central

    Zanzottera, Emma C.; Messinger, Jeffrey D.; Ach, Thomas; Smith, R. Theodore; Freund, K. Bailey; Curcio, Christine A.

    2015-01-01

    Purpose. To seek pathways of retinal pigment epithelium (RPE) fate in age-related macular degeneration via a morphology grading system; provide nomenclature, visualization targets, and metrics for clinical imaging and model systems. Methods. Donor eyes with geographic atrophy (GA) or choroidal neovascularization (CNV) and one GA eye with previous clinical spectral-domain optical coherence tomography (SDOCT) imaging were processed for histology, photodocumented, and annotated at predefined locations. Retinal pigment epithelial cells contained spindle-shaped melanosomes, apposed a basal lamina or basal laminar deposit (BLamD), and exhibited recognizable morphologies. Thicknesses and unbiased estimates of frequencies were obtained. Results. In 13 GA eyes (449 locations), ‘Shedding,’ ‘Sloughed,’ and ‘Dissociated’ morphologies were abundant; 22.2% of atrophic locations had ‘Dissociated’ RPE. In 39 CNV eyes (1363 locations), 37.3% of locations with fibrovascular/fibrocellular scar had ‘Entombed’ RPE; ‘Sloughed,’ ‘Dissociated,’ and ‘Bilaminar’ morphologies were abundant. Of abnormal RPE, CNV and GA both had ∼35% ‘Sloughed’/‘Intraretinal,’ with more Intraretinal in CNV (9.5% vs. 1.8%). ‘Shedding’ cells associated with granule aggregations in BLamD. The RPE layer did not thin, and BLamD remained thick, with progression. Granule-containing material consistent with three morphologies correlated to SDOCT hyperreflective foci in the previously examined GA patient. Conclusions. Retinal pigment epithelium morphology indicates multiple pathways in GA and CNV. Atrophic/scarred areas have numerous cells capable of transcribing genes and generating imaging signals. Shed granule aggregates, possibly apoptotic, are visible in SDOCT, as are ‘Dissociated’ and ‘Sloughed’ cells. The significance of RPE phenotypes is addressable in longitudinal, high-resolution imaging in clinic populations. Data can motivate future molecular phenotyping studies. PMID:25813989

  18. An ex vivo rat eye model to aid development of high-resolution retina imaging devices for rodents

    NASA Astrophysics Data System (ADS)

    van Oterendorp, Christian; Martin, Keith R.; Zhong, Jiang Jian; Diaz-Santana, Luis

    2010-09-01

    High resolution in vivo retinal imaging in rodents is becoming increasingly important in eye research. Development of suitable imaging devices currently requires many lengthy animal procedures. We present an ex vivo rat model eye with fluorescently labelled retinal ganglion cells (RGC) and nerve fibre bundles that reduces the need for animal procedures while preserving key properties of the living rat eye. Optical aberrations and scattering of four model eyes and eight live rat eyes were quantified using a Shack-Hartmann sensor. Fluorescent images from RGCs were obtained using a prototype scanning laser ophthalmoscope. The wavefront aberration root mean square value without defocus did not significantly differ between model and living eyes. Higher order aberrations were slightly higher but RGC image quality was comparable to published in vivo work. Overall, the model allows a large reduction in number and duration of animal procedures required to develop new in vivo retinal imaging devices.

  19. Nystagmus and oscillopsia.

    PubMed

    Straube, A; Bronstein, A; Straumann, D

    2012-01-01

    The ocular motor system consists of several subsystems, including the vestibulo-ocular reflex, the saccade system, the pursuit system, the fixation and gaze-holding system, and the vergence system. All these subsystems aid the stabilization of images on the retina during eye and head movements, and any disturbance of one of them can cause instability of the eyes (e.g. nystagmus) or an inadequate eye movement causing a mismatch between head and eye movement (e.g. bilateral vestibular failure). In both situations, subjects experience a movement of the visual world (oscillopsia), which is quite disturbing. New insights into the pathophysiology of some of the ocular motor disorders have helped to establish new treatment options, in particular for downbeat nystagmus, upbeat nystagmus, periodic alternating nystagmus, acquired pendular nystagmus and paroxysmal vestibular episodes/attacks. The pathophysiology of these disorders and the current literature on treatment options are discussed, and practical treatment recommendations are given in the paper. © 2011 The Author(s). European Journal of Neurology © 2011 EFNS.

  20. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm that searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections, while maintaining the measurement resolution available from the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search over pupil candidates using pixel-coordinate reference lookup tables optimizes the processing requirements for a least-squares fit of the circular-disk model. This paper includes quantitative analyses and simulation results for the resolution and robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring precise recording of eye movements in three-dimensional space.
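
    A standard algebraic least-squares circle fit illustrates the "best fit of the pupil modeled as a circular disk" idea. The sketch below assumes pupil-edge points have already been extracted; it does not reproduce the paper's candidate-comparison search or lookup tables, and robustness to eyelids and reflections would require outlier handling on top.

        import numpy as np

        def fit_pupil_circle(points):
            """Algebraic (Kasa) least-squares fit of a circle to 2-D edge points.
            Returns (center_x, center_y, radius)."""
            pts = np.asarray(points, dtype=float)
            x, y = pts[:, 0], pts[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            r = np.sqrt(c + cx ** 2 + cy ** 2)
            return cx, cy, r

        # Toy example: noisy points on a partially occluded pupil boundary
        # (the missing arc stands in for a droopy eyelid).
        rng = np.random.default_rng(5)
        theta = rng.uniform(0, 1.6 * np.pi, 200)
        x = 160 + 42 * np.cos(theta) + rng.normal(0, 0.5, theta.size)
        y = 120 + 42 * np.sin(theta) + rng.normal(0, 0.5, theta.size)
        print(fit_pupil_circle(np.column_stack([x, y])))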

  1. Signal-to-noise ratio and dose to the lens of the eye for computed tomography examination of the brain using an automatic tube current modulation system.

    PubMed

    Sookpeng, Supawitoo; Butdee, Chitsanupong

    2017-06-01

    The study aimed to evaluate the image quality, in terms of signal-to-noise ratio (SNR), and the dose to the lens of the eye and to other nearby organs from CT brain scans using an automatic tube current modulation (ATCM) system, with and without CT gantry tilt. An anthropomorphic phantom was scanned with different settings, including different ATCM settings, fixed tube current-time product (mAs) settings, and gantry tilt angles. Gafchromic XR-QA2 film was used to measure the absorbed dose to the organs. Relative doses and SNR for the various scan settings were compared with the reference setting of a fixed 330 mAs. The average absorbed dose to the lens of the eye varied from 8.7 to 21.7 mGy. Use of the ATCM system with gantry tilt resulted in up to a 60% decrease in the dose to the lens of the eye. SNR decreased significantly when the gantry was tilted using the fixed-mAs techniques, compared with the reference setting. However, there were no statistically significant differences in SNR between the reference setting and any of the ATCM settings. Compared with the reference setting of fixed effective mAs, using the ATCM system and appropriately tilting the gantry resulted in a substantial decrease in the dose to the lens of the eye while preserving the signal-to-noise ratio. CT brain examinations should be carefully controlled to optimize the dose to the lens of the eye and the image quality of the examination.
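
    SNR in a uniform region of a CT image is commonly computed as the mean pixel value of a region of interest divided by its standard deviation; a minimal sketch follows. The ROI coordinates and the synthetic "image" are placeholders, and the film-based organ dosimetry is outside the scope of this snippet.

        import numpy as np

        def roi_snr(image, roi_slice):
            """SNR of a region of interest, defined as mean / standard deviation
            of the pixel values inside the ROI."""
            roi = image[roi_slice]
            return float(roi.mean() / roi.std())

        # Toy example: uniform brain-equivalent region with Gaussian noise.
        rng = np.random.default_rng(6)
        slice_img = rng.normal(loc=40.0, scale=4.0, size=(512, 512))   # HU-like values
        print(round(roi_snr(slice_img, np.s_[200:260, 200:260]), 2))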

  2. Dynamic simulation of the effect of soft toric contact lenses movement on retinal image quality.

    PubMed

    Niu, Yafei; Sarver, Edwin J; Stevenson, Scott B; Marsack, Jason D; Parker, Katrina E; Applegate, Raymond A

    2008-04-01

    To report the development of a tool designed to dynamically simulate the effect of soft toric contact lens movement on retinal image quality, initial findings on three eyes, and the next steps to be taken to improve the utility of the tool. Three eyes of two subjects wearing soft toric contact lenses were cyclopleged with 1% cyclopentolate and 2.5% phenylephrine. Four hundred wavefront aberration measurements over a 5-mm pupil were recorded during soft contact lens wear at 30 Hz using a complete ophthalmic analysis system aberrometer. Each wavefront error measurement was input into Visual Optics Laboratory (version 7.15, Sarver and Associates, Inc.) to generate a retinal simulation of a high contrast log MAR visual acuity chart. The individual simulations were combined into a single dynamic movie using a custom MatLab PsychToolbox program. Visual acuity was measured for each eye reading the movie with best cycloplegic spectacle correction through a 3-mm artificial pupil to minimize the influence of the eyes' uncorrected aberrations. Comparison of the simulated acuity was made to values recorded while the subject read unaberrated charts with contact lenses through a 5-mm artificial pupil. For one study eye, average acuity was the same as the natural contact lens viewing condition. For the other two study eyes visual acuity of the best simulation was more than one line worse than natural viewing conditions. Dynamic simulation of retinal image quality, although not yet perfect, is a promising technique for visually illustrating the optical effects on image quality because of the movements of alignment-sensitive corrections.

  3. Double peacock eye optical element for extended focal depth imaging with ophthalmic applications.

    PubMed

    Romero, Lenny A; Millán, María S; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej

    2012-04-01

    The aged human eye is commonly affected by presbyopia, and therefore, it gradually loses its capability to form images of objects placed at different distances. Extended depth of focus (EDOF) imaging elements can overcome this inability, despite the introduction of a certain amount of aberration. This paper evaluates the EDOF imaging performance of the so-called peacock eye phase diffractive element, which focuses an incident plane wave into a segment of the optical axis and explores the element's potential use for ophthalmic presbyopia compensation optics. Two designs of the element are analyzed: the single peacock eye, which produces one focal segment along the axis, and the double peacock eye, which is a spatially multiplexed element that produces two focal segments with partial overlapping along the axis. The performances of the peacock eye elements are compared with those of multifocal lenses through numerical simulations as well as optical experiments in the image space. The results demonstrate that the peacock eye elements form sharper images along the focal segment than the multifocal lenses and, therefore, are more suitable for presbyopia compensation. The extreme points of the depth of field in the object space, which represent the remote and the near object points, have been experimentally obtained for both the single and the double peacock eye optical elements. The double peacock eye element has better imaging quality for relatively short and intermediate distances than the single peacock eye, whereas the latter seems better for far distance vision.

  4. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  5. Ultrasound biomicroscopy. High-frequency ultrasound imaging of the eye at microscopic resolution.

    PubMed

    Pavlin, C J; Foster, F S

    1998-11-01

    UBM presents us with a new method of imaging the anterior segment of the eye at high resolution. Its strengths lie in its ability to produce cross-sections of the living eye at microscopic resolution without violating the integrity of the globe. UBM, although lacking the resolution of optical microscopy, gives us images in living eyes without affecting the internal relationships of the structures imaged. There are many other applications of this new imaging method. Examples of other uses include imaging adnexal pathology, assessing corneal changes with refractive surgery, the assessment of trauma, and determination of intraocular lens position.

  6. In vivo imaging of microscopic structures in the rat retina

    PubMed Central

    Geng, Ying; Greenberg, Kenneth P.; Wolfe, Robert; Gray, Daniel C.; Hunter, Jennifer J.; Dubra, Alfredo; Flannery, John G.; Williams, David R.; Porter, Jason

    2010-01-01

    Purpose The ability to resolve single retinal cells in rodents in vivo has applications in rodent models of the visual system and retinal disease. We have characterized the performance of a fluorescence adaptive optics scanning laser ophthalmoscope (fAOSLO) that provides cellular and subcellular imaging of rat retina in vivo. Methods Green fluorescent protein (eGFP) was expressed in retinal ganglion cells of normal Sprague Dawley rats via intravitreal injections of adeno-associated viral vectors. Simultaneous reflectance and fluorescence retinal images were acquired using the fAOSLO. fAOSLO resolution was characterized by comparing in vivo images with subsequent imaging of retinal sections from the same eyes using confocal microscopy. Results Retinal capillaries and eGFP-labeled ganglion cell bodies, dendrites, and axons were clearly resolved in vivo with adaptive optics (AO). AO correction reduced the total root mean square wavefront error, on average, from 0.30 μm to 0.05 μm (1.7-mm pupil). The full width at half maximum (FWHM) of the average in vivo line-spread function (LSF) was ∼1.84 μm, approximately 82% greater than the FWHM of the diffraction-limited LSF. Conclusions With perfect aberration compensation, the in vivo resolution in the rat eye could be ∼2× greater than that in the human eye due to its large numerical aperture (∼0.43). While the fAOSLO corrects a substantial fraction of the rat eye's aberrations, direct measurements of retinal image quality reveal some blur beyond that expected from diffraction. Nonetheless, subcellular features can be resolved, offering promise for using AO to investigate the rodent eye in vivo with high resolution. PMID:19578019

  7. Measurement of skin dose from cone-beam computed tomography imaging.

    PubMed

    Akyalcin, Sercan; English, Jeryl D; Abramovitch, Kenneth M; Rong, Xiujiang J

    2013-10-09

    To measure surface skin dose from various cone-beam computed tomography (CBCT) scanners using point-dosimeters. A head anthropomorphic phantom was used with nanoDOT optically stimulated luminescence (OSL) dosimeters (Landauer Corp., Glenwood, IL) attached to various anatomic landmarks. The phantom was scanned using multiple exposure protocols for craniofacial evaluations in three different CBCT units and a conventional x-ray imaging system. The dosimeters were calibrated for each of the scan protocols on the different imaging systems. Peak skin dose and surface doses at the eye lens, thyroid, submandibular and parotid gland levels were measured. The measured skin doses ranged from 0.09 to 4.62 mGy depending on dosimeter positions and imaging systems. The average surface doses to the lens locations were ~4.0 mGy, well below the threshold for cataractogenesis (500 mGy). The results changed accordingly with x-ray tube output (mAs and kV) and also were sensitive to scan field of view (SFOV). As compared to the conventional panoramic and cephalometric imaging system, doses from all three CBCT systems were at least an order of magnitude higher. Peak skin dose and surface doses at the eye lens, thyroid, and salivary gland levels measured from the CBCT imaging systems were lower than the thresholds to induce deterministic effects. However, our findings do not justify the routine use of CBCT imaging in orthodontics considering the lifetime-attributable risk to the individual.

  8. Measurement of skin dose from cone-beam computed tomography imaging

    PubMed Central

    2013-01-01

    Objective To measure surface skin dose from various cone-beam computed tomography (CBCT) scanners using point-dosimeters. Materials & methods A head anthropomorphic phantom was used with nanoDOT optically stimulated luminescence (OSL) dosimeters (Landauer Corp., Glenwood, IL) attached to various anatomic landmarks. The phantom was scanned using multiple exposure protocols for craniofacial evaluations in three different CBCT units and a conventional x-ray imaging system. The dosimeters were calibrated for each of the scan protocols on the different imaging systems. Peak skin dose and surface doses at the eye lens, thyroid, submandibular and parotid gland levels were measured. Results The measured skin doses ranged from 0.09 to 4.62 mGy depending on dosimeter positions and imaging systems. The average surface doses to the lens locations were ~4.0 mGy, well below the threshold for cataractogenesis (500 mGy). The results changed accordingly with x-ray tube output (mAs and kV) and also were sensitive to scan field of view (SFOV). As compared to the conventional panoramic and cephalometric imaging system, doses from all three CBCT systems were at least an order of magnitude higher. Conclusions Peak skin dose and surface doses at the eye lens, thyroid, and salivary gland levels measured from the CBCT imaging systems were lower than the thresholds to induce deterministic effects. However, our findings do not justify the routine use of CBCT imaging in orthodontics considering the lifetime-attributable risk to the individual. PMID:24192155

  9. Acute Solar Retinopathy Imaged With Adaptive Optics, Optical Coherence Tomography Angiography, and En Face Optical Coherence Tomography.

    PubMed

    Wu, Chris Y; Jansen, Michael E; Andrade, Jorge; Chui, Toco Y P; Do, Anna T; Rosen, Richard B; Deobhakta, Avnish

    2018-01-01

    Solar retinopathy is a rare form of retinal injury that occurs after direct sungazing. To enhance understanding of the structural changes that occur in solar retinopathy by obtaining high-resolution in vivo en face images. Case report of a young adult woman who presented to the New York Eye and Ear Infirmary with symptoms of acute solar retinopathy after viewing the solar eclipse on August 21, 2017. Results of comprehensive ophthalmic examination and images obtained by fundus photography, microperimetry, spectral-domain optical coherence tomography (OCT), adaptive optics scanning light ophthalmoscopy, OCT angiography, and en face OCT. The patient was examined after viewing the solar eclipse. Visual acuity was 20/20 OD and 20/25 OS. The patient was left-eye dominant. Spectral-domain OCT images were consistent with mild and severe acute solar retinopathy in the right and left eye, respectively. Microperimetry was normal in the right eye but showed paracentral decreased retinal sensitivity in the left eye with a central absolute scotoma. Adaptive optics images of the right eye showed a small region of nonwaveguiding photoreceptors, while images of the left eye showed a large area of abnormal and nonwaveguiding photoreceptors. Optical coherence tomography angiography images were normal in both eyes. En face OCT images of the right eye showed a small circular hyperreflective area, with central hyporeflectivity in the outer retina of the right eye. The left eye showed a hyperreflective lesion that intensified in area from inner to middle retina and became mostly hyporeflective in the outer retina. The shape of the lesion on adaptive optics and en face OCT images of the left eye corresponded to the shape of the scotoma drawn by the patient on Amsler grid. Acute solar retinopathy can present with foveal cone photoreceptor mosaic disturbances on adaptive optics scanning light ophthalmoscopy imaging. Corresponding reflectivity changes can be seen on en face OCT, especially in the middle and outer retina. Young adults may be especially vulnerable and need to be better informed of the risks of viewing the sun with inadequate protective eyewear.

  10. Noninvasive Dry Eye Assessment Using High-Technology Ophthalmic Examination Devices.

    PubMed

    Yamaguchi, Masahiko; Sakane, Yuri; Kamao, Tomoyuki; Zheng, Xiaodong; Goto, Tomoko; Shiraishi, Atsushi; Ohashi, Yuichi

    2016-11-01

    Recently, the number of dry eye cases has dramatically increased. Thus, it is important that easy screening, exact diagnoses, and suitable treatments be available. We developed 3 original and noninvasive assessments for this disorder. First, a DR-1 dry eye monitor was used to determine the tear meniscus height quantitatively by capturing a tear meniscus digital image that was analyzed by Meniscus Processor software. The DR-1 meniscus height value significantly correlated with the fluorescein meniscus height (r = 0.06, Bland-Altman analysis). At a cutoff value of 0.22 mm, sensitivity of the dry eye diagnosis was 84.1% with 90.9% specificity. Second, the Tear Stability Analysis System was used to quantitatively measure tear film stability using a topographic modeling system corneal shape analysis device. Tear film stability was objectively and quantitatively evaluated every second during sustained eye openings. The Tear Stability Analysis System is currently installed in an RT-7000 autorefractometer and topographer to automate the diagnosis of dry eye. Third, the Ocular Surface Thermographer uses ophthalmic thermography for diagnosis. The decrease in ocular surface temperature in dry eyes was significantly greater than that in normal eyes (P < 0.001) at 10 seconds after eye opening. Decreased corneal temperature correlated significantly with the tear film breakup time (r = 0.572; P < 0.001). When changes in the ocular surface temperature of the cornea were used as indicators for dry eye, sensitivity was 0.83 and specificity was 0.80 after 10 seconds. This article describes the details and potential of these 3 noninvasive dry eye assessment systems.

  11. Understanding Student Cognition about Complex Earth System Processes Related to Climate Change

    NASA Astrophysics Data System (ADS)

    McNeal, K. S.; Libarkin, J.; Ledley, T. S.; Dutta, S.; Templeton, M. C.; Geroux, J.; Blakeney, G. A.

    2011-12-01

    The Earth's climate system includes complex behavior and interconnections with other Earth spheres that present challenges to student learning. To better understand these unique challenges, we have conducted experiments with high-school and introductory level college students to determine how information pertaining to the connections between the Earth's atmospheric system and the other Earth spheres (e.g., hydrosphere and cryosphere) are processed. Specifically, we include psychomotor tests (e.g., eye-tracking) and open-ended questionnaires in this research study, where participants were provided scientific images of the Earth (e.g., global precipitation and ocean and atmospheric currents), eye-tracked, and asked to provide causal or relational explanations about the viewed images. In addition, the students engaged in on-line modules (http://serc.carleton.edu/eslabs/climate/index.html) focused on Earth system science as training activities to address potential cognitive barriers. The developed modules included interactive media, hands-on lessons, links to outside resources, and formative assessment questions to promote a supportive and data-rich learning environment. Student eye movements were tracked during engagement with the materials to determine the role of perception and attention on understanding. Students also completed a conceptual questionnaire pre-post to determine if these on-line curriculum materials assisted in their development of connections between Earth's atmospheric system and the other Earth systems. The pre-post results of students' thinking about climate change concepts, as well as eye-tracking results, will be presented.

  12. Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes.

    PubMed

    Haddock, Luis J; Kim, David Y; Mukai, Shizuo

    2013-01-01

    Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily available in an ophthalmic practice. Methods. Fundus images were captured with a smartphone and a 20D lens with or without a Koeppe lens. By using the coaxial light source of the phone, this system works as an indirect ophthalmoscope that creates a digital image of the fundus. The application whose software allows for independent control of focus, exposure, and light intensity during video filming was used. With this app, we recorded high-definition videos of the fundus and subsequently extracted high-quality, still images from the video clip. Results. The described technique of smartphone fundus photography was able to capture excellent high-quality fundus images in both children under anesthesia and in awake adults. Excellent images were acquired with the 20D lens alone in the clinic, and the addition of the Koeppe lens in the operating room resulted in the best quality images. Successful photodocumentation of rabbit fundus was achieved in control and experimental eyes. Conclusion. The currently described system was able to take consistently high-quality fundus photographs in patients and in animals using readily available instruments that are portable with simple power sources. It is relatively simple to master, is relatively inexpensive, and can take advantage of the expanding mobile-telephone networks for telemedicine.

  13. MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.

    PubMed

    Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T

    1999-06-01

    We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
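
    Because the differential optical signal only covaries with horizontal eye position, a calibration step is needed to map it to degrees. The short sketch below is an illustrative linear calibration in Python, not the authors' procedure; the fixation targets and signal values are invented for the example.

        import numpy as np

        # Known fixation targets (degrees) and mean differential signal (volts)
        # recorded while fixating each one -- illustrative values only.
        target_deg = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
        signal_v = np.array([-0.42, -0.20, 0.01, 0.22, 0.43])

        gain, offset = np.polyfit(signal_v, target_deg, deg=1)  # position = gain*signal + offset

        def eye_position_deg(raw_signal_v):
            """Convert the 1-kHz differential signal to horizontal eye position (deg)."""
            return gain * np.asarray(raw_signal_v) + offset

        trace_deg = eye_position_deg([-0.05, 0.10, 0.35])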

  14. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
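
    The reported gain from combining retinal image features with personal medical information can be illustrated with a generic classifier comparison. The sketch below, assuming scikit-learn and entirely synthetic feature matrices and labels, simply contrasts the cross-validated AUC of an image-only model with that of a multimodal model; it is not the authors' algorithm.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n = 104                                   # size of the first evaluation dataset
        image_feats = rng.normal(size=(n, 20))    # stand-in retinal image features
        clinical = rng.normal(size=(n, 6))        # age, diabetes duration, labs, ...
        y = rng.integers(0, 2, size=n)            # synthetic disease label

        def cv_auc(features):
            clf = LogisticRegression(max_iter=1000)
            prob = cross_val_predict(clf, features, y, cv=5, method="predict_proba")[:, 1]
            return roc_auc_score(y, prob)

        auc_image_only = cv_auc(image_feats)
        auc_multimodal = cv_auc(np.hstack([image_feats, clinical]))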

  15. A single pixel camera video ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the intensity corresponding to the inner product of each pattern with the retinal reflectance is measured with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px with a real-time video frame rate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance to defocus compared to a conventional multi-pixel array based system. Furthermore, the use of multiplexed illumination offers an SNR improvement leading to a lower illumination of the eye and hence an increase in patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
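
    Each photomultiplier reading in a single-pixel camera is the inner product of one projected pattern with the scene, so with enough patterns the image can be recovered by inverting a linear system (compressive-sensing solvers allow far fewer measurements than pixels). The NumPy sketch below is a generic illustration with random binary patterns and a least-squares inverse at reduced resolution, not the reconstruction actually implemented in the instrument.

        import numpy as np

        rng = np.random.default_rng(1)
        side = 32                                  # small demo; the instrument reaches 128 x 128
        npix = side * side

        scene = rng.random(npix)                   # stand-in for the retinal reflectance map
        patterns = rng.integers(0, 2, size=(npix, npix)).astype(float)  # one DMD pattern per row

        # One detector reading per pattern: the inner product with the scene.
        measurements = patterns @ scene

        # Recover the image by solving the linear system.
        recovered, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
        image = recovered.reshape(side, side)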

  16. Crustacean Larvae-Vision in the Plankton.

    PubMed

    Cronin, Thomas W; Bok, Michael J; Lin, Chan

    2017-11-01

    We review the visual systems of crustacean larvae, concentrating on the compound eyes of decapod and stomatopod larvae as well as the functional and behavioral aspects of their vision. Larval compound eyes of these macrurans are all built on fundamentally the same optical plan, the transparent apposition eye, which is eminently suitable for modification into the abundantly diverse optical systems of the adults. Many of these eyes contain a layer of reflective structures overlying the retina that produces a counterilluminating eyeshine, so they are unique in being camouflaged both by their transparency and by their reflection of light spectrally similar to background light to conceal the opaque retina. Besides the pair of compound eyes, at least some crustacean larvae have a non-imaging photoreceptor system based on a naupliar eye and possibly other frontal eyes. Larval compound-eye photoreceptors send axons to a large and well-developed optic lobe consisting of a series of neuropils that are similar to those of adult crustaceans and insects, implying sophisticated analysis of visual stimuli. The visual system fosters a number of advanced and flexible behaviors that permit crustacean larvae to survive extended periods in the plankton and allows them to reach acceptable adult habitats, within which to metamorphose.

  17. Optical Coherence Tomography Angiography in Optic Disc Swelling.

    PubMed

    Fard, Masoud Aghsaei; Jalili, Jalil; Sahraiyan, Alireza; Khojasteh, Hassan; Hejazi, Marjane; Ritch, Robert; Subramanian, Prem S

    2018-05-04

    To compare optical coherence tomography angiography (OCT-A) of peripapillary total vasculature and capillaries in patients with optic disc swelling. Cross-sectional study. Twenty-nine eyes with acute nonarteritic anterior ischemic optic neuropathy (NAION), 44 eyes with papilledema, 8 eyes with acute optic neuritis, and 48 eyes of normal subjects were imaged using OCT-A. Peripapillary total vasculature information was recorded using a commercial vessel density map. Customized image analysis with major vessel removal was also used to measure whole-image capillary density and peripapillary capillary density (PCD). Mixed models showed that the peripapillary total vasculature density values were significantly lower in NAION eyes, followed by papilledema eyes and control eyes, using commercial software (P < .0001 for all comparisons). The customized software also showed significantly lower PCD in NAION eyes compared with papilledema eyes (all P < .001), but did not show significant differences between papilledema and control subjects. Our software showed significantly lower whole-image capillary density and PCD in eyes with optic neuritis than in eyes with papilledema. There was no significant difference between NAION and optic neuritis using our software. The area under the receiver operating characteristic curves for discriminating NAION from papilledema eyes and optic neuritis from papilledema eyes was highest for whole-image capillary density (0.94 and 0.80, respectively) with our software, followed by peripapillary total vasculature (0.90 and 0.74, respectively) with commercial software. OCT-A is helpful to distinguish NAION and papillitis from papilledema. Whole-image capillary density had the greatest diagnostic accuracy for differentiating disc swelling.

  18. A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm.

    PubMed

    Budak, Umit; Şengür, Abdulkadir; Guo, Yanhui; Akbulut, Yaman

    2017-12-01

    Microaneurysms (MAs) are known as early signs of diabetic retinopathy and appear as red lesions in color fundus images. Detection of MAs in fundus images needs highly skilled physicians or eye angiography. Eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify the MA locations in fundus images is in demand. In this paper, we propose a system to detect the MAs in colored fundus images. The proposed method is composed of three stages. In the first stage, a series of pre-processing steps are used to make the input images more convenient for MA detection. To this end, green channel decomposition, Gaussian filtering, median filtering, background determination, and subtraction operations are applied to the input colored fundus images. After pre-processing, a candidate MA extraction procedure is applied to detect potential regions. A five-step procedure is adopted to obtain the potential MA locations. Finally, a deep convolutional neural network (DCNN) with a reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches which are collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on the ROC dataset to evaluate our proposal. The results are encouraging.
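
    The pre-processing stage described above lends itself to a compact illustration. The sketch below, using SciPy's ndimage filters, follows the listed steps (green channel decomposition, Gaussian and median filtering, coarse background estimation, subtraction) with assumed filter sizes; the candidate-extraction procedure and the DCNN classifier are not reproduced here.

        import numpy as np
        from scipy import ndimage

        def preprocess_fundus(rgb):
            """Background-subtract the green channel so small dark lesions become bright blobs.

            rgb: HxWx3 uint8 colour fundus image. Filter sizes are illustrative assumptions.
            """
            green = rgb[..., 1].astype(float)                 # green channel decomposition
            smoothed = ndimage.gaussian_filter(green, sigma=1.0)
            denoised = ndimage.median_filter(smoothed, size=3)
            background = ndimage.median_filter(denoised, size=31)  # coarse background estimate
            shade_corrected = background - denoised           # MAs are darker than background
            return np.clip(shade_corrected, 0, None)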

  19. Laser applications and system considerations in ocular imaging

    PubMed Central

    Elsner, Ann E.; Muller, Matthew S.

    2009-01-01

    We review laser applications for primarily in vivo ocular imaging techniques, describing their constraints based on biological tissue properties, safety, and the performance of the imaging system. We discuss the need for cost effective sources with practical wavelength tuning capabilities for spectral studies. Techniques to probe the pathological changes of layers beneath the highly scattering retina and diagnose the onset of various eye diseases are described. The recent development of several optical coherence tomography based systems for functional ocular imaging is reviewed, as well as linear and nonlinear ocular imaging techniques performed with ultrafast lasers, emphasizing recent source developments and methods to enhance imaging contrast. PMID:21052482

  20. Corneal thickness in dry eyes in an Iraqi population.

    PubMed

    Ali, Noora Mauwafak; Hamied, Furkaan M; Farhood, Qasim K

    2017-01-01

    Dry eye disorder is a multifactorial disease of the tears and ocular surface that results in discomfort and visual disturbance. Corneal pachymetry becomes increasingly important in refractive surgery, for the accurate assessment of intraocular pressure, and in the preoperative assessment of other ocular surgeries. The aim was to assess the effect of dry eye disorder on central corneal thickness (CCT) by comparison with the CCT of normal eyes of age-matched individuals. The total number of eyes examined was 280 (140 dry eyes from 70 patients and 140 normal eyes from 70 individuals). A Pentacam (Scheimpflug imaging system) was used for measuring the CCT of all eyes. Patients with dry eye syndrome had significantly lower CCT compared to the control group (P < 0.01); the mean CCT was 536.5 μm versus 561.3 μm, respectively. CCT of dry eyes was significantly reduced when compared with an age- and gender-matched population. This result can be attributed to chronic desiccation by the inflammatory mediators in dry eyes, leading to corneal thinning.

  1. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
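
    Fitting full ellipses to the pupil and corneal-reflection boundaries is central to the approach above. A basic algebraic conic fit (not the constrained forward eye model used by ExpertEyes) can be sketched in a few lines of NumPy:

        import numpy as np

        def fit_conic(x, y):
            """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

            Returns the six coefficients up to scale; a plain algebraic fit, not
            the simultaneous pupil/reflection forward-model fit of the paper.
            """
            D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
            # Best-fit conic = right singular vector with the smallest singular value.
            _, _, Vt = np.linalg.svd(D, full_matrices=False)
            return Vt[-1]

        # Illustrative use: noisy samples from an ellipse, e.g. a pupil contour.
        t = np.linspace(0, 2 * np.pi, 200)
        x = 3 + 4 * np.cos(t) + 0.05 * np.random.randn(t.size)
        y = -1 + 2 * np.sin(t + 0.4) + 0.05 * np.random.randn(t.size)
        conic = fit_conic(x, y)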

  2. Automated Diabetic Retinopathy Image Assessment Software: Diagnostic Accuracy and Cost-Effectiveness Compared with Human Graders.

    PubMed

    Tufail, Adnan; Rudisill, Caroline; Egan, Catherine; Kapetanakis, Venediktos V; Salas-Vega, Sebastian; Owen, Christopher G; Lee, Aaron; Louw, Vern; Anderson, John; Liew, Gerald; Bolter, Louis; Srinivas, Sowmya; Nittala, Muneeswar; Sadda, SriniVas; Taylor, Paul; Rudnicka, Alicja R

    2017-03-01

    With the increasing prevalence of diabetes, annual screening for diabetic retinopathy (DR) by expert human grading of retinal images is challenging. Automated DR image assessment systems (ARIAS) may provide clinically effective and cost-effective detection of retinopathy. We aimed to determine whether ARIAS can be safely introduced into DR screening pathways to replace human graders. Observational measurement comparison study of human graders following a national screening program for DR versus ARIAS. Retinal images from 20 258 consecutive patients attending routine annual diabetic eye screening between June 1, 2012, and November 4, 2013. Retinal images were manually graded following a standard national protocol for DR screening and were processed by 3 ARIAS: iGradingM, Retmarker, and EyeArt. Discrepancies between manual grades and ARIAS results were sent to a reading center for arbitration. Screening performance (sensitivity, false-positive rate) and diagnostic accuracy (95% confidence intervals of screening-performance measures) were determined. Economic analysis estimated the cost per appropriate screening outcome. Sensitivity point estimates (95% confidence intervals) of the ARIAS were as follows: EyeArt 94.7% (94.2%-95.2%) for any retinopathy, 93.8% (92.9%-94.6%) for referable retinopathy (human graded as either ungradable, maculopathy, preproliferative, or proliferative), 99.6% (97.0%-99.9%) for proliferative retinopathy; Retmarker 73.0% (72.0%-74.0%) for any retinopathy, 85.0% (83.6%-86.2%) for referable retinopathy, 97.9% (94.9%-99.1%) for proliferative retinopathy. iGradingM classified all images as either having disease or being ungradable. EyeArt and Retmarker saved costs compared with manual grading both as a replacement for initial human grading and as a filter prior to primary human grading, although the latter approach was less cost-effective. Retmarker and EyeArt systems achieved acceptable sensitivity for referable retinopathy when compared with that of human graders and had sufficient specificity to make them cost-effective alternatives to manual grading alone. ARIAS have the potential to reduce costs in developed-world health care economies and to aid delivery of DR screening in developing or remote health care settings.
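
    The screening-performance figures quoted above are point estimates with 95% confidence intervals. As a generic illustration of how such an interval is obtained from grading counts (not the study's own statistical code), a Wilson score interval for sensitivity can be computed as follows; the counts in the example are invented.

        import numpy as np
        from scipy.stats import norm

        def sensitivity_with_ci(true_pos, false_neg, confidence=0.95):
            """Sensitivity point estimate and Wilson score confidence interval."""
            n = true_pos + false_neg
            p = true_pos / n
            z = norm.ppf(0.5 + confidence / 2)
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return p, (centre - half, centre + half)

        sens, ci = sensitivity_with_ci(true_pos=1890, false_neg=125)  # illustrative counts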

  3. Vision-based method for detecting driver drowsiness and distraction in driver monitoring system

    NASA Astrophysics Data System (ADS)

    Jo, Jaeik; Lee, Sung Joo; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2011-12-01

    Most driver-monitoring systems have attempted to detect either driver drowsiness or distraction, although both factors should be considered for accident prevention. Therefore, we propose a new driver-monitoring method considering both factors. We make the following contributions. First, if the driver is looking ahead, drowsiness detection is performed; otherwise, distraction detection is performed. Thus, the computational cost and eye-detection error can be reduced. Second, we propose a new eye-detection algorithm that combines adaptive boosting, adaptive template matching, and blob detection with eye validation, thereby reducing the eye-detection error and processing time significantly, which is hardly achievable using a single method. Third, to enhance eye-detection accuracy, eye validation is applied after initial eye detection, using a support vector machine based on appearance features obtained by principal component analysis (PCA) and linear discriminant analysis (LDA). Fourth, we propose a novel eye state-detection algorithm that combines appearance features obtained using PCA and LDA, with statistical features such as the sparseness and kurtosis of the histogram from the horizontal edge image of the eye. Experimental results showed that the detection accuracies of the eye region and eye states were 99 and 97%, respectively. Both driver drowsiness and distraction were detected with a success rate of 98%.
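
    The eye state-detection features named above (sparseness and kurtosis of the histogram of the horizontal edge image) can be sketched generically; the exact definitions used in the paper are not given, so the version below assumes a Sobel horizontal-edge map, a 32-bin histogram, sample kurtosis, and Hoyer's sparseness measure.

        import numpy as np
        from scipy import ndimage
        from scipy.stats import kurtosis

        def eye_state_features(eye_patch):
            """Histogram statistics of the horizontal-edge image of a grayscale eye patch."""
            edges = np.abs(ndimage.sobel(eye_patch.astype(float), axis=0))  # horizontal edges
            hist, _ = np.histogram(edges, bins=32, density=True)
            hist = hist + 1e-12                       # avoid division by zero
            n = hist.size
            # Hoyer sparseness: 1 for a single spike, 0 for a flat histogram.
            sparseness = (np.sqrt(n) - hist.sum() / np.linalg.norm(hist)) / (np.sqrt(n) - 1)
            return sparseness, kurtosis(hist)

        # An open eye (strong eyelid/iris edges) and a closed eye typically yield
        # different sparseness/kurtosis values, which feed the state classifier.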

  4. High-speed polarization sensitive optical coherence tomography for retinal diagnostics

    NASA Astrophysics Data System (ADS)

    Yin, Biwei; Wang, Bingqing; Vemishetty, Kalyanramu; Nagle, Jim; Liu, Shuang; Wang, Tianyi; Rylander, Henry G., III; Milner, Thomas E.

    2012-01-01

    We report the design and construction of an FPGA-based high-speed swept-source polarization-sensitive optical coherence tomography (SS-PS-OCT) system for clinical retinal imaging. The clinical application of the SS-PS-OCT system is accurate measurement and display of thickness, phase retardation and birefringence maps of the retinal nerve fiber layer (RNFL) in human subjects for early detection of glaucoma. The FPGA-based SS-PS-OCT system provides three incident polarization states on the eye and uses a bulk-optic polarization sensitive balanced detection module to record two orthogonal interference fringe signals. Interference fringe signals and relative phase retardation between two orthogonal polarization states are used to obtain Stokes vectors of light returning from each RNFL depth. We implement a Levenberg-Marquardt algorithm on the field-programmable gate array (FPGA) to compute accurate phase retardation and birefringence maps. For each retinal scan, a three-state Levenberg-Marquardt nonlinear algorithm is applied to 360 clusters, each consisting of 100 A-scans, to determine accurate maps of phase retardation and birefringence in less than 1 second after patient measurement, allowing real-time clinical imaging, a speedup of more than 300 times over previous implementations. We report application of the FPGA-based SS-PS-OCT system for real-time clinical imaging of patients enrolled in a clinical study at the Eye Institute of Austin and Duke Eye Center.
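
    On a conventional CPU, the same kind of Levenberg-Marquardt estimation can be sketched with SciPy. The example below fits a simplified depth-linear retardation model to one cluster of simulated A-scan data to recover a birefringence value; it is only an assumed stand-in for the three-state Stokes-vector fit that the FPGA performs, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        # Simulated cluster data: cumulative phase retardation (deg) vs RNFL depth (um).
        depth_um = np.linspace(0, 120, 40)
        true_biref = 0.35                                     # deg/um, illustrative value
        measured = true_biref * depth_um + np.random.normal(0, 3.0, depth_um.size)

        def residuals(params):
            birefringence, offset = params
            return birefringence * depth_um + offset - measured

        fit = least_squares(residuals, x0=[0.1, 0.0], method="lm")  # Levenberg-Marquardt
        birefringence_est, offset_est = fit.x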

  5. Humanoid monocular stereo measuring system with two degrees of freedom using bionic optical imaging system

    NASA Astrophysics Data System (ADS)

    Du, Jia-Wei; Wang, Xuan-Yin; Zhu, Shi-Qiang

    2017-10-01

    Based on the process by which the spatial depth clue is obtained by a single eye, a monocular stereo vision method to measure the depth information of spatial objects was proposed in this paper and a humanoid monocular stereo measuring system with two degrees of freedom was demonstrated. The proposed system can effectively obtain the three-dimensional (3-D) structure of spatial objects at different distances without changing the position of the system and has the advantages of being exquisite, smart, and flexible. The bionic optical imaging system we proposed in a previous paper, named ZJU SY-I, was employed; its vision characteristic resembles the resolution decay of the eye's vision from center to periphery. We simplified the eye's rotation in the eye socket and the coordinated rotation of other organs of the body into two rotations in orthogonal directions and employed a rotating platform with two rotation degrees of freedom to drive ZJU SY-I. The structure of the proposed system was described in detail. The depth of a single feature point on the spatial object was deduced, as well as its spatial coordinates. With the focal length adjustment of ZJU SY-I and the rotation control of the rotation platform, the spatial coordinates of all feature points on the spatial object could be obtained and then the 3-D structure of the spatial object could be reconstructed. 3-D structure measurement experiments were conducted on two spatial objects with different distances and sizes. Some main factors affecting the measurement accuracy of the proposed system were analyzed and discussed.

  6. Temporal accommodation response measured by photorefractive accommodation measurement device

    NASA Astrophysics Data System (ADS)

    Song, Byoungsub; Leportier, Thibault; Park, Min-Chul

    2017-02-01

    Although the accommodation response plays an important role in the human vision system for perception of distance, some three-dimensional (3D) displays offer depth stimuli regardless of the accommodation response. The consequence is that most observers watching 3D displays have complained about visual fatigue. The measurement of the accommodation response is therefore necessary to develop human-friendly 3D displays. However, only a few studies about accommodation measurement have been reported. Most of the investigations have focused on the measurement and analysis of monocular accommodation responses only, because the accommodation response works individually in each eye. Moreover, the main (dominant) eye primarily determines the perceived object distance. However, the binocular accommodation response should be examined because both eyes are used to watch the 3D display in natural conditions. The ophthalmic instrument that we developed enabled us to measure changes in the accommodation response of the two eyes simultaneously. Two cameras separately acquired the infrared images reflected from each eye after the reflected beams passed through a cylindrical lens. The changes in the accommodation response could then be estimated from the changes in the astigmatism ratio of the infrared images that were acquired in real time. In this paper, we compared the accommodation responses of the main eye between the monocular and the binocular conditions. The two eyes were measured one by one, with only one eye open, for the monocular condition. Then the two eyes were examined simultaneously for the binocular condition. The results showed similar tendencies for the main-eye accommodation response in both cases.

  7. Adaptation to interocular differences in blur

    PubMed Central

    Kompaniez, Elysse; Sawides, Lucie; Marcos, Susana; Webster, Michael A.

    2013-01-01

    Adaptation to a blurred image causes a physically focused image to appear too sharp, and shifts the point of subjective focus toward the adapting blur, consistent with a renormalization of perceived focus. We examined whether and how this adaptation normalizes to differences in blur between the two eyes, which can routinely arise from differences in refractive errors. Observers adapted to images filtered to simulate optical defocus or different axes of astigmatism, as well as to images that were isotropically blurred or sharpened by varying the slope of the amplitude spectrum. Adaptation to the different types of blur produced strong aftereffects that showed strong transfer across the eyes, as assessed both in a monocular adaptation task and in a contingent adaptation task in which the two eyes were simultaneously exposed to different blur levels. Selectivity for the adapting eye was thus generally weak. When one eye was exposed to a sharper image than the other, the aftereffects also tended to be dominated by the sharper image. Our results suggest that while short-term adaptation can rapidly recalibrate the perception of blur, it cannot do so independently for the two eyes, and that the binocular adaptation of blur is biased by the sharper of the two eyes' retinal images. PMID:23729770
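
    Blurring or sharpening an image "by varying the slope of the amplitude spectrum" amounts to tilting its Fourier amplitude spectrum while leaving phase untouched. A minimal NumPy sketch of that manipulation follows (an assumed reconstruction of the stimulus-generation step, not the authors' code):

        import numpy as np

        def adjust_spectral_slope(image, delta_alpha):
            """Blur (delta_alpha < 0) or sharpen (delta_alpha > 0) by reweighting
            the amplitude spectrum by f**delta_alpha, keeping the phase spectrum."""
            img = image.astype(float)
            F = np.fft.fft2(img)
            fy = np.fft.fftfreq(img.shape[0])[:, None]
            fx = np.fft.fftfreq(img.shape[1])[None, :]
            f = np.hypot(fx, fy)
            f[0, 0] = 1.0                       # protect the DC term
            filt = f ** delta_alpha
            filt[0, 0] = 1.0
            return np.real(np.fft.ifft2(F * filt))

        # delta_alpha = -0.5 steepens the spectrum (blurred adaptor);
        # delta_alpha = +0.5 flattens it (sharpened adaptor).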

  8. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    NASA Astrophysics Data System (ADS)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

    Medical field has seen a phenomenal improvement over the previous years. The invention of computers with appropriate increase in the processing and internet speed has changed the face of the medical technology. However there is still scope for improvement of the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most of them fail to address how to take the detection forward to a stage where it will be beneficial to the society at large. An automated system that can predict the current medical condition of a patient after taking the fundus image of his eye is yet to see the light of the day. Such a system is explored in this paper by summarizing a number of techniques for fundus image features extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. The knowledge of the same would bring about worthy changes in the domain of exudates extraction of the eye. This is essential in cases where the patients may not have access to the best of technologies. This paper attempts at a comprehensive summary of the techniques for Content Based Image Retrieval (CBIR) or fundus features image extraction, and few choice methods of both, and an exploration which aims to find ways to combine these two attractive features, and combine them so that it is beneficial to all.

  9. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    NASA Astrophysics Data System (ADS)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-06-01

    Medical field has seen a phenomenal improvement over the previous years. The invention of computers with appropriate increase in the processing and internet speed has changed the face of the medical technology. However there is still scope for improvement of the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most of them fail to address how to take the detection forward to a stage where it will be beneficial to the society at large. An automated system that can predict the current medical condition of a patient after taking the fundus image of his eye is yet to see the light of the day. Such a system is explored in this paper by summarizing a number of techniques for fundus image features extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. The knowledge of the same would bring about worthy changes in the domain of exudates extraction of the eye. This is essential in cases where the patients may not have access to the best of technologies. This paper attempts at a comprehensive summary of the techniques for Content Based Image Retrieval (CBIR) or fundus features image extraction, and few choice methods of both, and an exploration which aims to find ways to combine these two attractive features, and combine them so that it is beneficial to all.

  10. Measuring the retina optical properties using a structured illumination imaging system

    NASA Astrophysics Data System (ADS)

    Basiri, A.; Nguyen, T. A.; Ibrahim, M.; Nguyen, Q. D.; Ramella-Roman, Jessica C.

    2011-03-01

    Patients with diabetic retinopathy (DR) may experience a reduction in retinal oxygen saturation (SO2). Close monitoring with a fundus ophthalmoscope can help in the prediction of the progression of disease. In this paper we present a noninvasive instrument based on structured illumination aimed at measuring the retina optical properties, including oxygen saturation. The instrument uses two wavelengths, one in the NIR and one in the visible, a fast acquisition camera, and a splitter system that allows for contemporaneous collection of images at the two different wavelengths. This scheme greatly reduces eye movement artifacts. Structured illumination was achieved in two different ways. First, several binary illumination masks fabricated using laser micromachining were used; a near-sinusoidal projection pattern was achieved at the image plane by appropriate positioning of the binary masks. Second, a sinusoidal pattern printed on a thin plastic sheet was positioned at the image plane of a fundus ophthalmoscope. The system was calibrated using optical phantoms of known optical properties as well as an eye phantom that included a 150 μm capillary vessel containing different concentrations of oxygenated and deoxygenated hemoglobin.

  11. GCaMP expression in retinal ganglion cells characterized using a low-cost fundus imaging system

    NASA Astrophysics Data System (ADS)

    Chang, Yao-Chuan; Walston, Steven T.; Chow, Robert H.; Weiland, James D.

    2017-10-01

    Objective. Virus-transduced, intracellular-calcium indicators are effective reporters of neural activity, offering the advantage of cell-specific labeling. Due to the existence of an optimal time window for the expression of calcium indicators, a suitable tool for tracking genetically encoded calcium indicator (GECI) expression in vivo following transduction is highly desirable. Approach. We developed a noninvasive imaging approach based on a custom-modified, low-cost fundus viewing system that allowed us to monitor and characterize in vivo bright-field and fluorescence images of the mouse retina. AAV2-CAG-GCaMP6f was injected into a mouse eye. The fundus imaging system was used to measure fluorescence at several time points post injection. At defined time points, we prepared wholemount retina mounted on a transparent multielectrode array and used calcium imaging to evaluate the responsiveness of retinal ganglion cells (RGCs) to external electrical stimulation. Main results. The noninvasive fundus imaging system clearly resolves individual RGCs and axons. RGC fluorescence intensity and the number of observable fluorescent cells show a similar rising trend from week 1 to week 3 after viral injection, indicating a consistent increase of GCaMP6f expression. Analysis of the in vivo fluorescence intensity trend and in vitro neurophysiological responsiveness shows that the slope of intensity versus days post injection can be used to estimate the optimal time for calcium imaging of RGCs in response to external electrical stimulation. Significance. The proposed fundus imaging system enables high-resolution digital fundus imaging in the mouse eye, based on off-the-shelf components. The long-term tracking experiment with in vitro calcium imaging validation demonstrates that the system can serve as a powerful tool for monitoring the level of GECI expression and for determining the optimal time window for subsequent experiments.
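
    The analysis step that relates the in vivo fluorescence trend to an imaging window can be illustrated with a simple linear fit of mean fluorescence against days post injection; the numbers below are invented, and the projection assumes the rise stays linear, which is only a rough approximation of GECI expression dynamics.

        import numpy as np

        days_post_injection = np.array([7, 10, 14, 17, 21])           # illustrative time points
        mean_fluorescence = np.array([12.0, 19.5, 30.2, 37.8, 48.1])  # arbitrary units

        slope, intercept = np.polyfit(days_post_injection, mean_fluorescence, deg=1)

        def days_to_reach(target_au):
            """Project when expression reaches a target level, assuming a linear rise."""
            return (target_au - intercept) / slope

        optimal_day = days_to_reach(60.0)       # hypothetical threshold for calcium imaging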

  12. Integration of a Spectral Domain Optical Coherence Tomography System into a Surgical Microscope for Intraoperative Imaging

    PubMed Central

    Ehlers, Justis P.; Tao, Yuankai K.; Farsiu, Sina; Maldonado, Ramiro; Izatt, Joseph A.

    2011-01-01

    Purpose. To demonstrate an operating microscope-mounted spectral domain optical coherence tomography (MMOCT) system for human retinal and model surgery imaging. Methods. A prototype MMOCT system was developed to interface directly with an ophthalmic surgical microscope, to allow SDOCT imaging during surgical viewing. Nonoperative MMOCT imaging was performed in an Institutional Review Board–approved protocol in four healthy volunteers. The effect of surgical instrument materials on MMOCT imaging was evaluated while performing retinal surface, intraretinal, and subretinal maneuvers in cadaveric porcine eyes. The instruments included forceps, metallic and polyamide subretinal needles, and soft silicone-tipped instruments, with and without diamond dusting. Results. High-resolution images of the human retina were successfully obtained with the MMOCT system. The optical properties of surgical instruments affected the visualization of the instrument and the underlying retina. Metallic instruments (e.g., forceps and needles) showed high reflectivity with total shadowing below the instrument. Polyamide material had a moderate reflectivity with subtotal shadowing. Silicone instrumentation showed moderate reflectivity with minimal shadowing. Summed voxel projection MMOCT images provided clear visualization of the instruments, whereas the B-scans from the volume revealed details of the interactions between the tissues and the instrumentation (e.g., subretinal space cannulation, retinal elevation, or retinal holes). Conclusions. High-quality retinal imaging is feasible with an MMOCT system. Intraoperative imaging with model eyes provides high-resolution depth information including visualization of the instrument and intraoperative tissue manipulation. This study demonstrates a key component of an interactive platform that could provide enhanced information for the vitreoretinal surgeon. PMID:21282565

  13. Integration of a spectral domain optical coherence tomography system into a surgical microscope for intraoperative imaging.

    PubMed

    Ehlers, Justis P; Tao, Yuankai K; Farsiu, Sina; Maldonado, Ramiro; Izatt, Joseph A; Toth, Cynthia A

    2011-05-16

    To demonstrate an operating microscope-mounted spectral domain optical coherence tomography (MMOCT) system for human retinal and model surgery imaging. A prototype MMOCT system was developed to interface directly with an ophthalmic surgical microscope, to allow SDOCT imaging during surgical viewing. Nonoperative MMOCT imaging was performed in an Institutional Review Board-approved protocol in four healthy volunteers. The effect of surgical instrument materials on MMOCT imaging was evaluated while performing retinal surface, intraretinal, and subretinal maneuvers in cadaveric porcine eyes. The instruments included forceps, metallic and polyamide subretinal needles, and soft silicone-tipped instruments, with and without diamond dusting. High-resolution images of the human retina were successfully obtained with the MMOCT system. The optical properties of surgical instruments affected the visualization of the instrument and the underlying retina. Metallic instruments (e.g., forceps and needles) showed high reflectivity with total shadowing below the instrument. Polyamide material had a moderate reflectivity with subtotal shadowing. Silicone instrumentation showed moderate reflectivity with minimal shadowing. Summed voxel projection MMOCT images provided clear visualization of the instruments, whereas the B-scans from the volume revealed details of the interactions between the tissues and the instrumentation (e.g., subretinal space cannulation, retinal elevation, or retinal holes). High-quality retinal imaging is feasible with an MMOCT system. Intraoperative imaging with model eyes provides high-resolution depth information including visualization of the instrument and intraoperative tissue manipulation. This study demonstrates a key component of an interactive platform that could provide enhanced information for the vitreoretinal surgeon.

  14. Optical aberrations, retinal image quality and eye growth: Experimentation and modeling

    NASA Astrophysics Data System (ADS)

    Tian, Yibin

    2007-12-01

    Retinal image quality is important for normal eye growth. Optical aberrations are of interest for two reasons: first, they degrade retinal images; second, they might provide some cues to defocus. Higher than normal ocular aberrations have been previously associated with human myopia. However, these studies were cross-sectional in design, and only reported aberrations in terms of root mean square (RMS) errors of Zernike coefficients, a poor metric of optical quality. This dissertation presents results from investigations of ocular optical aberrations, retinal image quality and eye growth in chicks and humans. A number of techniques were utilized, including Shack-Hartmann aberrometry, high-frequency A-scan ultrasonography, ciliary nerve section (CNX), photorefractive keratectomy (PRK) as well as computer simulations and modeling. A technique to extract light scatter information from Shack-Hartmann images was also developed. The main findings of the dissertation are summarized below. In young chicks, most ocular aberrations decreased with growth in both normal and CNX eyes, and there were diurnal fluctuations in some aberrations. Modeling suggested active reduction in higher order aberrations (HOAs) during early development. Although CNX eyes manifested greater than normal HOAs, they showed near normal growth. Retinal image degradation varied greatly among individual eyes post-PRK in young chicks. Including light scatter information into analyses of retinal image quality better estimated the latter. Albino eyes showed more severe retinal image degradation than normal eyes, due to increased optical aberrations and light scatter, but their growth was similar to those of normal eyes, implying that they are relatively insensitive to retina image quality. Although the above results questioned the influence of optical aberrations on early ocular growth, some optical quality metrics, derived from optical aberrations data, could predict how much the eyes of young chicks subsequently elongated. The performance of some focus measures was very poor when non-defocus aberrations exceeded a certain level; presumably, these non-defocus aberrations might interfere with the eye's ability to interpret defocus. In anisomyopic human adults, more myopic eyes had larger anterior and vitreous chambers, greater astigmatism, and more positive spherical aberration. However, compared to isometropes, only interocular differences in spherical equivalent refractive errors were significantly increased.
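
    Since the Zernike basis is orthonormal over the pupil, the RMS wavefront errors discussed throughout this work reduce to root-sum-squares of the coefficients. The helper below is a generic illustration using OSA single indexing; it computes total (piston- and tilt-free) and higher-order RMS from a coefficient vector, and the example coefficients are hypothetical.

        import numpy as np

        def rms_wavefront_error(zernike_coeffs_um, first_higher_order_index=6):
            """Total and higher-order RMS (micrometres) from normalized Zernike coefficients."""
            c = np.asarray(zernike_coeffs_um, dtype=float)
            total_rms = np.sqrt(np.sum(c[3:] ** 2))            # drop piston and tilts
            hoa_rms = np.sqrt(np.sum(c[first_higher_order_index:] ** 2))
            return total_rms, hoa_rms

        # Hypothetical coefficients up to 4th radial order (OSA indices 0..14):
        coeffs = [0, 0, 0, 0.05, -0.30, 0.08, 0.04, -0.02, 0.03,
                  0.01, 0.02, 0.00, 0.01, 0.00, 0.05]
        total_rms, hoa_rms = rms_wavefront_error(coeffs)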

  15. Adaptive Optics for the Human Eye

    NASA Astrophysics Data System (ADS)

    Williams, D. R.

    2000-05-01

    Adaptive optics can extend not only the resolution of ground-based telescopes, but also the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras to image the living retina, cameras that are a critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is currently possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.

  16. Comparison of ultra-widefield fluorescein angiography with the Heidelberg Spectralis® noncontact ultra-widefield module versus the Optos® Optomap®

    PubMed Central

    Witmer, Matthew T; Parlitsis, George; Patel, Sarju; Kiss, Szilárd

    2013-01-01

    Purpose To compare ultra-widefield fluorescein angiography imaging using the Optos® Optomap® and the Heidelberg Spectralis® noncontact ultra-widefield module. Methods Five patients (ten eyes) underwent ultra-widefield fluorescein angiography using the Optos® panoramic P200Tx imaging system and the noncontact ultra-widefield module in the Heidelberg Spectralis® HRA+OCT system. The images were obtained as a single, nonsteered shot centered on the macula. The area of imaged retina was outlined and quantified using Adobe® Photoshop® C5 software. The total area and the area within each of four visualized quadrants were calculated and compared between the two imaging modalities. Three masked reviewers also evaluated each quadrant per eye (40 total quadrants) to determine which modality imaged the retinal vasculature most peripherally. Results Optos® imaging captured a total retinal area averaging 151,362 pixels, ranging from 116,998 to 205,833 pixels, while the area captured using the Heidelberg Spectralis® averaged 101,786 pixels, ranging from 73,424 to 116,319 pixels (P = 0.0002). The average area per individual quadrant imaged by Optos® versus the Heidelberg Spectralis® superiorly was 32,373 vs 32,789 pixels, respectively (P = 0.91), inferiorly was 24,665 vs 26,117 pixels, respectively (P = 0.71), temporally was 47,948 vs 20,645 pixels, respectively (P = 0.0001), and nasally was 46,374 vs 22,234 pixels, respectively (P = 0.0001). The Heidelberg Spectralis® was able to image the superior and inferior retinal vasculature to a more distal point than was the Optos® in nine of ten eyes (18 of 20 quadrants). The Optos® was able to image the nasal and temporal retinal vasculature to a more distal point than was the Heidelberg Spectralis® in ten of ten eyes (20 of 20 quadrants). Conclusion The Optos® and Heidelberg Spectralis® ultra-widefield imaging systems are both excellent modalities for ultra-widefield fluorescein angiography of the peripheral retina. On a single nonsteered image, the Optos® Optomap® covered a significantly larger total retinal surface area, with greater image variability, than did the Heidelberg Spectralis® ultra-widefield module. The Optos® captured an appreciably wider view of the retina temporally and nasally, albeit with peripheral distortion, while the ultra-widefield Heidelberg Spectralis® module was able to image the superior and inferior retinal vasculature more peripherally. The clinical significance of these findings, as well as the area imaged on steered montaged images, remains to be determined. PMID:23458976

  17. Ocular wavefront aberrations in patients with macular diseases

    PubMed Central

    Bessho, Kenichiro; Bartsch, Dirk-Uwe G.; Gomez, Laura; Cheng, Lingyun; Koh, Hyoung Jun; Freeman, William R.

    2009-01-01

    Background There have been reports that by compensating for the ocular aberrations using adaptive optical systems it may be possible to improve the resolution of clinical retinal imaging systems beyond what is now possible. In order to develop such a system to observe eyes with retinal disease, an understanding of the ocular wavefront aberrations in individuals with retinal disease is required. Methods 82 eyes of 66 patients with macular disease (epiretinal membrane, macular edema, macular hole, etc.) and 85 eyes of 51 patients without retinal disease were studied. Using a ray-tracing wavefront device, each eye was scanned at both small and large pupil apertures and Zernike coefficients up to 6th order were acquired. Results In phakic eyes, 3rd-order root mean square (RMS) errors in the macular disease group were significantly greater than in controls, by an average of 12% for 5-mm and 31% for 3-mm scan diameters (p<0.021). In pseudophakic eyes, 3rd-order RMS was also elevated, on average by 57% for 5-mm and 51% for 3-mm scan diameters (p<0.031). Conclusion Higher order wavefront aberrations in eyes with macular disease were greater than in control eyes without disease. Our study suggests that such aberrations may result from irregular or multiple reflecting retinal surfaces. Modifications in wavefront sensor technology will be needed to accurately determine wavefront aberration and allow correction using adaptive optics in eyes with macular irregularities. PMID:19574950

  18. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system during preflight preparation, the aircrew can obtain more vivid information about the destination approach area. This system can improve the aviator's confidence before carrying out the flight mission and accordingly improves flight safety. The system is also useful for validating visual flight procedure designs.

  19. A standard model eye with micro scale multilayer structure for ophthalmic optical coherence tomography equipment

    NASA Astrophysics Data System (ADS)

    Cao, Zhenggang; Ding, Zengqian; Hu, Zhixiong; Wen, Tao; Qiao, Wen; Liu, Wenli

    2016-10-01

    Optical coherence tomography (OCT) has been widely applied in the diagnosis of eye diseases during the last 20 years. Differing from traditional two-dimensional imaging technologies, OCT also provides cross-sectional information on target tissues simultaneously and precisely. As is well known, axial resolution is one of the most critical parameters affecting OCT image quality, and it determines whether an accurate diagnosis can be obtained. Therefore, it is important to evaluate the axial resolution of OCT equipment. Phantoms play an important role in the standardization and validation process. Here, a standard model eye with a micro-scale multilayer structure was custom designed and manufactured. To mimic a real human eye, the physical characteristics of the layered structures of the retina and cornea were analyzed in depth, and appropriate materials were selected by testing the scattering coefficients of PDMS phantoms with different concentrations of TiO2 or BaSO4 particles. An artificial retina and cornea consisting of multilayer films with per-layer thicknesses of 10 to 60 micrometers were fabricated using spin-coating technology. Because key parameters of the standard model eye need to be traceable as well as accurate, the optical refractive indices and layer thicknesses of the phantoms were verified with a thickness monitoring system. Consequently, a standard OCT model eye was obtained after the retinal or corneal phantom was embedded into a water-filled model eye, fabricated by 3D printing technology, to simulate ocular dispersion and emmetropic refraction. The eye model was manufactured with a transparent resin to simulate a realistic ophthalmic testing environment, and most key optical elements, including the cornea, lens and vitreous body, were realized. Evaluation with a research and a clinical OCT system demonstrated that the model eye has physical properties similar to those of a natural eye, and that measurement of the multilayer films provides an effective method to rapidly evaluate the axial resolution of ophthalmic OCT devices.
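
    For context, the axial resolution such a multilayer phantom is designed to probe is commonly approximated, for a Gaussian source spectrum in air, by dz = (2 ln 2 / pi) * lambda0^2 / dlambda. The sketch below evaluates this standard expression for assumed source parameters, not those of any particular instrument in this study.

        import math

        def axial_resolution_um(center_wavelength_nm, bandwidth_nm):
            # Gaussian-spectrum approximation, in air: dz = (2 ln2 / pi) * lambda0^2 / dlambda
            dz_nm = (2.0 * math.log(2.0) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm
            return dz_nm / 1000.0   # micrometers

        # Assumed example: 850 nm source with 50 nm bandwidth -> roughly 6.4 um in air
        print(axial_resolution_um(850.0, 50.0))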

  20. The Method of Curvatures.

    ERIC Educational Resources Information Center

    Greenslade, Thomas B., Jr.; Miller, Franklin, Jr.

    1981-01-01

    Describes method for locating images in simple and complex systems of thin lenses and spherical mirrors. The method helps students to understand differences between real and virtual images. It is helpful in discussing the human eye and the correction of imperfect vision by the use of glasses. (Author/SK)

  1. A comparative examination of neural circuit and brain patterning between the lamprey and amphioxus reveals the evolutionary origin of the vertebrate visual center.

    PubMed

    Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi

    2015-02-01

    Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jaw-less primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.

  2. Typhoon Champi Develops Massive Eye

    NASA Image and Video Library

    2017-12-08

    Taken on October 22, 2015 at 0400 UTC by the Suomi NPP satellite's VIIRS sensor, this colorized infrared image shows the extremely large eye of Typhoon Champi. With a diameter of 60 nautical miles, the eye of the storm is larger than the state of Rhode Island. Typhoon Champi is currently 700 nautical miles south of Tokyo, Japan with 110 mph sustained winds, and is moving northeast with no threat to land. Credit: NASA/NOAA via NOAA Environmental Visualization Laboratory

  3. Evaluating anesthetic protocols for functional blood flow imaging in the rat eye

    NASA Astrophysics Data System (ADS)

    Moult, Eric M.; Choi, WooJhon; Boas, David A.; Baumann, Bernhard; Clermont, Allen C.; Feener, Edward P.; Fujimoto, James G.

    2017-01-01

    The purpose of this study is to evaluate the suitability of five different anesthetic protocols (isoflurane, isoflurane-xylazine, pentobarbital, ketamine-xylazine, and ketamine-xylazine-vecuronium) for functional blood flow imaging in the rat eye. Total retinal blood flow was measured at a series of time points using an ultrahigh-speed Doppler OCT system. Additionally, each anesthetic protocol was qualitatively evaluated according to the following criteria: (1) time-stability of blood flow, (2) overall rate of blood flow, (3) ocular immobilization, and (4) simplicity. We observed that different anesthetic protocols produced markedly different blood flows. Different anesthetic protocols also varied with respect to the four evaluated criteria. These findings suggest that the choice of anesthetic protocol should be carefully considered when designing and interpreting functional blood flow studies in the rat eye.

  4. Non-contact high resolution Bessel beam probe for diagnostic imaging of cornea and trabecular meshwork region in eye

    NASA Astrophysics Data System (ADS)

    Murukeshan, V. M.; Jesmond, Hong Xun J.; Shinoj, V. K.; Baskaran, M.; Tin, Aung

    2015-07-01

    Primary angle-closure glaucoma is a major cause of blindness in Asia and worldwide. In glaucoma, irregularities in the ocular aqueous outflow system cause an elevation in intraocular pressure (IOP) with subsequent death of retinal ganglion cells, resulting in loss of vision. High-resolution visualization of the iridocorneal angle region has great diagnostic value in understanding the disease condition and enables monitoring of surgical interventions that decrease IOP. None of the current diagnostic techniques, such as goniophotography, ultrasound biomicroscopy (UBM), anterior segment optical coherence tomography (AS-OCT) and RetCam™, can image with molecular specificity and the spatial resolution required to delineate the trabecular meshwork structures. In this context, this paper proposes new concepts and methodology using Bessel-beam-based illumination and imaging for such diagnostic ocular imaging applications. The salient features of using Bessel beams instead of a conventional Gaussian beam, and the optimization challenges in configuring the probe system, are illustrated with porcine eye samples.

  5. Efficacy of Lens Protection Systems: Dependency on Different Cranial CT Scans in The Acute Stroke Setting.

    PubMed

    Guberina, Nika; Forsting, Michael; Ringelstein, Adrian

    2017-06-15

    To evaluate the dose-reduction potential with different lens protectors for patients undergoing cranial computed tomography (CT) scans. Eye lens dose was assessed in vitro (α-Al2O3:C thermoluminescence dosemeters) using an Alderson-Rando phantom® in cranial CT protocols at different CT scanners (SOMATOM-Definition-AS+® (CT1) and SOMATOM-Definition-Flash® (CT2)) using two different lens-protection systems (Somatex® (SOM) and Medical Imaging Systems® (MIS)). The summarised percentages of transmitted photons were: (1) CT1, (a) unenhanced CT (nCT) with gantry angulation: SOM = 103%, MIS = 111%; (2) CT2, (a) nCT without gantry angulation: SOM = 81%, MIS = 91%; (b) CT angiography (CTA) with automatic dose-modulation technique: SOM = 39%, MIS = 74%; (c) CTA without dose-modulation technique: SOM = 22%, MIS = 48%; (d) CT perfusion: SOM = 44%, MIS = 69%. SOM showed a higher dose-reduction potential than MIS while maintaining equal image quality. Lens-protection systems are most effective in CTA protocols without dose-reduction techniques. Lens-protection systems lower the average eye lens dose during CT scans by up to one-third (MIS) and two-thirds (SOM), respectively, if the eye lens is exposed to the direct beam of radiation. Considering both the CT protocol and the material of lens protectors, they seem to be mandatory for reducing the radiation exposure of the eye lens. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific only to Fourier-domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high lateral resolution or extended depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended depth imaging of the ocular anterior segment. In this dissertation, techniques for extending the imaging range of OCT systems are developed. These techniques include the use of a high spectral purity swept source laser in a full-field OCT system, as well as the use of a peculiar phenomenon known as coherence revival to resolve the complex conjugate ambiguity in swept source OCT. In addition, a technique for extending the depth of focus of OCT systems by using a polarization-encoded, dual-focus sample arm is demonstrated. Along the way, other related advances are also presented, including the development of techniques to reduce crosstalk and speckle artifacts in full-field OCT, and the use of fast optical switches to increase the imaging speed of certain low-duty cycle swept source OCT systems. Finally, the clinical utility of these techniques is shown by combining them to demonstrate high-speed, high-resolution, extended-depth imaging of both the anterior and posterior eye simultaneously and in vivo.
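
    The lateral-resolution/depth-of-focus trade-off mentioned above follows from Gaussian-beam optics: focusing to a smaller spot shortens the confocal range quadratically. The sketch below uses the common Gaussian-beam approximations (spot diameter 2λ/(πNA) and depth of focus π·dx²/(2λ)) with illustrative, assumed values rather than the dissertation's system parameters.

        import math

        def lateral_resolution_and_dof_um(wavelength_um, numerical_aperture):
            # Gaussian-beam approximations: focal-spot diameter and confocal parameter
            dx = 2.0 * wavelength_um / (math.pi * numerical_aperture)   # lateral resolution
            dof = math.pi * dx ** 2 / (2.0 * wavelength_um)             # depth of focus
            return dx, dof

        # Assumed example at 1.05 um: a low-NA retinal beam vs. a higher-NA anterior-segment beam
        for na in (0.02, 0.10):
            dx, dof = lateral_resolution_and_dof_um(1.05, na)
            print(f"NA={na:.2f}: dx={dx:.1f} um, DOF={dof:.0f} um")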

  7. Adaptive optics ophthalmoscopy.

    PubMed

    Roorda, Austin; Duncan, Jacque L

    2015-11-01

    This review starts with a brief history and description of adaptive optics (AO) technology, followed by a showcase of the latest capabilities of AO systems for imaging the human retina and an extensive review of the literature on where AO is being used clinically. The review concludes with a discussion on future directions and guidance on usage and interpretation of images from AO systems for the eye.

  8. Second harmonic generation microscopy of the living human cornea

    NASA Astrophysics Data System (ADS)

    Artal, Pablo; Ávila, Francisco; Bueno, Juan

    2018-02-01

    Second Harmonic Generation (SHG) microscopy provides high-resolution structural imaging of the corneal stroma without the need for labelling techniques. This powerful tool had not previously been applied to living human eyes. Here, we present a new compact SHG microscope specifically developed to image the structural organization of the corneal lamellae in living healthy human volunteers. The research prototype incorporates a long-working-distance dry objective that allows non-contact three-dimensional SHG imaging of the cornea. The safety and effectiveness of the system were first tested in fresh ex vivo eyes. The maximum average power of the illumination laser used was 20 mW, more than 10 times below the maximum permissible exposure (according to ANSI Z136.1-2000). The instrument was successfully employed to obtain non-contact and non-invasive SHG images of the living human eye within well-established light safety limits. This represents the first recording of in vivo SHG images of the human cornea using a compact multiphoton microscope. This might become an important tool in ophthalmology for early diagnosis and tracking of ocular pathologies.

  9. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) the signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled, resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore, image quality and system performance depend upon the reconstruction filter(s) used. The selection depends upon the application.
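
    A quick way to see the aliasing discussed here: a sinusoid at frequency f sampled at rate fs reappears, after ideal low-pass reconstruction, at the folded frequency |f − round(f/fs)·fs|. The sketch below uses values assumed purely for illustration and shows a 7 Hz tone sampled at 10 Hz masquerading as 3 Hz.

        def aliased_frequency(f_signal_hz, f_sample_hz):
            # Frequency at which an undersampled sinusoid reappears after reconstruction
            n = round(f_signal_hz / f_sample_hz)
            return abs(f_signal_hz - n * f_sample_hz)

        print(aliased_frequency(7.0, 10.0))   # -> 3.0 Hz (undersampled, aliased)
        print(aliased_frequency(4.0, 10.0))   # -> 4.0 Hz (adequately sampled)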

  10. High-speed adaptive optics for imaging of the living human eye

    PubMed Central

    Yu, Yongxin; Zhang, Tianjiao; Meadway, Alexander; Wang, Xiaolin; Zhang, Yuhua

    2015-01-01

    The discovery of high-frequency temporal fluctuations in the human ocular wave aberration dictates the need for high-speed adaptive optics (AO) correction for high-resolution retinal imaging. We present a high-speed AO system for an experimental adaptive optics scanning laser ophthalmoscope (AOSLO). We developed a custom high-speed Shack-Hartmann wavefront sensor and maximized the wavefront detection speed based upon a trade-off among the wavefront spatial sampling density, the dynamic range, and the measurement sensitivity. We examined the temporal dynamics of the ocular wavefront under AOSLO imaging conditions and improved the dual-thread AO control strategy. The high-speed AO can be operated at closed-loop frequencies up to 110 Hz. Experimental results demonstrated that the high-speed AO system can provide improved compensation for wave aberrations up to 30 Hz in the living human eye. PMID:26368408
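
    Shack-Hartmann sensing of the kind described reduces, per lenslet, to locating the focal-spot centroid and converting its displacement from a reference position into a local wavefront slope. The sketch below is a minimal, assumed implementation of that centroiding step; the lenslet geometry and calibration values are placeholders, not the authors' sensor parameters.

        import numpy as np

        def subaperture_slope(spot_image, ref_xy, pixel_pitch_um, focal_length_um):
            # Centroid of one lenslet's focal spot -> local wavefront slope (radians)
            img = np.asarray(spot_image, dtype=float)
            total = img.sum()
            ys, xs = np.indices(img.shape)
            cx = (xs * img).sum() / total
            cy = (ys * img).sum() / total
            dx_um = (cx - ref_xy[0]) * pixel_pitch_um   # spot displacement from reference
            dy_um = (cy - ref_xy[1]) * pixel_pitch_um
            return dx_um / focal_length_um, dy_um / focal_length_um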

  11. Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware

    NASA Astrophysics Data System (ADS)

    Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe

    We present a practical system prototype to convincingly restore eye contact between two video chat participants, with minimal constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework thereby harnesses the powerful computational resources inside graphics hardware, and maximizes arithmetic intensity to achieve better than real-time performance, up to 42 frames per second for 800 × 600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing for further algorithmic advancement without losing real-time capability.

  12. Comparative study between a spectral domain and a high-speed single-beam swept source OCTA system for identifying choroidal neovascularization in AMD

    NASA Astrophysics Data System (ADS)

    Told, R.; Ginner, L.; Hecht, A.; Sacu, S.; Leitgeb, R.; Pollreisz, A.; Schmidt-Erfurth, U.

    2016-12-01

    This comparative study between an SD- and an SS-OCTA system for visualizing neovascular patterns in AMD also assessed the influence of cataract on OCTA imaging. 25 eyes with active CNV (AMD) were documented by FA, ICGA and SD-OCT. Two OCTA devices were used: a custom-built SS-OCTA (1050 nm, 400,000 A-scans/s, 5 × 5 mm, no image segmentation) and the AngioVue (OptoVue, CA, USA) SD-OCTA (840 nm, 70,000 A-scans/s, 3 × 3 mm, SSADA technology). Two retina experts graded CNV types and vascular patterns. Cataract influence on OCTA image quality was reported for the superficial retinal plexus (6 eyes). The SS-OCTA prototype showed more CNV lesions compared to the SD-OCTA system (p = 0.01). Overall sensitivities of the SD- and SS-OCTA systems to detect CNV lesions were 0.32 and 0.68, respectively. The SS-OCTA system was able to detect discrete lesion characteristics better than the SD-OCTA. No significant difference was found in the ability to identify CNV in treatment-naïve eyes. There was no significant influence of cataract. The SS-OCTA prototype detected CNV-associated vascular patterns more reliably than the SD-OCTA system. This is attributed to the SS-OCTA system’s longer center wavelength and higher A-scan rate, yielding higher definition and contrast of small neovascular structures. The SS-OCTA system used showed no advantage regarding cataract influence.

  13. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm.

    PubMed

    Li, Bin; Fu, Hong; Wen, Desheng; Lo, WaiLun

    2018-05-19

    Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking. We proposed a combined gaze-tracking algorithm. In this algorithm, a convolutional neural network is used to remove blink images and predict a coarse gaze position, and a geometric model is then defined for accurate human gaze tracking. Moreover, we proposed using the mean value of gazes in the calibration algorithm to resolve pupil-center changes caused by nystagmus, so that an individual user only needs to calibrate the first time, which makes our system more convenient. The experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and Etracker can provide an average gaze accuracy of 0.53° at a rate of 30–60 Hz.
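
    The "mean value of gazes" idea in the calibration step can be illustrated generically: while the user fixates each calibration target, many noisy pupil-centre estimates are collected and averaged before the target-to-gaze mapping is fitted. The sketch below is an assumed, simplified version of that averaging step, not the authors' code.

        import numpy as np

        def averaged_calibration_points(samples_per_target):
            # samples_per_target: {target_id: [(x, y), ...]} noisy pupil centres per fixation.
            # Returns one averaged pupil centre per calibration target, suppressing
            # frame-to-frame jitter (e.g. from nystagmus) before fitting the gaze mapping.
            return {t: tuple(np.mean(np.asarray(pts), axis=0))
                    for t, pts in samples_per_target.items()}

        print(averaged_calibration_points({0: [(101, 60), (99, 62), (100, 58)]}))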

  14. Development of a low cost high precision three-layer 3D artificial compound eye.

    PubMed

    Zhang, Hao; Li, Lei; McCray, David L; Scheiding, Sebastian; Naples, Neil J; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas; Yi, Allen Y

    2013-09-23

    Artificial compound eyes are typically designed on planar substrates due to the limits of current imaging devices and available manufacturing processes. In this study, a high precision, low cost, three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was constructed to mimic an apposition compound eye on a curved substrate. The freeform microlens array was manufactured on a curved substrate to alter incident light beams and steer their respective images onto a flat image plane. The optical design was performed using ZEMAX. The optical simulation shows that the artificial compound eye can form multiple images with aberrations below 11 μm, adequate for many imaging applications. Both the freeform lens array and the field lens array were manufactured using a microinjection molding process to reduce cost. Aluminum mold inserts were diamond machined by the slow tool servo method. The performance of the compound eye was tested using a home-built optical setup. The images captured demonstrate that the proposed structures can successfully steer images from a curved surface onto a planar photoreceptor. Experimental results show that the compound eye in this research has a field of view of 87°. In addition, images formed by multiple channels were found to be evenly distributed on the flat photoreceptor. Additionally, overlapping views of the adjacent channels allow higher-resolution images to be reconstructed from multiple 3D images taken simultaneously.

  15. Effects of anode geometry on forward wide-angle neon ion emissions in 3.5 kJ plasma focus device by novel mega-size panorama polycarbonate image detectors

    NASA Astrophysics Data System (ADS)

    Sohrabi, M.; Soltani, Z.; Sarlak, Z.

    2018-03-01

    Forward wide-angle neon ion emissions in a 3.5 kJ plasma focus device (PFD) were studied using 5 different anode top geometries: hollow-end cylinder, solid triangle, solid hemisphere, hollow-end cone and flat-end cone. Position-sensitive mega-size panorama polycarbonate ion image detectors (MS-PCID) developed by dual-cell circular mega-size electrochemical etching (MS-ECE) systems were applied for processing wide-angle neon ion images on MS-PCIDs exposed on the PFD cylinder top base under a single pinch shot. The images can be simply observed, analyzed and relatively quantified in terms of ion emission angular distributions even by the unaided eye. By analysis of the forward neon ion emission images, the ion emission yields, ion emission angular distributions, iso-fluence ion contours and solid angles of ion emissions in 4π PFD space were determined. The neon ion emission yields on the PFD cylinder top base are, in increasing order, ~2.1×10⁹, ~2.2×10⁹, ~2.8×10⁹, ~2.9×10⁹, and ~3.5×10⁹ neon ions/shot for the 5 stated anode top geometries, respectively. The panorama neon ion images, as diagnosed even by the unaided eye, demonstrate the lowest and highest ion yields from the hollow-end cylinder and flat-end cone anode tops, respectively. Relative dynamic qualitative neon ion spectrometry was performed by the unaided eye, indicating the relative neon ion energies. The study also demonstrates the unique power of the MS-PCID/MS-ECE imaging system as an advanced state-of-the-art ion imaging method for wide-angle dynamic parametric studies in PFD space and other ion study applications.

  16. Retinal imaging using adaptive optics technology

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the first commercially available instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO include more detailed diagnosis, with description of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already begun. PMID:24843304

  17. NASA-NOAA's Suomi NPP Satellite Cyclone Haruna Near Madagascar at Night

    NASA Image and Video Library

    2017-12-08

    This night-time image revealed Cyclone Haruna's massive eye before it made landfall in southwestern Madagascar. This image was taken from the VIIRS instrument that flies aboard the NASA-NOAA Suomi NPP satellite. The image was taken on Feb. 20 at 2242 UTC (5:42 p.m. EST/U.S.) and shows a clear eye, surrounded by very powerful thunderstorms. The bright lights of the capital city of Antananarivo are seen in this image. The capital city lies about 300 nautical miles northwest of the storm's center. Haruna's center made landfall near Manombo, Madagascar around 0600 UTC (1 a.m. EST/U.S.) and its eye became cloud-filled quickly. For the entire storm history, visit NASA's Hurricane Page: www.nasa.gov/mission_pages/hurricanes/archives/2013/h2013... Credit: Univ. of Wisconsin/NASA/NOAA

  18. Ultra-Wide Field Imaging in Paradoxical Worsening of Tubercular Multifocal Serpiginoid Choroiditis after the Initiation of Anti-Tubercular Therapy.

    PubMed

    Aggarwal, Kanika; Agarwal, Aniruddha; Deokar, Ankit; Singh, Ramandeep; Bansal, Reema; Sharma, Aman; Sharma, Kusum; Dogra, Mangat R; Gupta, Vishali

    2017-10-11

    To evaluate the role of ultra-wide field (UWF) versus conventional imaging in the follow-up and paradoxical worsening (PW) of tubercular (TB) multifocal serpiginoid choroiditis (MSC). A prospective observational study of patients with TB MSC undergoing UWF imaging, autofluorescence, and fluorescein angiography was performed. A circle simulating the central 75° field, representing conventional imaging, was drawn on UWF images. The information yielded by the two modalities, the progression of choroiditis lesions, and PW were compared. 44 eyes (29 patients, mean age: 30.7 ± 9 years; 23 males) were included. UWF imaging showed additional lesions in 39/44 eyes (88.6%). Overall, 16/44 eyes (36.4%) showed PW; 3/16 eyes (18.7%) showed only peripheral PW, while 10/16 eyes showed both central and peripheral PW. Management was altered in 11 patients (37.93%) based on UWF imaging. UWF imaging is more useful than conventional imaging in identifying additional choroiditis lesions and PW, and in altering the course of therapy in TB MSC.

  19. Lightless cataract surgery using a near-infrared operating microscope.

    PubMed

    Kim, Bong-Hyun

    2006-10-01

    To describe the near-infrared (NIR) operating microscopy (NIOM) system using the NIR wavelength as the illumination source and to evaluate the feasibility of this system for lightless cataract surgery. HenAm Kim Eye Center, Haenam-Gun, South Korea. In this noncomparative interventional case series, cataract surgery was performed in 4 patients with bilateral cataract using the NIOM system in 1 eye and conventional microscopy in the fellow eye. The primary components of the system include an optical filter, a stereoscopic camera, head-mounted displays, and a recording system. This system uses invisible NIR (wavelength 850 to 1300 nm) illumination to facilitate cataract surgery without light. The differences between the NIOM system and conventional microscopy during cataract surgery were evaluated. The NIOM system provided excellent 3-dimensional viewing in real time. The image resolution was sufficient while performing all steps of cataract surgery. Immediately postoperatively and at 10 and 30 minutes and 1 hour, the visual acuity was better in the 4 eyes in which the NIOM system was used than in the 4 eyes in which conventional microscopy was used. However, using the NIOM system required good surgical skill. Lightless cataract surgery using the NIOM system seems useful for obtaining good visual acuity immediately postoperatively. The system may also reduce the incidence of light-induced retinal toxicity and the need for mydriatic administration and be a good educational tool.

  20. Ultrahigh-speed ultrahigh-resolution adaptive optics: optical coherence tomography system for in-vivo small animal retinal imaging

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Xu, Jing; Zawadzki, Robert J.; Sarunic, Marinko V.

    2013-03-01

    Small animal models of human retinal diseases are a critical component of vision research. In this report, we present an ultrahigh-resolution ultrahigh-speed adaptive optics optical coherence tomography (AO-OCT) system for small animal retinal imaging (mouse, fish, etc.). We adapted our imaging system to different types of small animals in accordance with the optical properties of their eyes. Results of AO-OCT images of small animal retinas acquired with AO correction are presented. Cellular structures including nerve fiber bundles, capillary networks and detailed double-cone photoreceptors are visualized.

  1. Practical low-cost stereo head-mounted display

    NASA Astrophysics Data System (ADS)

    Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.

    1991-08-01

    A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. The displays provide 720 by 280 monochrome pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation, and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wire-frame images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.

  2. A compact eyetracked optical see-through head-mounted display

    NASA Astrophysics Data System (ADS)

    Hua, Hong; Gao, Chunyu

    2012-03-01

    An eye-tracked head-mounted display (ET-HMD) system is able to display virtual images as a classical HMD does, while additionally tracking the gaze direction of the user. There is ample evidence that a fully integrated ET-HMD system offers multifold benefits, not only to fundamental scientific research but also to emerging applications of such technology. For instance, eye-tracking capability in HMDs adds a valuable tool and objective metric for scientists to quantitatively assess user interaction with 3D environments and to investigate the effectiveness of various 3D visualization technologies for specific tasks, including training, education, and augmented cognition. In this paper, we present an innovative optical approach to the design of an optical see-through ET-HMD system based on freeform optical technology and an optical scheme that uniquely combines the display optics with the eye-imaging optics. A preliminary design of the described ET-HMD system is presented.

  3. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method with compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, the linear projections of original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification, based on original image processing, into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.
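
    Because compressive-sensing measurements are linear projections, differencing two frames in the measurement domain is equivalent to measuring the difference frame, which is the property that allows background suppression before any reconstruction. The sketch below illustrates that linearity with a random Gaussian measurement matrix; the matrix, sizes and the simulated frames are assumptions for illustration, not the SPCS algorithm itself.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_measurements = 1024, 128
        phi = rng.standard_normal((n_measurements, n_pixels))   # measurement matrix

        background = rng.standard_normal(n_pixels)              # shared scene background
        target = np.zeros(n_pixels)
        target[500:510] = 5.0                                    # sparse cat-eye return
        active = background + target                             # laser-on frame
        passive = background                                      # laser-off frame

        # Differencing the measurement vectors equals measuring the difference image
        y_diff = phi @ active - phi @ passive
        print(np.allclose(y_diff, phi @ target))                 # -> True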

  4. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
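
    As one possible reading of how the listed quantities (eye corners, midpupil, eyeball radius) could combine under a simplified spherical-eye geometry, the sketch below computes a horizontal gaze angle; the corner-symmetry assumption and the way the radius is supplied are mine, not the paper's exact model.

        import numpy as np

        def gaze_angle_deg(corner_left, corner_right, midpupil, eye_radius_px):
            # All 2-D points in image pixels; eye_radius_px is the eyeball radius in the
            # same pixel units (e.g. anthropometric radius scaled by the measured
            # corner-to-corner distance -- an assumption for illustration).
            corner_left, corner_right, midpupil = map(np.asarray,
                                                      (corner_left, corner_right, midpupil))
            eye_center = (corner_left + corner_right) / 2.0    # assume symmetric corners
            offset = midpupil[0] - eye_center[0]               # horizontal pupil displacement
            return float(np.degrees(np.arcsin(np.clip(offset / eye_radius_px, -1.0, 1.0))))

        # Hypothetical pixel coordinates: pupil shifted 5 px from eye center, radius 24 px
        print(gaze_angle_deg((80, 100), (120, 100), (105, 100), eye_radius_px=24.0))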

  5. Mapping owl's eye cells of patients with cytomegalovirus corneal endotheliitis using in vivo laser confocal microscopy.

    PubMed

    Yokogawa, Hideaki; Kobayashi, Akira; Sugiyama, Kazuhisa

    2013-01-01

    To produce a two-dimensional reconstruction map of owl's eye cells using in vivo laser confocal microscopy in patients with cytomegalovirus (CMV) corneal endotheliitis, and to demonstrate any association between owl's eye cells and coin-shaped lesions observed with slit-lamp biomicroscopy. Two patients (75- and 77-year-old men) with polymerase chain reaction-proven CMV corneal endotheliitis were evaluated in this study. Slit-lamp biomicroscopy and in vivo laser confocal microscopy were performed. Images of owl's eye cells in the endothelial cell layer were arranged and mapped into subconfluent montages. Montage images of owl's eye cells were then superimposed on a slit-lamp photo of the corresponding coin-shaped lesion. Degree of concordance between the confocal microscopic images and slit-lamp photos was evaluated. In both eyes, a two-dimensional reconstruction map of the owl's eye cells was created by computer software using acquired confocal images; the maps showed circular patterns. Superimposing montage images of owl's eye cells onto the photos of a coin-shaped lesion showed good concordance in the two eyes. This study suggests that there is an association between owl's eye cells observed by confocal microscopy and coin-shaped lesions observed by slit-lamp biomicroscopy in patients with CMV corneal endotheliitis. The use of in vivo laser confocal microscopy may provide clues as to the underlying causes of CMV corneal endotheliitis.

  6. Pursuit and prediction in the tracking of moving food by a teleost fish (Acanthaluteres spilomelanurus).

    PubMed

    Lanchester, B S; Mark, R F

    1975-12-01

    1. The path, eye and body movements of a teleost fish (the leatherjacket Acanthaluteres spilomelanurus) approaching and taking food were measured by cinematography. 2. Fixation of the food by movement of the eyes is an invariable feature of the approach. The eyes then remain aligned with the target while the body moves forward and round to bring the mouth to the food. 3. When pursuing pieces of food moving vertically at constant velocity through the water these fish normally trace out the pathway that can be calculated by assuming the fish aims constantly at the food. Predictive pathways that imply anticipation of the point of intersection with the food are not regularly seen. 4. Deviations from pursuit occur sporadically, usually in the direction of a predictive path, particularly when the fish approach falling food from below. 5. The geometry of the situation suggests that predictive paths may sometimes be generated if the alignment of eye and body during the pursuit of moving food can be delayed. In approaches from below this may be because forward movement of the fish would tend to stabilize the image of the falling food in the retina. 6. We suggest that a simple linked control system using both eye and body movements to fixate retinal images will on occasions generate predictive pathways without any need for the central nervous system to calculate them in advance.
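
    The "aims constantly at the food" rule in point 3 is a pure-pursuit strategy, and the path it predicts can be generated by repeatedly stepping the pursuer toward the target's current position. The sketch below simulates that rule for a vertically falling food item; the speeds and positions are arbitrary illustrative values, not data from this study.

        import numpy as np

        def pursuit_path(fish_xy, food_xy, fish_speed, food_velocity, dt=0.02, steps=300):
            # Pure pursuit: at every instant the fish heads straight at the food's current position.
            fish = np.asarray(fish_xy, dtype=float)
            food = np.asarray(food_xy, dtype=float)
            vel = np.asarray(food_velocity, dtype=float)
            path = [fish.copy()]
            for _ in range(steps):
                heading = food - fish
                dist = np.linalg.norm(heading)
                if dist < fish_speed * dt:        # capture reached
                    break
                fish += fish_speed * dt * heading / dist
                food += vel * dt
                path.append(fish.copy())
            return np.array(path)

        # Food falling at 4 units/s; fish swims at 8 units/s from the origin
        print(pursuit_path((0.0, 0.0), (5.0, 10.0), fish_speed=8.0,
                           food_velocity=(0.0, -4.0)).shape)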

  7. Ultrasound biomicroscopy after canaloplasty: clinical study with two different units.

    PubMed

    Doro, Daniele; Koerber, Norbert; Paolucci, Pierpaolo; Cimatti, Pierangela

    2012-12-01

    Canaloplasty is a recent non-penetrating glaucoma surgical procedure in which Schlemm's canal is dilated and the trabecular meshwork distended by a tensioning polypropylene suture. The aim of this study was to visualize the iridocorneal angle after the canaloplasty procedure by means of two different ultrasound biomicroscopy (UBM) units. Ten eyes of nine patients with primary open angle glaucoma (average age 62 years) underwent canaloplasty (six eyes) or canaloplasty combined with phacoemulsification and in-the-bag intraocular lens implantation (four eyes). Both 50 MHz (Paradigm P45) and 80 MHz (i-UltraSound) systems were used. All procedures were performed by the same surgeon. UBM examination was performed 3 to 12 (mean 7 +/- 3.1) months after surgery. No, mild, and good trabecular meshwork distension by suture tensioning were graded as 0, 1, and 2, respectively, according to the higher-resolution 80 MHz images. Both ultrasound systems could show the intrascleral lake and trabecular meshwork distension, which was graded as 0, 1, and 2 in 10%, 30%, and 60% of eyes, respectively. Schlemm's canal could be imaged with the 80 MHz transducer only. The overall qualified success of canaloplasty (80%) was apparently correlated with suture tensioning (r=0.64). In our experience, after canaloplasty both the 80 MHz and the 50 MHz technology can show trabecular meshwork distension. A greater number of eyes is needed to assess the correlation between intraocular pressure decrease and suture tensioning.

  8. Intraocular Hemorrhages and Retinopathy of Prematurity in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study.

    PubMed

    Daniel, Ebenezer; Ying, Gui-Shuang; Siatkowski, R Michael; Pan, Wei; Smith, Eli; Quinn, Graham E

    2017-03-01

    To describe the clinical characteristics of intraocular hemorrhages (IOHs) in infants in the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) Study and to evaluate their potential use for prediction of disease severity. Secondary data analysis from a prospective study. Preterm infants with birth weight (BW) ≤1250 g. Infants underwent serial digital retinal imaging in both eyes starting at 32 weeks' postmenstrual age. Nonphysician trained readers (TRs) evaluated all image sets from eyes that ever had IOHs documented on image evaluation or eye examination for the presence, location, type, area, and relation of the IOH to the junction between vascularized and avascular retina. Associations of IOH with demographic and neonatal factors, and with the presence and severity of retinopathy of prematurity (ROP) were investigated by univariate and multivariate analyses. Sensitivity and specificity of the telemedicine system for detecting referral-warranted ROP (RW-ROP) were calculated with and without incorporating hemorrhage into the standardized grading protocol. Retinal and vitreous hemorrhage. Among 1239 infants (mean [standard deviation] BW = 864 [212] g; gestational age [GA] = 27 [2.2] weeks) who underwent an average of 3.2 imaging sessions, 22% had an IOH in an eye on at least 1 of the e-ROP visits. Classification of IOH was preretinal (57%), blot (57%), dot (38%), flame-shaped (16%), and vitreous (8%); most IOHs were unilateral (70%). The IOH resolved in 35% of eyes by the next imaging session and in the majority (76%) of cases by 8 weeks after initial detection. Presence of IOH was inversely associated with BW and GA and significantly associated (P < 0.0001) with the presence and severity of ROP (BW and GA adjusted odds ratios [ORs] of 2.46 for any ROP, 2.88 for stage 3, and 3.19 for RW-ROP). Incorporating IOH into the grading protocol minimally altered the sensitivity of the system (94% vs. 95%). Approximately 1 in 5 preterm infants examined had IOHs, generally unilateral. The presence of hemorrhage was directly correlated with both presence and severity of ROP and inversely correlated with BW and GA, although including hemorrhage in the grading algorithm only minimally improved the sensitivity of the telemedicine system to detect RW-ROP. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
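
    The odds ratios reported here derive from 2 × 2 tables of hemorrhage presence versus ROP outcome (with additional adjustment for BW and GA in the paper). The sketch below computes an unadjusted odds ratio and its 95% confidence interval from hypothetical counts, not the study's data.

        import math

        def odds_ratio_ci(a, b, c, d):
            # a: exposed with outcome, b: exposed without, c: unexposed with, d: unexposed without
            odds_ratio = (a * d) / (b * c)
            se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf standard error
            log_or = math.log(odds_ratio)
            lower = math.exp(log_or - 1.96 * se_log)
            upper = math.exp(log_or + 1.96 * se_log)
            return odds_ratio, lower, upper

        # Hypothetical counts for illustration only (IOH vs. no IOH, ROP vs. no ROP)
        print(odds_ratio_ci(a=120, b=150, c=200, d=620))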

  9. A positive effect of flowers rather than eye images in a large-scale, cross-cultural dictator game.

    PubMed

    Raihani, Nichola J; Bshary, Redouan

    2012-09-07

    People often consider how their behaviour will be viewed by others, and may cooperate to avoid gaining a bad reputation. Sensitivity to reputation may be elicited by subtle social cues of being watched: previous studies have shown that people behave more cooperatively when they see images of eyes rather than control images. Here, we tested whether eye images enhance cooperation in a dictator game, using the online labour market Amazon Mechanical Turk (AMT). In contrast to our predictions and the results of most previous studies, dictators gave away more money when they saw images of flowers rather than eye images. Donations in response to eye images were not significantly different to donations under control treatments. Dictator donations varied significantly across cultures but there was no systematic variation in responses to different image types across cultures. Unlike most previous studies, players interacting via AMT may feel truly anonymous when making decisions and, as such, may not respond to subtle social cues of being watched. Nevertheless, dictators gave away similar amounts as in previous studies, so anonymity did not erase helpfulness. We suggest that eye images might only promote cooperative behaviour in relatively public settings and that people may ignore these cues when they know their behaviour is truly anonymous.

  10. Visual attention to food cues in obesity: an eye-tracking study.

    PubMed

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts however no between weight group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.

  11. Iris recognition in the presence of ocular disease

    PubMed Central

    Aslam, Tariq Mehmood; Tan, Shi Zhuan; Dhillon, Baljean

    2009-01-01

    Iris recognition systems are among the most accurate of all biometric technologies with immense potential for use in worldwide security applications. This study examined the effect of eye pathology on iris recognition and in particular whether eye disease could cause iris recognition systems to fail. The experiment involved a prospective cohort of 54 patients with anterior segment eye disease who were seen at the acute referral unit of the Princess Alexandra Eye Pavilion in Edinburgh. Iris camera images were obtained from patients before treatment was commenced and again at follow-up appointments after treatment had been given. The principal outcome measure was that of mathematical difference in the iris recognition templates obtained from patients' eyes before and after treatment of the eye disease. Results showed that the performance of iris recognition was remarkably resilient to most ophthalmic disease states, including corneal oedema, iridotomies (laser puncture of iris) and conjunctivitis. Problems were, however, encountered in some patients with acute inflammation of the iris (iritis/anterior uveitis). The effects of a subject developing anterior uveitis may cause current recognition systems to fail. Those developing and deploying iris recognition should be aware of the potential problems that this could cause to this key biometric technology. PMID:19324690

  13. Identification of Diabetic Retinopathy and Ungradable Image Rate with Ultrawide Field Imaging in a National Teleophthalmology Program.

    PubMed

    Silva, Paolo S; Horton, Mark B; Clary, Dawn; Lewis, Drew G; Sun, Jennifer K; Cavallerano, Jerry D; Aiello, Lloyd Paul

    2016-06-01

    To compare diabetic retinopathy (DR) identification and ungradable image rates between nonmydriatic ultrawide field (UWF) imaging and nonmydriatic multifield fundus photography (NMFP) in a large multistate population-based DR teleophthalmology program. Multiple-site, nonrandomized, consecutive, cross-sectional, retrospective, uncontrolled imaging device evaluation. Thirty-five thousand fifty-two eyes (17 526 patients) imaged using NMFP and 16 218 eyes (8109 patients) imaged using UWF imaging. All patients undergoing Joslin Vision Network (JVN) imaging with either NMFP or UWF imaging from May 1, 2014, through August 30, 2015, within the Indian Health Service-JVN program, which serves American Indian and Alaska Native communities at 97 sites across 25 states, were evaluated. All retinal images were graded using a standardized validated protocol in a centralized reading center. Ungradable rate for DR and diabetic macular edema (DME). The ungradable rate per patient for DR and DME was significantly lower with UWF imaging compared with NMFP (DR, 2.8% vs. 26.9% [P < 0.0001]; DME, 3.8% vs. 26.2% [P < 0.0001]). Identification of eyes with either DR or referable DR (moderate nonproliferative DR or DME or worse) was increased using UWF imaging from 11.7% to 24.2% (P < 0.0001) and from 6.2% to 13.6% (P < 0.0001), respectively. In eyes with DR imaged with UWF imaging (n = 3926 eyes of 2402 patients), the presence of predominantly peripheral lesions suggested a more severe level of DR in 7.2% of eyes (9.6% of patients). In a large, widely distributed DR ocular telehealth program, as compared with NMFP, nonmydriatic UWF imaging reduced the number of ungradable eyes by 81%, increased the identification of DR nearly 2-fold, and identified peripheral lesions suggesting more severe DR in almost 10% of patients, thus demonstrating significant benefits of this imaging method for large DR teleophthalmology programs. Copyright © 2016 American Academy of Ophthalmology. All rights reserved.

  14. Retrospective Evaluation of a Teleretinal Screening Program in Detecting Multiple Nondiabetic Eye Diseases.

    PubMed

    Maa, April Y; Patel, Shivangi; Chasan, Joel E; Delaune, William; Lynch, Mary G

    2017-01-01

    Diabetic teleretinal screening programs have been utilized successfully across the world to detect diabetic retinopathy (DR) and are well validated. Less information, however, exists on the ability of teleretinal imaging to detect nondiabetic ocular pathology. This study performed a retrospective evaluation to assess the ability of a community-based diabetic teleretinal screening program to detect common ocular disease other than DR. A retrospective chart review of 1,774 patients who underwent diabetic teleretinal screening was performed. Eye clinic notes from the Veterans Health Administration's electronic medical record, Computerized Patient Record System, were searched for each of the patients screened through teleretinal imaging. When a face-to-face examination note was present, the physical findings were compared to those obtained through teleretinal imaging. Sensitivity, specificity, and positive and negative predictive values were calculated for suspicious nerve, cataract, and age-related macular degeneration. A total of 903 patients underwent a clinical examination. The positive predictive value was highest for cataract (100%), suspicious nerve (93%), and macular degeneration (90%). The negative predictive value and the percent agreement between teleretinal imaging and a clinical examination were over 90% for each disease category. A teleretinal imaging protocol may be used to screen for other common ocular diseases. It may be feasible to use diabetic teleretinal photographs to screen patients for other potential eye diseases. Additional elements of the eye workup may be added to enhance accuracy of disease detection. Further study is necessary to confirm this initial retrospective review.

  15. Automated grading system for evaluation of ocular redness associated with dry eye.

    PubMed

    Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B

    2013-01-01

    We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and have incorporated this feature into the grading of redness in clinical studies of dry eye. To develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Ninety-nine images from 26 dry eye subjects were evaluated by five graders using a 0-4 (in 0.5 increments) dry eye redness (Ora Calibra™ Dry Eye Redness Scale [OCDER]) scale. For the automated method, the OpenCV computer vision library was used to develop software for calculating redness and horizontal conjunctival vessels (noted as "horizontality"). From the original photograph, the region of interest (ROI) was selected manually using the open source ImageJ software. Total average redness intensity (Com-Red) was calculated as a single-channel 8-bit image as R - 0.83G - 0.17B, where R, G and B were the respective intensities of the red, green and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (Com-Hor) was calculated by the first-order Sobel derivative in the vertical direction, and the score was calculated as the average blue-channel image intensity of this vertical derivative. Pearson correlation coefficients, accuracy and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. The agreement (both Pearson's and CCC) among investigators using the OCDER scale was 0.67, while the agreement of investigator to computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved to be more reliable than another investigator using the OCDER scale. The best-fitting model based on the present sample, and usable for future studies, was [Formula: see text], where [Formula: see text] is the predicted investigator grade, and [Formula: see text] and [Formula: see text] are logarithmic transformations of the computer-calculated parameters Com-Hor and Com-Red. Considering its superior repeatability, computer-automated grading might be preferable to investigator grading in multicentered dry eye studies in which the subtle differences in redness incurred by treatment have been historically difficult to define.
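
    A minimal sketch of the two image measures described above (Com-Red and the horizontality score Com-Hor), assuming an 8-bit BGR region of interest loaded with OpenCV; the function name, the vessel-masking step and the use of the absolute vertical Sobel response are this sketch's interpretation of the abstract, not the authors' code.

        import cv2
        import numpy as np

        def redness_scores(roi_bgr):
            """Approximate Com-Red and Com-Hor for a conjunctival ROI.

            roi_bgr: 8-bit BGR image of the manually selected region of interest.
            Coefficients follow the abstract: redness = R - 0.83*G - 0.17*B;
            horizontality = mean vertical Sobel response of the blue channel
            over vessel pixels (an interpretation, not the published code).
            """
            b, g, r = cv2.split(roi_bgr.astype(np.float32))

            # Single-channel redness image, clipped back to the 8-bit range.
            red_img = np.clip(r - 0.83 * g - 0.17 * b, 0, 255)
            com_red = float(red_img.mean())

            # Vessel pixels: normalized blue channel darker than 97% of its mean.
            b_norm = b / (b.mean() + 1e-6)
            vessel_mask = b_norm < 0.97

            # First-order Sobel derivative in the vertical direction responds
            # strongly to horizontal vessels.
            dy = cv2.Sobel(b, cv2.CV_32F, dx=0, dy=1, ksize=3)
            com_hor = float(np.abs(dy)[vessel_mask].mean()) if vessel_mask.any() else 0.0

            return com_red, com_hor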

  16. The fate of the oculomotor system in clinical bilateral anophthalmia.

    PubMed

    Bridge, Holly; Ragge, Nicola; Jenkinson, Ned; Cowey, Alan; Watkins, Kate E

    2012-05-01

    The interdependence of the development of the eye and oculomotor system during embryogenesis is currently unclear. The occurrence of clinical anophthalmia, where the globe fails to develop, permits us to study the effects this has on the development of the complex neuromuscular system controlling eye movements. In this study, we use very high-resolution T2-weighted imaging in five anophthalmic subjects to visualize the extraocular muscles and the cranial nerves that innervate them. The subjects differed in the presence or absence of the optic nerve, the abducens nerve, and the extraocular muscles, reflecting differences in the underlying disruption to the eye's morphogenetic pathway. The oculomotor nerve was present in all anophthalmic subjects and only slightly reduced in size compared to measurements in sighted controls. As might be expected, the presence of rudimentary eye-like structures in the socket appeared to correlate with development and persistence of the extraocular muscles in some cases. Our study supports in part the concept of an initial independence of muscle development, with its maintenance subject to the presence of these eye-like structures.

  17. Retinal and choroidal imaging in vivo using integrated photoacoustic microscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.

    2018-02-01

    Most reported photoacoustic ocular imaging work to date uses small animals, such as mice and rats, the eyes of which are small and less than one-third the size of a human eye, which poses a challenge for clinical translation. Here we achieved chorioretinal imaging of larger animals, i.e. rabbits, using a dual-modality photoacoustic microscopy (PAM) and optical coherence tomography (OCT) system. Preliminary experimental results in living rabbits demonstrate that the PAM can noninvasively visualize depth-resolved retinal and choroidal vessels using a safe laser exposure dose; and the OCT can finely distinguish different retinal layers, the choroid, and the sclera. This reported work might be a major step forward in clinical translation of photoacoustic microscopy.

  18. Imaging patients with glaucoma using spectral-domain optical coherence tomography and optical microangiography

    NASA Astrophysics Data System (ADS)

    Auyeung, Kris; Auyeung, Kelsey; Kono, Rei; Chen, Chieh-Li; Zhang, Qinqin; Wang, Ruikang K.

    2015-03-01

    In ophthalmology, a reliable means of diagnosing glaucoma in its early stages is still an open issue. Past efforts to develop a potential biomarker for the disease, including forays into fluorescein angiography (FA) and early optical coherence tomography (OCT) systems, have been explored. However, this development has been hindered by the inability of current techniques to provide useful depth and microvasculature information of the optic nerve head (ONH), which have been debated as possible hallmarks of glaucoma progression. We reasoned that a system incorporating spectral-domain OCT (SD-OCT) based Optical Microangiography (OMAG) could allow an effective, non-invasive methodology to evaluate the effects of glaucoma on microvasculature. SD-OCT follows the principle of light reflection and interference to produce detailed cross-sectional and 3D images of the eye. OMAG produces imaging contrast via endogenous light scattering from moving particles, allowing 3D imaging of dynamic blood perfusion at capillary-level resolution. The purpose of this study was to investigate the optic cup perfusion (flow) differences in glaucomatous and normal eyes. Images from three normal and five glaucomatous subjects were analyzed using our OCT-based OMAG system to obtain blood perfusion and structural images, allowing for comparisons. Preliminary results from blood flow analysis revealed reduced blood perfusion within the whole-depth region encompassing the lamina cribrosa in glaucomatous cases as compared to normal ones. We conclude that our OCT-OMAG system may provide promise and viability for glaucoma screening.

  19. Portable dynamic fundus instrument

    NASA Technical Reports Server (NTRS)

    Taylor, Gerald R. (Inventor); Meehan, Richard T. (Inventor); Hunter, Norwood R. (Inventor); Caputo, Michael P. (Inventor); Gibson, C. Robert (Inventor)

    1992-01-01

    A portable diagnostic image analysis instrument is disclosed for retinal funduscopy in which an eye fundus image is optically processed by a lens system to a charge coupled device (CCD) which produces recordable and viewable output data and is simultaneously viewable on an electronic view finder. The fundus image is processed to develop a representation of the vessel or vessels from the output data.

  20. Adaptive optics ophthalmoscopy

    PubMed Central

    Roorda, Austin; Duncan, Jacque L.

    2016-01-01

    This review starts with a brief history and description of adaptive optics (AO) technology, followed by a showcase of the latest capabilities of AO systems for imaging the human retina and an extensive review of the literature on where AO is being used clinically. The review concludes with a discussion on future directions and guidance on usage and interpretation of images from AO systems for the eye. PMID:26973867

  1. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  2. Detection of Potentially Severe Retinopathy of Prematurity by Remote Image Grading.

    PubMed

    Quinn, Graham E; Ying, Gui-Shuang; Pan, Wei; Baumritter, Agnieshka; Daniel, Ebenezer

    2017-09-01

    Telemedicine in retinopathy of prematurity (ROP) has the potential for delivering timely care to premature infants at risk for serious ROP. To describe the characteristics of eyes at risk for ROP to provide insights into what types of ROP are most easily detected early by image grading. Secondary analysis of eyes with referral-warranted (RW) ROP (stage 3 ROP, zone I ROP, plus disease) on diagnostic examination from the Telemedicine Approaches to Evaluating Acute-Phase Retinopathy of Prematurity (e-ROP) study was conducted from May 1, 2011, to October 31, 2013, in 1257 premature infants with birth weights less than 1251 g in 13 neonatal units in North America. Data analysis was performed between February 1, 2016, and June 5, 2017. Serial imaging sessions with concurrent diagnostic examinations for ROP. Time of detecting RW-ROP on image evaluation compared with clinical examination. In the e-ROP study, 246 infants (492 eyes) were included in the analysis; 138 (56.1%) were male. A total of 447 eyes had RW-ROP on diagnostic examination. Image grading detected RW-ROP earlier than diagnostic examination (early) in 191 (42.7%) eyes of 123 infants (mean [SD] gestational age, 24.8 [1.4] weeks), by about 15 days, and detected RW-ROP at the same time (same) in 200 (44.7%) eyes of 123 infants (mean [SD] gestational age, 24.6 [1.5] weeks). Most of the early eyes (153 [80.1%]) interpreted as being RW-ROP positive on imaging evaluation agreed with examination findings when the examination subsequently documented RW-ROP. At the sessions in which RW-ROP was first found by examination, stage 3 or worse ROP was noted on image evaluation in 151 of 191 early eyes (79.1%) and in 172 of 200 same eyes (86.0%) (P = .08); the presence of zone I ROP was detected in 57 of 191 (29.8%) early eyes vs 64 of 200 (32.0%) same eyes (P = .90); and plus disease was noted in 30 of 191 (15.7%) early eyes and 45 of 200 (22.5%) same eyes (P = .08). In both early and same eyes, zone I and/or stage 3 ROP determined a significant proportion of RW-ROP; plus disease played a relatively minor role. In most early RW-ROP eyes, the findings were consistent with clinical examination and/or image grading at the next session. As ROP telemedicine is used more widely, development of standard approaches and protocols is essential.

  3. Multispectral laser-induced fluorescence imaging system for large biological samples

    NASA Astrophysics Data System (ADS)

    Kim, Moon S.; Lefcourt, Alan M.; Chen, Yud-Ren

    2003-07-01

    A laser-induced fluorescence imaging system developed to capture multispectral fluorescence emission images simultaneously from a relatively large target object is described. With an expanded, 355-nm Nd:YAG laser as the excitation source, the system captures fluorescence emission images in the blue, green, red, and far-red regions of the spectrum centered at 450, 550, 678, and 730 nm, respectively, from a 30-cm-diameter target area in ambient light. Images of apples and of pork meat artificially contaminated with diluted animal feces have demonstrated the versatility of fluorescence imaging techniques for potential applications in food safety inspection. Regions of contamination, including sites that were not readily visible to the human eye, could easily be identified from the images.

  4. Drowsy driver mobile application: Development of a novel scleral-area detection method.

    PubMed

    Mohammad, Faisal; Mahadas, Kausalendra; Hung, George K

    2017-10-01

    A reliable and practical app for mobile devices was developed to detect driver drowsiness. It consisted of two main components: a Haar cascade classifier, provided by a computer vision framework called OpenCV, for face/eye detection; and a dedicated JAVA software code for image processing that was applied over a masked region circumscribing the eye. A binary threshold was performed over the masked region to provide a quantitative measure of the number of white pixels in the sclera, which represented the state of eye opening. A continuously low white-pixel count would indicate drowsiness, thereby triggering an alarm to alert the driver. This system was successfully implemented on: (1) a static face image, (2) two subjects under laboratory conditions, and (3) a subject in a vehicle environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
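
    A hedged sketch of the pipeline the abstract describes (Haar cascade eye detection followed by a binary threshold and a white-pixel count over the eye region), assuming the standard OpenCV eye cascade is available; the threshold value and function names are illustrative, not the authors' calibrated implementation.

        import cv2

        # Haar cascade shipped with OpenCV; its availability at this path is an assumption.
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def sclera_white_pixel_counts(frame_bgr, thresh=200):
            """Return a white-pixel count for each detected eye region.

            A persistently low count across frames would indicate a closed eye
            (drowsiness) in the scheme described above; the threshold of 200 is
            illustrative, not the authors' calibrated value.
            """
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            counts = []
            for (x, y, w, h) in eyes:
                roi = gray[y:y + h, x:x + w]
                # Binary threshold: bright scleral pixels become white (255).
                _, binary = cv2.threshold(roi, thresh, 255, cv2.THRESH_BINARY)
                counts.append(int(cv2.countNonZero(binary)))
            return counts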

  5. The structure and function of the macula in patients with advanced retinitis pigmentosa.

    PubMed

    Vámos, Rita; Tátrai, Erika; Németh, János; Holder, Graham E; DeBuc, Delia Cabrera; Somfai, Gábor Márk

    2011-10-28

    To assess the structure and function of the macula in advanced retinitis pigmentosa (RP). Twenty-nine eyes of 22 patients with RP were compared against 17 control eyes. Time-domain optical coherence tomography (OCT) data were processed using OCTRIMA (optical coherence tomography retinal image analysis) as a means of quantifying commercial OCT system images. The thickness of the retinal nerve fiber layer (RNFL), ganglion cell layer and inner plexiform layer complex (GCL+IPL), inner nuclear layer and outer plexiform layer complex (INL+OPL), and the outer nuclear layer (ONL) were measured. Multifocal electroretinography (mfERG) was performed; two groups were formed based on the mfERG findings. Fourteen eyes had no detectable central retinal function (NCRF) on mfERG; detectable but abnormal retinal function (DRF) was present in the mfERG of the other 15 eyes. The thickness of the ONL in the central macular region was significantly less in the NCRF eyes compared with that in both DRF eyes and controls. The ONL was significantly thinner in the pericentral region in both patient groups compared with that in controls, whereas the thickness of the GCL+IPL and INL+OPL was significantly decreased only in the NCRF eyes. The RNFL in the peripheral region was significantly thicker, whereas the thickness of the GCL+IPL and ONL was significantly thinner in both patient groups compared with that in controls. The results are consistent with degeneration of the outer retina preceding inner retinal changes in RP. OCT image segmentation enables objective evaluation of retinal structural changes in RP, with potential use in the planning of therapeutic interventions and conceivably as an outcome measure.

  6. The Structure and Function of the Macula in Patients with Advanced Retinitis Pigmentosa

    PubMed Central

    Vámos, Rita; Tátrai, Erika; Németh, János; Holder, Graham E.; DeBuc, Delia Cabrera

    2011-01-01

    Purpose. To assess the structure and function of the macula in advanced retinitis pigmentosa (RP). Methods. Twenty-nine eyes of 22 patients with RP were compared against 17 control eyes. Time-domain optical coherence tomography (OCT) data were processed using OCTRIMA (optical coherence tomography retinal image analysis) as a means of quantifying commercial OCT system images. The thickness of the retinal nerve fiber layer (RNFL), ganglion cell layer and inner plexiform layer complex (GCL+IPL), inner nuclear layer and outer plexiform layer complex (INL+OPL), and the outer nuclear layer (ONL) were measured. Multifocal electroretinography (mfERG) was performed; two groups were formed based on the mfERG findings. Fourteen eyes had no detectable central retinal function (NCRF) on mfERG; detectable but abnormal retinal function (DRF) was present in the mfERG of the other 15 eyes. Results. The thickness of the ONL in the central macular region was significantly less in the NCRF eyes compared with that in both DRF eyes and controls. The ONL was significantly thinner in the pericentral region in both patient groups compared with that in controls, whereas the thickness of the GCL+IPL and INL+OPL was significantly decreased only in the NCRF eyes. The RNFL in the peripheral region was significantly thicker, whereas the thickness of the GCL+IPL and ONL was significantly thinner in both patient groups compared with that in controls. Conclusions. The results are consistent with degeneration of the outer retina preceding inner retinal changes in RP. OCT image segmentation enables objective evaluation of retinal structural changes in RP, with potential use in the planning of therapeutic interventions and conceivably as an outcome measure. PMID:21948552

  7. Remote gaze tracking system for 3D environments.

    PubMed

    Congcong Liu; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.

  8. [Electronic Device for Retinal and Iris Imaging].

    PubMed

    Drahanský, M; Kolář, R; Mňuk, T

    This paper describes the design and construction of a new device for automatic capture of eye retina and iris images. The device has two possible uses - either for biometric purposes (recognition of persons on the basis of their eye characteristics) or for medical purposes as a supporting diagnostic device. Key words: eye retina, eye iris, device, acquisition, image.

  9. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor.

    PubMed

    Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung

    2017-06-30

    The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
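
    To make the approach concrete, here is a minimal deep residual CNN for binary open/closed classification, written in PyTorch; the layer counts, channel widths and 64x64 grayscale input are assumptions for illustration and do not reproduce the paper's network.

        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            """3x3 conv -> BN -> ReLU -> 3x3 conv -> BN with identity shortcut."""
            def __init__(self, channels):
                super().__init__()
                self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
                self.bn1 = nn.BatchNorm2d(channels)
                self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
                self.bn2 = nn.BatchNorm2d(channels)
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x):
                out = self.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                return self.relu(out + x)

        class EyeStateNet(nn.Module):
            """Tiny residual CNN for open/closed eye patches (assumed 64x64 grayscale)."""
            def __init__(self):
                super().__init__()
                self.stem = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1, bias=False),
                    nn.BatchNorm2d(32), nn.ReLU(inplace=True))
                self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
                self.head = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

            def forward(self, x):
                return self.head(self.blocks(self.stem(x)))

        # Usage: logits = EyeStateNet()(torch.randn(8, 1, 64, 64))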

  10. The evolution of eyes and visually guided behaviour

    PubMed Central

    Nilsson, Dan-Eric

    2009-01-01

    The morphology and molecular mechanisms of animal photoreceptor cells and eyes reveal a complex pattern of duplications and co-option of genetic modules, leading to a number of different light-sensitive systems that share many components, in which clear-cut homologies are rare. On the basis of molecular and morphological findings, I discuss the functional requirements for vision and how these have constrained the evolution of eyes. The fact that natural selection on eyes acts through the consequences of visually guided behaviour leads to a concept of task-punctuated evolution, where sensory systems evolve by a sequential acquisition of sensory tasks. I identify four key innovations that, one after the other, paved the way for the evolution of efficient eyes. These innovations are (i) efficient photopigments, (ii) directionality through screening pigment, (iii) photoreceptor membrane folding, and (iv) focusing optics. A corresponding evolutionary sequence is suggested, starting at non-directional monitoring of ambient luminance and leading to comparisons of luminances within a scene, first by a scanning mode and later by parallel spatial channels in imaging eyes. PMID:19720648

  11. Multispectral imaging of the ocular fundus using light emitting diode illumination

    NASA Astrophysics Data System (ADS)

    Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  12. Multispectral imaging of the ocular fundus using light emitting diode illumination.

    PubMed

    Everdell, N L; Styles, I B; Calcagni, A; Gibson, J; Hebden, J; Claridge, E

    2010-09-01

    We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.

  13. New NASA Infrared Image of Irma Shows an Angry Eye

    NASA Image and Video Library

    2017-09-05

    Hurricane Irma is the strongest hurricane ever recorded outside the Caribbean Sea and Gulf of Mexico. These two images from the Atmospheric Infrared Sounder (AIRS) instrument aboard NASA's Aqua satellite show what Hurricane Irma looked like when Aqua passed overhead just before 1 p.m. local time (10 a.m. PDT) on Sept. 5, 2017. Forecasts at the National Hurricane Center have Irma passing near the major islands to its west before turning northward near Florida this weekend. The first image (top) is an infrared snapshot from AIRS (see Figure 1 for larger image). In orange and red areas, the ocean surface shines through, while blue and purple areas represent cold, high clouds that obscure what lies below. Typical of well-developed hurricanes, Irma is nearly circular with a well-defined eye at its center. The eye is about 25 miles (40 kilometers) in diameter. Careful scrutiny shows a red pixel in the center of the eye, which means that AIRS achieved a bulls-eye with one of its "looks" and was able to see to the ocean between the dense clouds in the eye wall. The second image (bottom) shows the view through AIRS' microwave-colored "lenses" (see Figure 2 for larger image). Here the ocean surface looks yellow, while green represents various degrees of cloudiness. Blue shows areas where it is raining heavily. The eye is not apparent in this image because the "pixel size" of the microwave sounder, about 30 miles (50 kilometers), is larger than the eye and therefore cannot "thread the needle." The infrared sounder, on the other hand, has a pixel size of only 10 miles (16.5 kilometers) and can distinguish the small eye. https://photojournal.jpl.nasa.gov/catalog/PIA21941

  14. Anterior segment photography in pediatric eyes using the Lytro light field handheld noncontact camera.

    PubMed

    Marcus, Inna; Tung, Irene T; Dosunmu, Eniolami O; Thiamthat, Warakorn; Freedman, Sharon F

    2013-12-01

    To compare anterior segment findings identified in young children using digital photographic images from the Lytro light field camera to those observed clinically. This was a prospective study of children <9 years of age with an anterior segment abnormality. Clinically observed anterior segment examination findings for each child were recorded and several digital images of the anterior segment of each eye captured with the Lytro camera. The images were later reviewed by a masked examiner. Sensitivity of abnormal examination findings on Lytro imaging was calculated and compared to the clinical examination as the gold standard. A total of 157 eyes of 80 children (mean age, 4.4 years; range, 0.1-8.9) were included. Clinical examination revealed 206 anterior segment abnormalities altogether: lids/lashes (n = 21 eyes), conjunctiva/sclera (n = 28 eyes), cornea (n = 71 eyes), anterior chamber (n = 14 eyes), iris (n = 43 eyes), and lens (n = 29 eyes). Review of Lytro photographs of eyes with clinically diagnosed anterior segment abnormality correctly identified 133 of 206 (65%) of all abnormalities. Additionally, 185 abnormalities in 50 children were documented at examination under anesthesia. The Lytro camera was able to document most abnormal anterior segment findings in un-sedated young children. Its unique ability to allow focus change after image capture is a significant improvement on prior technology. Copyright © 2013 American Association for Pediatric Ophthalmology and Strabismus. Published by Mosby, Inc. All rights reserved.

  15. Pathologic Myopia.

    PubMed

    Ohno-Matsui, Kyoko

    Pathologic myopia (PM) is the only myopia that causes the loss of best-corrected visual acuity. The main reason for best-corrected visual acuity loss is complications specific to PM, such as myopic maculopathy, myopic traction maculopathy, and myopic optic neuropathy (or glaucoma). The meta-analyses of the PM study group (META-PM study) developed a classification system for myopic maculopathy. On the basis of this study, PM has been defined as eyes having atrophic changes equal to or more severe than diffuse atrophy. Posterior staphyloma and eye deformity are important causes of developing vision-threatening complications. Posterior staphyloma is unique to PM, except for inferior staphyloma due to tilted disc syndrome. It is defined as an outpouching of the wall of the eye that has a radius of curvature less than the surrounding curvature of the wall of the eye. The mechanical load onto the region important for central vision (optic nerve and macula) is not comparable between eyes with and without posterior staphyloma. Three-dimensional magnetic resonance imaging is a powerful tool to analyze the entire shape of the eye. When ultra-widefield optical coherence tomography becomes available, it is expected to be a new tool that will surpass 3-dimensional magnetic resonance imaging. In the future, preventive therapies targeting staphyloma and eye deformity are expected to be applied before vision-threatening complications develop and it becomes too late for patients.

  16. Iris recognition via plenoptic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.

    Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.
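
    One way to realize the "refocus and stitch" idea is a per-pixel focus stack over images refocused at several depths; the sketch below assumes the refocused slices have already been rendered from the plenoptic capture and uses local Laplacian energy as the sharpness measure, which is a generic choice rather than the method documented by the authors.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def focus_stack(refocused):
            """Fuse grayscale images refocused at different depths into one image.

            refocused: list of 2-D float arrays rendered from a single plenoptic
            capture. Each output pixel is taken from the depth slice with the
            highest local sharpness (smoothed squared Laplacian), approximating
            an all-in-focus iris image for off-angle gaze.
            """
            stack = np.stack(refocused)                        # (depths, H, W)
            sharpness = np.stack(
                [uniform_filter(laplace(img) ** 2, size=9) for img in refocused])
            best = np.argmax(sharpness, axis=0)                # per-pixel depth index
            return np.take_along_axis(stack, best[None], axis=0)[0]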

  17. Integrated Photoacoustic Ophthalmoscopy and Spectral-domain Optical Coherence Tomography

    PubMed Central

    Jiao, Shuliang; Zhang, Hao F.

    2013-01-01

    Both the clinical diagnosis and fundamental investigation of major ocular diseases greatly benefit from various non-invasive ophthalmic imaging technologies. Existing retinal imaging modalities, such as fundus photography [1], confocal scanning laser ophthalmoscopy (cSLO) [2], and optical coherence tomography (OCT) [3], have made significant contributions to monitoring disease onset and progression and to developing new therapeutic strategies. However, they predominantly rely on the back-reflected photons from the retina. As a consequence, the optical absorption properties of the retina, which are usually strongly associated with retinal pathophysiology status, are inaccessible to the traditional imaging technologies. Photoacoustic ophthalmoscopy (PAOM) is an emerging retinal imaging modality that permits the detection of optical absorption contrasts in the eye with high sensitivity [4-7]. In PAOM, nanosecond laser pulses are delivered through the pupil and scanned across the posterior eye to induce photoacoustic (PA) signals, which are detected by an unfocused ultrasonic transducer attached to the eyelid. Because of the strong optical absorption of hemoglobin and melanin, PAOM is capable of non-invasively imaging the retinal and choroidal vasculatures, and the retinal pigment epithelium (RPE) melanin, at high contrast [6,7]. More importantly, based on well-developed spectroscopic photoacoustic imaging [5,8], PAOM has the potential to map the hemoglobin oxygen saturation in retinal vessels, which can be critical in studying the physiology and pathology of several blinding diseases [9] such as diabetic retinopathy and neovascular age-related macular degeneration. Moreover, being the only existing optical-absorption-based ophthalmic imaging modality, PAOM can be integrated with well-established clinical ophthalmic imaging techniques to achieve more comprehensive anatomic and functional evaluations of the eye based on multiple optical contrasts [6,10]. In this work, we integrate PAOM and spectral-domain OCT (SD-OCT) for simultaneous in vivo retinal imaging of the rat, where both optical absorption and scattering properties of the retina are revealed. The system configuration, system alignment and image acquisition are presented. PMID:23354081

  18. Wavefront aberrations and retinal image quality in different lenticular opacity types and densities.

    PubMed

    Wu, Cheng-Zhe; Jin, Hua; Shen, Zhen-Nv; Li, Ying-Jun; Cui, Xun

    2017-11-10

    To investigate wavefront aberrations in the entire eye and in the internal optics (lens), and retinal image quality, according to different lenticular opacity types and densities. Forty-one eyes with nuclear cataract, 33 eyes with cortical cataract, and 29 eyes with posterior subcapsular cataract were examined. In each group, wavefront aberrations in the entire eye and in the internal optics and retinal image quality were measured using a ray-tracing aberrometer. Eyes with cortical cataracts showed significantly higher coma-like aberrations compared to the other two groups in both entire-eye and internal-optics aberrations (P = 0.012 and P = 0.007, respectively). Eyes with nuclear cataract had lower spherical-like aberrations than the other two groups in both entire-eye and internal-optics aberrations (P < 0.001 and P < 0.001, respectively). In the nuclear cataract group, nuclear lens density was negatively correlated with internal spherical aberrations (r = -0.527, P = 0.005). Wavefront technology is useful for objective and quantitative analysis of retinal image quality deterioration in eyes with different early lenticular opacity types and densities. Understanding the wavefront optical properties of different crystalline lens opacities may help ophthalmic surgeons determine the optimal time to perform cataract surgery.

  19. Study of Light Scattering in the Human Eye

    NASA Astrophysics Data System (ADS)

    Perez, I. Kelly; Bruce, N. C.; Valdos, L. R. Berriel

    2008-04-01

    In this paper we present a numerical model of the human eye to be used in studies of the scattering of light in different components of the eye's optical system. Different parts of the eye can produce scattering for different reasons: age, illness, or injury. For example, cataracts can appear in the human lens, or injuries or fungi can affect the cornea. The aim of the study is to relate the backscattered light, which is what doctors measure or detect, to the forward-scattered light, which is what affects the patient's vision. We present the model to be used, the ray-trace procedure, and some preliminary results for the image on the retina without scattering.

  20. The Eye Catching Property of Digital-Signage with Scent and a Scent-Emitting Video Display System

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Otake, Syunya

    In this paper, an effective method of drawing a passer-by's glance toward digital signage by emitting a scent is described. The simulation experiment was done using an immersive VR system because an experiment in an actual passageway would have been subject to many restrictions. In order to investigate the eye-catching property of the digital signage, the passer-by's eye movements were analyzed. The experiment clarified that digital signage accompanied by a scent attracted attention and left a strong impression in memory. Next, a scent-emitting video display system applicable to digital signage is described. To this end, a scent-emitting device must be developed that is able to quickly change the scents it releases and present them from a distance (by a non-contact method), thus maintaining the relationship between the scent and the image. We propose a new method in which a device that can release pressurized gases is placed behind a display screen filled with tiny pores. Scents are ejected from this device and travel through the pores to the front side of the screen. Excellent scent delivery was obtained because the distance to the user is short and the scent is presented from the front. We also present a method for inducing viewer reactions using on-screen images, thereby enabling scent release to coincide precisely with viewer inhalations. We anticipate that the simultaneous presentation of scents and video images will deepen viewers' comprehension of these images.

  1. Simulated disparity and peripheral blur interact during binocular fusion.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2014-07-17

    We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO.
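
    As an illustration of how depth-dependent dioptric blur can be rendered relative to the fixated depth, the sketch below quantizes a per-pixel depth map (in diopters) into defocus levels around the gaze point and blends Gaussian-blurred layers; Gaussian blur stands in for the light-field refocusing actually used in the study, and the quantization step and blur-per-diopter scaling are assumptions.

        import cv2
        import numpy as np

        def gaze_contingent_blur(img, depth_diopters, gaze_xy, blur_px_per_diopter=3.0):
            """Blur each pixel according to its dioptric distance from the fixated depth.

            img: 8-bit image; depth_diopters: per-pixel depth map in diopters;
            gaze_xy: (x, y) fixation point from the eye tracker. Depth is quantized
            into 0.5-diopter defocus levels, each rendered with a Gaussian whose
            sigma grows with defocus (a simplification of true dioptric blur).
            """
            gx, gy = gaze_xy
            defocus = np.abs(depth_diopters - depth_diopters[gy, gx])
            levels = np.minimum((defocus * 2).astype(int), 6)   # half-diopter steps, capped
            out = img.copy()
            for lvl in range(1, levels.max() + 1):
                sigma = (lvl / 2.0) * blur_px_per_diopter
                blurred = cv2.GaussianBlur(img, (0, 0), sigma)
                out[levels == lvl] = blurred[levels == lvl]
            return out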

  2. Simulated disparity and peripheral blur interact during binocular fusion

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2014-01-01

    We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. PMID:25034260

  3. Replication and characterization of the compound eye of a fruit fly for imaging purpose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hefu; University of Chinese Academy of Sciences, Beijing 10039; Gong, Xianwei

    In this work, we report the replication and characterization of the compound eye of a fruit fly for imaging purposes. In the replication, a soft lithography method was employed to replicate the compound eye of a fruit fly into a UV-curable polymer. The method was demonstrated to be effective and the compound eye was replicated into the polymer (NOA78), where each ommatidium has a diameter of about 30 μm and a sag height of about 7 μm. To characterize its optical properties, the point spread function of the compound eye was tested and an NA of 0.386 was obtained for the replicated polymeric ommatidium. Compared with the NA of a real fruit fly ommatidium, which was measured to be about 0.212, the replicated polymeric ommatidium has a much larger NA because the refractive index of NOA78 is much higher than that of the material forming the real fruit fly ommatidium. Furthermore, the replicated compound eye was used to image a photomask patterned with grating structures to test its imaging property. It is shown that a grating with a line width of 20 μm can be clearly imaged. The image of the grating formed by the replicated compound eye was shrunk by about 10 times, and therefore a line width of about 2.2 μm in the image plane was obtained, which is close to the diffraction-limited resolution calculated from the measured NA. In summary, the replication method demonstrated is effective and the replicated compound eye has great potential for optical imaging.

  4. Hyperspectral interventional imaging for enhanced tissue visualization and discrimination combining band selection methods.

    PubMed

    Nouri, Dorra; Lucas, Yves; Treuillet, Sylvie

    2016-12-01

    Hyperspectral imaging is an emerging technology recently introduced in medical applications inasmuch as it provides a powerful tool for noninvasive tissue characterization. In this context, a new system was designed to be easily integrated in the operating room in order to detect anatomical tissues hardly noticeable by the surgeon's naked eye. Our LCTF-based spectral imaging system is operative over the visible, near- and middle-infrared spectral ranges (400-1700 nm). It is dedicated to enhancing the visualization of critical biological tissues such as the ureter and the facial nerve. We aim to find the three most relevant bands to create an RGB image to display during the intervention with maximal contrast between the target tissue and its surroundings. A comparative study is carried out between band selection methods and band transformation methods. Combined band selection methods are proposed. All methods are compared using different evaluation criteria. Experimental results show that the proposed combined band selection methods provide the best performance, with rich information, high tissue separability and short computational time. These methods yield a significant discrimination between biological tissues. We developed a hyperspectral imaging system in order to enhance the visualization of certain biological tissues. The proposed methods provided an acceptable trade-off between the evaluation criteria, especially in the SWIR spectral band, which outperforms the naked eye's capacities.
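
    A simple band selection baseline of the kind compared in such studies is to rank bands by a class-separability criterion between labeled target and background pixels; the Fisher-ratio criterion below is one common choice and is offered as an assumption, not as the specific combined methods proposed in the paper.

        import numpy as np

        def fisher_ratio(target, background):
            """Per-band Fisher discriminant ratio between two pixel populations."""
            m1, m2 = target.mean(axis=0), background.mean(axis=0)
            v1, v2 = target.var(axis=0), background.var(axis=0)
            return (m1 - m2) ** 2 / (v1 + v2 + 1e-9)

        def best_three_bands(cube, target_mask, background_mask):
            """Pick the three spectral bands that best separate target from background.

            cube: (H, W, B) reflectance cube; the masks are boolean (H, W) arrays
            marking pixels of the target tissue (e.g. ureter) and of the surrounding
            tissue. The three top-scoring bands would then be mapped to the R, G and
            B channels of the displayed image.
            """
            scores = fisher_ratio(cube[target_mask], cube[background_mask])
            return np.argsort(scores)[-3:][::-1]   # band indices, best first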

  5. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing of the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of a model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
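
    The refraction correction can be illustrated with the vector form of Snell's law applied at a single refracting interface, converting the optical path length measured along the A-scan into a geometric position along the refracted ray; this single-interface, single-index sketch is a simplification of the paper's full 3D algorithm, and all names are illustrative.

        import numpy as np

        def refract(direction, normal, n1, n2):
            """Refract a ray direction at an interface using the vector form of Snell's law.

            normal points against the incident ray. Returns the refracted unit
            direction, or None in the case of total internal reflection.
            """
            d = direction / np.linalg.norm(direction)
            n = normal / np.linalg.norm(normal)
            cos_i = -np.dot(n, d)
            eta = n1 / n2
            k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
            if k < 0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(k)) * n

        def correct_point(surface_pt, normal, incident_dir, optical_depth, n1, n2):
            """Place an OCT scatterer behind a refracting surface.

            optical_depth: path length measured along the A-scan in air-equivalent
            units; dividing by n2 converts it to geometric depth along the refracted
            ray. A whole-volume correction would repeat this at every interface.
            """
            t = refract(incident_dir, normal, n1, n2)
            return surface_pt + t * (optical_depth / n2)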

  6. Multimodal Imaging of the Normal Eye.

    PubMed

    Kawali, Ankush; Pichi, Francesco; Avadhani, Kavitha; Invernizzi, Alessandro; Hashimoto, Yuki; Mahendradas, Padmamalini

    2017-10-01

    Multimodal imaging is the concept of "bundling" images obtained from various imaging modalities, viz., fundus photograph, fundus autofluorescence imaging, infrared (IR) imaging, simultaneous fluorescein and indocyanine angiography, optical coherence tomography (OCT), and, more recently, OCT angiography. Each modality has its pros and cons as well as its limitations. Combination of multiple imaging techniques will overcome their individual weaknesses and give a comprehensive picture. Such approach helps in accurate localization of a lesion and understanding the pathology in posterior segment. It is important to know imaging of normal eye before one starts evaluating pathology. This article describes multimodal imaging modalities in detail and discusses healthy eye features as seen on various imaging modalities mentioned above.

  7. Noninvasive photoacoustic detecting intraocular foreign bodies with an annular transducer array.

    PubMed

    Yang, Diwu; Zeng, Lvming; Pan, Changning; Zhao, Xuehui; Ji, Xuanrong

    2013-01-14

    We present a fast photoacoustic imaging system based on an annular transducer array for the detection of intraocular foreign bodies. An eight-channel data acquisition system is used to capture the photoacoustic signals using multiplexing, and the total time for data acquisition and transfer is within 3 s. A limited-view filtered back projection algorithm is used to reconstruct the photoacoustic images. Experimental models of intraocular metal and glass foreign bodies were constructed in ex vivo pig eyes, and clear photoacoustic images of the intraocular foreign bodies were obtained. The experimental results demonstrate that the photoacoustic imaging system holds potential for clinical detection of intraocular foreign bodies.
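
    For orientation, a plain delay-and-sum back projection onto a 2D grid is sketched below; it omits the filtering step and limited-view weighting of the actual filtered back projection algorithm, so it should be read as an assumption-laden outline of the reconstruction geometry rather than the authors' method.

        import numpy as np

        def backproject(signals, sensor_xy, grid_x, grid_y, fs, c=1500.0):
            """Delay-and-sum back projection of photoacoustic A-lines onto a 2-D grid.

            signals: (n_sensors, n_samples) detected pressure traces;
            sensor_xy: (n_sensors, 2) transducer positions on the annular array (m);
            grid_x, grid_y: 1-D image-grid coordinates (m); fs: sampling rate (Hz);
            c: assumed speed of sound (m/s).
            """
            gx, gy = np.meshgrid(grid_x, grid_y)          # (H, W)
            image = np.zeros_like(gx, dtype=float)
            n_samples = signals.shape[1]
            for trace, (sx, sy) in zip(signals, sensor_xy):
                dist = np.hypot(gx - sx, gy - sy)         # grid-to-sensor distances
                idx = np.clip((dist / c * fs).astype(int), 0, n_samples - 1)
                image += trace[idx]                       # sum the time-delayed samples
            return image / len(sensor_xy)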

  8. Adaptive optics retinal imaging: emerging clinical applications.

    PubMed

    Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

    2010-12-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that obviate cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging.

  9. Full ocular biometry through dual-depth whole-eye optical coherence tomography

    PubMed Central

    Kim, Hyung-Jin; Kim, Minji; Hyeon, Min Gyu; Choi, Youngwoon; Kim, Beop-Min

    2018-01-01

    We propose a new method of determining the optical axis (OA), pupillary axis (PA), and visual axis (VA) of the human eye by using dual-depth whole-eye optical coherence tomography (OCT). These axes, as well as the angles “α” between the OA and VA and “κ” between PA and VA, are important in many ophthalmologic applications, especially in refractive surgery. Whole-eye images are reconstructed based on simultaneously acquired images of the anterior segment and retina. The light from a light source is split into two orthogonal polarization components for imaging the anterior segment and retina, respectively. The OA and PA are identified based on their geometric definitions by using the anterior segment image only, while the VA is detected through accurate correlation between the two images. The feasibility of our approach was tested using a model eye and human subjects. PMID:29552378

  10. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

    Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.
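
    The abstract does not detail the image processing algorithm, so the sketch below shows only a generic per-frame core that a high-speed tracker might use: dark-pupil thresholding and a centroid estimate. The threshold value and the suggestion of a small tracking window are assumptions, not the authors' FPGA implementation.

        import numpy as np

        def pupil_center(frame_gray, dark_thresh=40):
            """Estimate the pupil center as the centroid of dark pixels in an IR eye image.

            frame_gray: 2-D uint8 array. Returns (x, y) in pixels, or None if no dark
            region is found. A real kHz-rate tracker would restrict this computation
            to a small window around the previous estimate to sustain its frame rate.
            """
            ys, xs = np.nonzero(frame_gray < dark_thresh)
            if xs.size == 0:
                return None
            return float(xs.mean()), float(ys.mean())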

  11. Low Vision Enhancement System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    NASA's Technology Transfer Office at Stennis Space Center worked with the Johns Hopkins Wilmer Eye Institute in Baltimore, Md., to incorporate software originally developed by NASA to process satellite images into the Low Vision Enhancement System (LVES). The LVES, referred to as 'ELVIS' by its users, is a portable image processing system that could make it possible to improve a person's vision by enhancing and altering images to compensate for impaired eyesight. The system consists of two orientation cameras, a zoom camera, and a video projection system. The headset and hand-held control weigh about two pounds each. Pictured is Jacob Webb, the first Mississippian to use the LVES.

  12. Images of photoreceptors in living primate eyes using adaptive optics two-photon ophthalmoscopy

    PubMed Central

    Hunter, Jennifer J.; Masella, Benjamin; Dubra, Alfredo; Sharma, Robin; Yin, Lu; Merigan, William H.; Palczewska, Grazyna; Palczewski, Krzysztof; Williams, David R.

    2011-01-01

    In vivo two-photon imaging through the pupil of the primate eye has the potential to become a useful tool for functional imaging of the retina. Two-photon excited fluorescence images of the macaque cone mosaic were obtained using a fluorescence adaptive optics scanning laser ophthalmoscope, overcoming the challenges of a low numerical aperture, imperfect optics of the eye, high required light levels, and eye motion. Although the specific fluorophores are as yet unknown, strong in vivo intrinsic fluorescence allowed images of the cone mosaic. Imaging intact ex vivo retina revealed that the strongest two-photon excited fluorescence signal comes from the cone inner segments. The fluorescence response increased following light stimulation, which could provide a functional measure of the effects of light on photoreceptors. PMID:21326644

  13. A wearable infrared video pupillography with multi-stimulation of consistent illumination for binocular pupil response

    NASA Astrophysics Data System (ADS)

    Mang, Ou-Yang; Ko, Mei Lan; Tsai, Yi-Chun; Chiou, Jin-Chern; Huang, Ting-Wei

    2016-03-01

    The pupil response to light can reflect various diseases related to physiological health. Pupillary abnormalities may be caused by autonomic neuropathy, glaucoma, diabetes, genetic diseases, and high myopia. In the early stage of neuropathy, it is often asymptomatic and difficult for ophthalmologists to detect. In addition, the location of the injured nerve can lead to an unsynchronized pupil response between the two eyes. In our study, we designed a pupillometer to measure the binocular pupil response simultaneously. It uses LEDs of different wavelengths (white, red, green, and blue) to stimulate the pupil and records the process. The pupillometer therefore mainly contains two systems. One is the image acquisition system, which uses two camera modules sharing the same external trigger signal to capture images of both pupils simultaneously. The other is the illumination system, which uses boost-converter and LED-driver ICs to supply a constant current to the LEDs, maintaining consistent luminance in each experiment and reducing experimental error. Furthermore, four infrared LEDs are arranged near the stimulating LEDs to illuminate the eyes and increase image contrast for image processing. With this design, we successfully implemented synchronized image acquisition at a sampling rate of 30 fps and a stable illumination system for precise measurements.
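
    The pupillometer above relies on a shared hardware trigger for exact synchronization of the two cameras; the sketch below only approximates that in software with OpenCV by latching both sensors back-to-back before decoding. The camera indices and frame count are placeholder assumptions, and this is not the cited device's acquisition code.

        import cv2

        # Open both eye cameras (device indices are placeholders).
        cam_left = cv2.VideoCapture(0)
        cam_right = cv2.VideoCapture(1)

        frames = []
        while len(frames) < 30:                      # roughly one second at 30 fps
            # grab() latches a frame on each camera back-to-back, which keeps
            # the two exposures close in time; retrieve() then decodes both.
            if not (cam_left.grab() and cam_right.grab()):
                break
            ok_l, left = cam_left.retrieve()
            ok_r, right = cam_right.retrieve()
            if ok_l and ok_r:
                frames.append((left, right))

        cam_left.release()
        cam_right.release()
        print(f"captured {len(frames)} frame pairs")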

  14. NASA Sees Hurricane Arthur's Cloud-Covered Eye

    NASA Image and Video Library

    2014-07-03

    This visible image of Tropical Storm Arthur was taken by the MODIS instrument aboard NASA's Aqua satellite on July 2 at 18:50 UTC (2:50 p.m. EDT). A cloud-covered eye is clearly visible. Credit: NASA Goddard MODIS Rapid Response Team Read more: www.nasa.gov/content/goddard/arthur-atlantic/ NASA image use policy.

  15. Automatic diagnostic system for measuring ocular refractive errors

    NASA Astrophysics Data System (ADS)

    Ventura, Liliane; Chiaradia, Caio; de Sousa, Sidney J. F.; de Castro, Jarbas C.

    1996-05-01

    Ocular refractive errors (myopia, hyperopia, and astigmatism) are automatically and objectively determined by projecting a light target onto the retina using an infrared (850 nm) diode laser. The light vergence which emerges from the eye (light scattered from the retina) is evaluated in order to determine the corresponding ametropia. The system basically consists of projecting a target (ring) onto the retina and analyzing the scattered light with a CCD camera. The light scattered by the eye is divided into six portions (3 meridians) by using a mask and a set of six prisms. The distance between the two images provided by each of the meridians yields the refractive error of that meridian. Hence, it is possible to determine the refractive error at three different meridians, which gives the exact solution for the eye's refractive error (spherical and cylindrical components and the axis of the astigmatism). The computational basis used for the image analysis is a heuristic search, which provides satisfactory calculation times for our purposes. The peculiar shape of the target, a ring, provides a wider range of measurement and also saves parts of the retina from unnecessary laser irradiation. Measurements were done in artificial and in vivo eyes (using cycloplegics), and the results were in good agreement with retinoscopic measurements.
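
    Once the three meridian powers are known, the sphere, cylinder, and axis follow from a standard power-vector fit; the sketch below shows only that last step, assuming the usual model P(θ) = S + C·sin²(θ − α). It is not the heuristic-search image analysis described above, and the sampled meridian angles and example prescription are assumptions for illustration.

        import numpy as np

        def sphere_cyl_axis(meridians_deg, powers_dpt):
            """Recover sphere S, cylinder C (minus convention) and axis from
            powers measured along three (or more) meridians, using the model
            P(theta) = M + J0*cos(2*theta) + J45*sin(2*theta)."""
            th = np.radians(np.asarray(meridians_deg, dtype=float))
            A = np.column_stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)])
            M, J0, J45 = np.linalg.lstsq(A, np.asarray(powers_dpt, float), rcond=None)[0]
            half_cyl = np.hypot(J0, J45)
            C = -2.0 * half_cyl                       # minus-cylinder convention
            S = M - C / 2.0
            axis = np.degrees(0.5 * np.arctan2(J45, J0)) % 180.0
            return S, C, axis

        # Example: a -2.00 DS / -1.00 DC x 30 eye sampled at 0, 60 and 120 degrees.
        def true_power(theta_deg, S=-2.0, C=-1.0, axis=30.0):
            t = np.radians(theta_deg - axis)
            return S + C * np.sin(t) ** 2

        samples = [0.0, 60.0, 120.0]
        print(sphere_cyl_axis(samples, [true_power(t) for t in samples]))
        # -> approximately (-2.0, -1.0, 30.0)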

  16. Effect of a contact lens on mouse retinal in vivo imaging: Effective focal length changes and monochromatic aberrations.

    PubMed

    Zhang, Pengfei; Mocci, Jacopo; Wahl, Daniel J; Meleppat, Ratheesh Kumar; Manna, Suman K; Quintavalla, Martino; Muradore, Riccardo; Sarunic, Marinko V; Bonora, Stefano; Pugh, Edward N; Zawadzki, Robert J

    2018-03-28

    For in vivo mouse retinal imaging, especially with Adaptive Optics instruments, application of a contact lens is desirable, as it allows maintenance of cornea hydration and helps to prevent cataract formation during lengthy imaging sessions. However, since the refractive elements of the eye (cornea and lens) serve as the objective for most in vivo retinal imaging systems, the use of a contact lens, even with 0 Dpt. refractive power, can alter the system's optical properties. In this investigation we examined the effective focal length change and the aberrations that arise from use of a contact lens. First, focal length changes were simulated with a Zemax mouse eye model. Then ocular aberrations with and without a 0 Dpt. contact lens were measured with a Shack-Hartmann wavefront sensor (SHWS) in a customized AO-SLO system. Total RMS wavefront errors were measured for two groups of mice (14-month, and 2.5-month-old), decomposed into 66 Zernike aberration terms, and compared. These data revealed that vertical coma and spherical aberrations were increased with use of a contact lens in our system. Based on the ocular wavefront data we evaluated the effect of the contact lens on the imaging system performance as a function of the pupil size. Both RMS error and Strehl ratios were quantified for the two groups of mice, with and without contact lenses, and for different input beam sizes. These results provide information for determining optimum pupil size for retinal imaging without adaptive optics, and raise critical issues for design of mouse optical imaging systems that incorporate contact lenses. Copyright © 2018. Published by Elsevier Ltd.
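
    As a reminder of how the quoted quantities relate, the sketch below converts a set of Zernike coefficients into a total RMS wavefront error and an approximate Strehl ratio via the extended Marechal formula. The Noll ordering, micrometre units, 0.84 µm wavelength, and example coefficients are assumptions for illustration; they are not taken from the study.

        import numpy as np

        def rms_and_strehl(zernike_um, wavelength_um=0.84, skip_terms=3):
            """RMS wavefront error and Marechal-approximated Strehl ratio.

            zernike_um : Noll-ordered, RMS-normalized coefficients in micrometres.
            skip_terms : number of leading terms to ignore (piston, tip, tilt).
            """
            c = np.asarray(zernike_um, dtype=float)[skip_terms:]
            rms = np.sqrt(np.sum(c ** 2))                       # total RMS error
            strehl = np.exp(-(2.0 * np.pi * rms / wavelength_um) ** 2)
            return rms, strehl

        # Example: defocus-dominated eye with a little coma and spherical aberration.
        coeffs = [0, 0, 0, 0.05, 0.01, 0.02, 0.015, 0.0, 0.0, 0.0, 0.03]
        rms, strehl = rms_and_strehl(coeffs)
        print(f"RMS = {rms:.3f} um, Strehl ~ {strehl:.2f}")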

  17. Real-time color/shape-based traffic signs acquisition and recognition system

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2013-02-01

    A real-time system is proposed to acquire traffic signs from an automotive fish-eye CMOS camera and provide their automatic recognition on the vehicle network. Differently from the state of the art, color detection is addressed in this work by exploiting the HSI color space, which is robust to lighting changes. Hence, the first stage of the processing system implements fish-eye correction and RGB-to-HSI transformation. After color-based detection, a noise deletion step is implemented and then, for the classification, a template-based correlation method is adopted to identify potential traffic signs of different shapes in the acquired images. Starting from a segmented image, matching with templates of the searched signs is carried out using a distance transform. These templates are organized hierarchically to reduce the number of operations and hence ease real-time processing for several types of traffic signs. Finally, for the recognition of the specific traffic sign, a technique based on the extraction of sign characteristics and thresholding is adopted. Implemented on a DSP platform, the system recognizes traffic signs in less than 150 ms, at a distance of about 15 meters, from 640x480-pixel acquired images. Tests carried out with hundreds of images show a detection and recognition rate of about 93%.
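
    Since the pipeline's first stage is an RGB-to-HSI transformation, one common formulation of that conversion is sketched below. Papers differ in the exact HSI variant they use, so this should be read as a standard definition rather than the implementation used in the cited system.

        import numpy as np

        def rgb_to_hsi(img):
            """Convert an RGB image (uint8 or float in [0, 1]) to HSI.

            Returns H in radians [0, 2*pi), S and I in [0, 1].
            """
            rgb = img.astype(np.float64) / 255.0 if img.dtype == np.uint8 else img.astype(np.float64)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            eps = 1e-8

            intensity = (r + g + b) / 3.0
            saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)

            num = 0.5 * ((r - g) + (r - b))
            den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
            theta = np.arccos(np.clip(num / den, -1.0, 1.0))
            hue = np.where(b <= g, theta, 2.0 * np.pi - theta)

            return np.dstack([hue, saturation, intensity])

        # Example: a pure-red pixel should give hue ~ 0 and full saturation.
        print(rgb_to_hsi(np.array([[[255, 0, 0]]], dtype=np.uint8))[0, 0])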

  18. Adaptive Optics Optical Coherence Tomography in Glaucoma

    PubMed Central

    Dong, Zachary M.; Wollstein, Gadi; Wang, Bo; Schuman, Joel S.

    2016-01-01

    Since the introduction of commercial optical coherence tomography (OCT) systems, the ophthalmic imaging modality has rapidly expanded, and it has since changed the paradigm of visualization of the retina and revolutionized the management and diagnosis of neuro-retinal diseases, including glaucoma. OCT remains a dynamic and evolving imaging modality, growing from time-domain OCT to the improved spectral-domain OCT, adapting novel image analysis and processing methods, and onto the newer swept-source OCT and the implementation of adaptive optics (AO) into OCT. The incorporation of AO into ophthalmic imaging modalities has enhanced OCT by improving image resolution and quality, particularly in the posterior segment of the eye. Although OCT previously captured in vivo cross-sectional images with unparalleled high resolution in the axial direction, monochromatic aberrations of the eye limit transverse or lateral resolution to about 15-20 μm and reduce overall image quality. In pairing AO technology with OCT, it is now possible to obtain diffraction-limited resolution images of the optic nerve head and retina in three dimensions, pushing resolution down to a theoretical 3 μm³. It is now possible to visualize discrete structures within the posterior eye, such as photoreceptors, retinal nerve fiber layer bundles, the lamina cribrosa, and other structures relevant to glaucoma. Despite its limitations and barriers to widespread commercialization, the expanding role of AO in OCT is propelling this technology into clinical trials and onto becoming an invaluable modality in the clinician's arsenal. PMID:27916682

  19. The Role of Teleophthalmology in the Management of Diabetic Retinopathy.

    PubMed

    Salongcay, Recivall P; Silva, Paolo S

    2018-01-01

    The emergence of diabetes as a global epidemic is accompanied by the rise in diabetes‑related retinal complications. Diabetic retinopathy, if left undetected and untreated, can lead to severe visual impairment and affect an individual's productivity and quality of life. Globally, diabetic retinopathy remains one of the leading causes of visual loss in the working‑age population. Teleophthalmology for diabetic retinopathy is an innovative means of retinal evaluation that allows identification of eyes at risk for visual loss, thereby preserving vision and decreasing the overall burden to the health care system. Numerous studies worldwide have found teleophthalmology to be a reliable and cost‑efficient alternative to traditional clinical examinations. It has reduced barriers to access to specialized eye care in both rural and urban communities. In teleophthalmology applications for diabetic retinopathy, it is critical that standardized protocols in image acquisition and evaluation are used to ensure low image ungradable rates and maintain the quality of images taken. Innovative imaging technology such as ultrawide field imaging has the potential to provide significant benefit with integration into teleophthalmology programs. Teleophthalmology programs for diabetic retinopathy rely on a comprehensive and multidisciplinary approach with partnerships across specialties and health care professionals to attain wider acceptability and allow evidence‑based eye care to reach a much broader population. Copyright 2017 Asia-Pacific Academy of Ophthalmology.

  20. Modern Diagnostic Techniques for the Assessment of Ocular Blood Flow in Myopia: Current State of Knowledge.

    PubMed

    Grudzińska, Ewa; Modrzejewska, Monika

    2018-01-01

    Myopia is the most common refractive error and the subject of interest of various studies assessing ocular blood flow. Increasing refractive error and axial elongation of the eye result in the stretching and thinning of the scleral, choroid, and retinal tissues and the decrease in retinal vessel diameter, disturbing ocular blood flow. Local and systemic factors known to change ocular blood flow include glaucoma, medications and fluctuations in intraocular pressure, and metabolic parameters. Techniques and tools assessing ocular blood flow include, among others, laser Doppler flowmetry (LDF), retinal function imager (RFI), laser speckle contrast imaging (LSCI), magnetic resonance imaging (MRI), optical coherence tomography angiography (OCTA), pulsatile ocular blood flowmeter (POBF), fundus pulsation amplitude (FPA), colour Doppler imaging (CDI), and Doppler optical coherence tomography (DOCT). Many researchers consistently reported lower blood flow parameters in myopic eyes regardless of the used diagnostic method. It is unclear whether this is a primary change that causes secondary thinning of ocular tissues or quite the opposite; that is, the mechanical stretching of the eye wall reduces its thickness and causes a secondary lower demand of tissues for oxygen. This paper presents a review of studies assessing ocular blood flow in myopes.

  1. Benefit from NASA

    NASA Image and Video Library

    1985-01-01

    The NASA imaging processing technology, an advanced computer technique to enhance images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.

  2. Optoelectronic retinal prosthesis: system design and performance

    NASA Astrophysics Data System (ADS)

    Loudin, J. D.; Simanovskii, D. M.; Vijayraghavan, K.; Sramek, C. K.; Butterwick, A. F.; Huie, P.; McLean, G. Y.; Palanker, D. V.

    2007-03-01

    The design of high-resolution retinal prostheses presents many unique engineering and biological challenges. Ever smaller electrodes must inject enough charge to stimulate nerve cells, within electrochemically safe voltage limits. Stimulation sites should be placed within an electrode diameter from the target cells to prevent 'blurring' and minimize current. Signals must be delivered wirelessly from an external source to a large number of electrodes, and visual information should, ideally, maintain its natural link to eye movements. Finally, a good system must have a wide range of stimulation currents, external control of image processing and the option of either anodic-first or cathodic-first pulses. This paper discusses these challenges and presents solutions to them for a system based on a photodiode array implant. Video frames are processed and imaged onto the retinal implant by a head-mounted near-to-eye projection system operating at near-infrared wavelengths. Photodiodes convert light into pulsed electric current, with charge injection maximized by applying a common biphasic bias waveform. The resulting prosthesis will provide stimulation with a frame rate of up to 50 Hz in a central 10° visual field, with a full 30° field accessible via eye movements. Pixel sizes are scalable from 100 to 25 µm, corresponding to 640-10 000 pixels on an implant 3 mm in diameter.
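
    A quick back-of-the-envelope check of the quoted pixel counts: tiling a 3 mm circular implant with square pixels gives roughly 700 pixels at a 100 µm pitch and about 11,000 at 25 µm, consistent with the stated 640-10,000 range once packing and peripheral losses are accounted for. The sketch below performs only that area-based arithmetic; it is not taken from the paper.

        import math

        def pixel_count(implant_diameter_um=3000.0, pixel_pitch_um=25.0):
            """Rough number of square pixels of a given pitch that tile a
            circular implant, ignoring packing losses at the rim."""
            implant_area = math.pi * (implant_diameter_um / 2.0) ** 2
            return implant_area / pixel_pitch_um ** 2

        for pitch in (100.0, 25.0):
            print(f"{pitch:5.0f} um pixels -> ~{pixel_count(pixel_pitch_um=pitch):,.0f} pixels")
        # ~707 pixels at 100 um and ~11,310 at 25 um; the quoted 640-10,000
        # range is consistent once packing and peripheral losses are included.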

  3. Iris recognition and what is next? Iris diagnosis: a new challenging topic for machine vision from image acquisition to image interpretation

    NASA Astrophysics Data System (ADS)

    Perner, Petra

    2017-03-01

    Molecular image-based techniques are widely used in medicine to detect specific diseases. Look diagnosis (diagnosis from a patient's appearance) is an important issue, but the analysis of the eye also plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization by an automatic system can be a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on the acquisition and interpretation of iris images. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illness, but also in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis for the diagnosis of illnesses and for the purpose of individual health protection. With this paper, we describe our work towards an automatic iris diagnosis system. We describe the image acquisition and its problems. Different approaches to image acquisition and image preprocessing are explained. We describe the image analysis method for the detection of the iris. The meta-model for image interpretation is given. Based on this model, we show the many tasks for image analysis, which range from image-object feature analysis and spatial image analysis to color image analysis. Our first results for the recognition of the iris are given. We describe how to detect the pupil and unwanted lamp spots. We explain how to recognize orange and blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.

  4. A study of human recognition rates for foveola-sized image patches selected from initial and final fixations on calibrated natural images

    NASA Astrophysics Data System (ADS)

    van der Linde, Ian; Rajashekar, Umesh; Cormack, Lawrence K.; Bovik, Alan C.

    2005-03-01

    Recent years have seen a resurgent interest in eye movements during natural scene viewing. Aspects of eye movements that are driven by low-level image properties are of particular interest due to their applicability to biologically motivated artificial vision and surveillance systems. In this paper, we report an experiment in which we recorded observers' eye movements while they viewed calibrated greyscale images of natural scenes. Immediately after viewing each image, observers were shown a test patch and asked to indicate if they thought it was part of the image they had just seen. The test patch was either randomly selected from a different image from the same database or, unbeknownst to the observer, selected from either the first or last location fixated on the image just viewed. We find that several low-level image properties differed significantly relative to the observers' ability to successfully designate each patch. We also find that the differences between patch statistics for first and last fixations are small compared to the differences between hit and miss responses. The goal of the paper was to, in a non-cognitive natural setting, measure the image properties that facilitate visual memory, additionally observing the role that temporal location (first or last fixation) of the test patch played. We propose that a memorability map of a complex natural scene may be constructed to represent the low-level memorability of local regions in a similar fashion to the familiar saliency map, which records bottom-up fixation attractors.

  5. Automatic anterior chamber angle assessment for HD-OCT images.

    PubMed

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Wong, Hong-Tym; Aung, Tin

    2011-11-01

    Angle-closure glaucoma is a major blinding eye disease and could be detected by measuring the anterior chamber angle in the human eyes. High-definition OCT (Cirrus HD-OCT) is an emerging noninvasive, high-speed, and high-resolution imaging modality for the anterior segment of the eye. Here, we propose a novel algorithm which automatically detects a new landmark, Schwalbe's line, and measures the anterior chamber angle in the HD-OCT images. The distortion caused by refraction is corrected by dewarping the HD-OCT images, and three biometric measurements are defined to quantitatively assess the anterior chamber angle. The proposed algorithm was tested on 40 HD-OCT images of the eye and provided accurate measurements in about 1 second.

  6. VerifEYE: a real-time meat inspection system for the beef processing industry

    NASA Astrophysics Data System (ADS)

    Kocak, Donna M.; Caimi, Frank M.; Flick, Rick L.; Elharti, Abdelmoula

    2003-02-01

    Described is a real-time meat inspection system developed for the beef processing industry by eMerge Interactive. Designed to detect and localize trace amounts of contamination on cattle carcasses in the packing process, the system affords the beef industry an accurate, high-speed, passive optical method of inspection. Using a method patented by the United States Department of Agriculture and Iowa State University, the system takes advantage of fluorescing chlorophyll found in the animal's diet, and therefore the digestive tract, to allow detection and imaging of contaminated areas that may harbor potentially dangerous microbial pathogens. Featuring real-time image processing and documentation of performance, the system can be easily integrated into a processing facility's Hazard Analysis and Critical Control Point quality assurance program. This paper describes the VerifEYE carcass inspection and removal verification system. Results indicating the feasibility of the method, as well as field data collected using a prototype system during four university trials conducted in 2001, are presented. Two successful demonstrations using the prototype system were held at a major U.S. meat processing facility in early 2002.

  7. Electro-optic control of photographic imaging quality through ‘Smart Glass’ windows in optics demonstrations

    NASA Astrophysics Data System (ADS)

    Ozolinsh, Maris; Paulins, Paulis

    2017-09-01

    An experimental setup allowing the modeling of conditions in optical devices and in the eye at various degrees of scattering, such as cataract pathology in human eyes, is presented. The scattering in cells of polymer-dispersed liquid crystals (PDLCs) and 'Smart Glass' windows is used in the modeling experiments. Both applications are used as optical obstacles placed at different positions in the optical information flow pathway, either directly on the stimuli demonstration computer screen or mounted directly after the image-formation lens of a digital camera. The degree of scattering is changed continuously by applying an AC voltage in the 30-80 V range to the PDLC cell. The setup uses a camera with 14 bit depth and a 24 mm focal length lens. Light-emitting diodes and diode-pumped solid-state lasers emitting radiation of different wavelengths are used as portable small-divergence light sources in the experiments. Image formation, optical system point spread function, modulation transfer functions, and system resolution limits are determined for such sample optical systems in student optics and optometry experimental exercises.

  8. Typhoon Soudelor's Eye over Northwestern Taiwan

    NASA Image and Video Library

    2015-08-10

    In this MODIS image from NASA's Aqua satellite, the eye of Typhoon Soudelor is seen over northwestern Taiwan on August 8, 2015 at 05:25 UTC (1:25 a.m. EDT). At that time, Soudelor had maximum sustained winds near 90 knots. It was less than 100 miles southwest of Taipei, Taiwan. Typhoon-force winds were felt up to 35 miles from the center, covering a 70 mile-wide diameter. Image credit: NASA Goddard MODIS Rapid Response Team/Jeff Schmaltz. NASA image use policy.

  9. The Advanced Human Eye Model (AHEM): a personal binocular eye modeling system inclusive of refraction, diffraction, and scatter.

    PubMed

    Donnelly, William

    2008-11-01

    To present a commercially available software tool for creating eye models to assist the development of ophthalmic optics and instrumentation, simulate ailments or surgery-induced changes, explore vision research questions, and provide assistance to clinicians in planning treatment or analyzing clinical outcomes. A commercially available eye modeling system was developed, the Advanced Human Eye Model (AHEM). Two mainstream optical software engines, ZEMAX (ZEMAX Development Corp) and ASAP (Breault Research Organization), were used to construct a similar software eye model and compared. The method of using the AHEM is described and various eye modeling scenarios are created. These scenarios consist of retinal imaging of targets and sources; optimization capability; spectacle, contact lens, and intraocular lens insertion and correction; Zernike surface deformation on the cornea; cataract simulation and scattering; a gradient index lens; a binocular mode; a retinal implant; system import/export; and ray path exploration. Agreement between the two different optical software engines validated the mechanism of the AHEM. Metrics and graphical data are generated from the various modeling scenarios according to their input specifications. The AHEM is a user-friendly, commercially available software tool from Breault Research Organization, which can assist the design of ophthalmic optics and instrumentation, simulate ailments or refractive surgery-induced changes, answer vision research questions, or assist clinicians in planning treatment or analyzing clinical outcomes.

  10. Development and origins of zebrafish ocular vasculature.

    PubMed

    Kaufman, Rivka; Weiss, Omri; Sebbagh, Meyrav; Ravid, Revital; Gibbs-Bar, Liron; Yaniv, Karina; Inbal, Adi

    2015-03-27

    The developing eye receives blood supply from two vascular systems, the intraocular hyaloid system and the superficial choroidal vessels. In zebrafish, a highly stereotypic and simple set of vessels develops on the surface of the eye prior to development of choroidal vessels. The origins and formation of this so-called superficial system have not been described. We have analyzed the development of superficial vessels by time-lapse imaging and identified their origins by photoconversion experiments in kdrl:Kaede transgenic embryos. We show that the entire superficial system is derived from a venous origin, and surprisingly, we find that the hyaloid system has, in addition to its previously described arterial origin, a venous origin for specific vessels. Despite arising solely from a vein, one of the vessels in the superficial system, the nasal radial vessel (NRV), appears to acquire an arterial identity while growing over the nasal aspect of the eye and this happens in a blood flow-independent manner. Our results provide a thorough analysis of the early development and origins of zebrafish ocular vessels and establish the superficial vasculature as a model for studying vascular patterning in the context of the developing eye.

  11. Corneal biomechanical properties from air-puff corneal deformation imaging

    NASA Astrophysics Data System (ADS)

    Marcos, Susana; Kling, Sabine; Bekesi, Nandor; Dorronsoro, Carlos

    2014-02-01

    The combination of air-puff systems with real-time corneal imaging (i.e. Optical Coherence Tomography (OCT), or Scheimpflug) is a promising approach to assess the dynamic biomechanical properties of the corneal tissue in vivo. In this study we present an experimental system which, together with finite element modeling, allows measurements of corneal biomechanical properties from corneal deformation imaging, both ex vivo and in vivo. A spectral OCT instrument combined with an air puff from a non-contact tonometer in a non-collinear configuration was used to image the corneal deformation over full corneal cross-sections, as well as to obtain high speed measurements of the temporal deformation of the corneal apex. Quantitative analysis allows direct extraction of several deformation parameters, such as apex indentation across time, maximal indentation depth, temporal symmetry and peak distance at maximal deformation. The potential of the technique is demonstrated and compared to air-puff imaging with Scheimpflug. Measurements ex vivo were performed on 14 freshly enucleated porcine eyes and five human donor eyes. Measurements in vivo were performed on nine human eyes. Corneal deformation was studied as a function of Intraocular Pressure (IOP, 15-45 mmHg), dehydration, changes in corneal rigidity (produced by UV corneal cross-linking, CXL), and different boundary conditions (sclera, ocular muscles). Geometrical deformation parameters were used as input for inverse finite element simulation to retrieve the corneal dynamic elastic and viscoelastic parameters. Temporal and spatial deformation profiles were very sensitive to the IOP. CXL produced a significant reduction of the cornea indentation (1.41x), and a change in the temporal symmetry of the corneal deformation profile (1.65x), indicating a change in the viscoelastic properties with treatment. Combining air-puff with dynamic imaging and finite element modeling allows characterizing the corneal biomechanics in-vivo.
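
    The deformation parameters named above (maximal indentation depth, temporal symmetry, peak distance) are extracted from the apex-displacement trace; the sketch below computes simplified stand-ins for the first two from a time series. The definitions used here (loading/recovery time ratio as the symmetry index) and the synthetic trace are assumptions, not the paper's exact metrics.

        import numpy as np

        def deformation_parameters(t_ms, apex_um):
            """Simplified deformation metrics from an apex-displacement trace.

            t_ms    : sample times in milliseconds
            apex_um : inward apex displacement in micrometres (positive = indented)
            Returns maximal indentation depth, time of maximum, and a temporal
            symmetry index (loading time / recovery time around the peak).
            """
            t = np.asarray(t_ms, float)
            d = np.asarray(apex_um, float)
            i_max = int(np.argmax(d))
            max_depth = d[i_max]
            loading_time = t[i_max] - t[0]
            recovery_time = t[-1] - t[i_max]
            symmetry = loading_time / recovery_time if recovery_time > 0 else np.inf
            return max_depth, t[i_max], symmetry

        # Example: a synthetic, slightly asymmetric deformation pulse.
        t = np.linspace(0, 30, 301)                       # 30 ms air-puff event
        trace = 1000 * np.exp(-((t - 12) / 5) ** 2)       # peak ~1 mm at 12 ms
        print(deformation_parameters(t, trace))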

  12. A Protective Eye Shield for Prevention of Media Opacities during Small Animal Ocular Imaging

    PubMed Central

    Bell, Brent A.; Kaul, Charles; Hollyfield, Joe G.

    2014-01-01

    Optical coherence tomography (OCT), scanning laser ophthalmoscopy (SLO) and other non-invasive imaging techniques are increasingly used in eye research to document disease-related changes in rodent eyes. Corneal dehydration is a major contributor to the formation of ocular opacities that can limit the repeated application of these techniques to individual animals. General anesthesia is usually required for imaging, which is accompanied by the loss of the blink reflex. As a consequence, the tear film cannot be maintained, drying occurs and the cornea becomes dehydrated. Without supplemental hydration, structural damage to the cornea quickly follows. Soon thereafter, anterior lens opacities can also develop. Collectively these changes ultimately compromise image quality, especially for studies involving repeated use of the same animal over several weeks or months. To minimize these changes, a protective shield was designed for mice and rats that prevent ocular dehydration during anesthesia. The eye shield, along with a semi-viscous ophthalmic solution, is placed over the corneas as soon as the anesthesia immobilizes the animal. Eye shields are removed for only the brief periods required for imaging and then reapplied before the fellow eye is examined. As a result, the corneal surface of each eye is exposed only for the time required for imaging. The device and detailed methods described here minimize the corneal and lens changes associated with ocular surface desiccation. When these methods are used consistently, high quality images can be obtained repeatedly from individual animals. PMID:25245081

  13. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.

    PubMed

    Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki

    2018-03-01

    To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
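
    For reference, a generic Lissajous trajectory is just two sinusoids with nearby, co-prime frequencies; the sketch below generates such a pattern. The frequency values, phase, and sample count are illustrative assumptions and do not reproduce the paper's modified, OCT-A-compatible scan pattern.

        import numpy as np

        def lissajous_scan(n_samples=100_000, fx=191.0, fy=193.0, amplitude=1.0):
            """Generate normalized (x, y) beam positions for a Lissajous scan.

            Two sinusoids with close, co-prime frequencies densely cover the
            field of view; the frequency values here are illustrative only.
            """
            t = np.arange(n_samples) / n_samples          # one normalized pattern period
            x = amplitude * np.sin(2.0 * np.pi * fx * t)
            y = amplitude * np.sin(2.0 * np.pi * fy * t + np.pi / 2.0)
            return x, y

        x, y = lissajous_scan()
        print(x[:3], y[:3])   # beam positions fed to the galvanometer scanners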

  14. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina

    PubMed Central

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S.; Bouma, Brett E.; Vakoc, Benjamin J.

    2018-01-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented. PMID:29552388

  15. Complex differential variance angiography with noise-bias correction for optical coherence tomography of the retina.

    PubMed

    Braaf, Boy; Donner, Sabine; Nam, Ahhyun S; Bouma, Brett E; Vakoc, Benjamin J

    2018-02-01

    Complex differential variance (CDV) provides phase-sensitive angiographic imaging for optical coherence tomography (OCT) with immunity to phase-instabilities of the imaging system and small-scale axial bulk motion. However, like all angiographic methods, measurement noise can result in erroneous indications of blood flow that confuse the interpretation of angiographic images. In this paper, a modified CDV algorithm that corrects for this noise-bias is presented. This is achieved by normalizing the CDV signal by analytically derived upper and lower limits. The noise-bias corrected CDV algorithm was implemented into an experimental 1 μm wavelength OCT system for retinal imaging that used an eye tracking scanner laser ophthalmoscope at 815 nm for compensation of lateral eye motions. The noise-bias correction improved the CDV imaging of the blood flow in tissue layers with a low signal-to-noise ratio and suppressed false indications of blood flow outside the tissue. In addition, the CDV signal normalization suppressed noise induced by galvanometer scanning errors and small-scale lateral motion. High quality cross-section and motion-corrected en face angiograms of the retina and choroid are presented.

  16. PRESBYOPIA OPTOMETRY METHOD BASED ON DIOPTER REGULATION AND CHARGE COUPLE DEVICE IMAGING TECHNOLOGY.

    PubMed

    Zhao, Q; Wu, X X; Zhou, J; Wang, X; Liu, R F; Gao, J

    2015-01-01

    With the development of photoelectric technology and single-chip microcomputer technology, objective optometry, also known as automatic optometry, is becoming precise. This paper proposes a presbyopia optometry method based on diopter regulation and Charge-Coupled Device (CCD) imaging technology and, in addition, designs an optical path for the measurement system. The method projects a test figure onto the fundus, and the image reflected from the fundus is then detected by the CCD. The image is then automatically identified by computer, and the far-point and near-point diopters are determined to calculate the lens parameters. This is a fully automatic, objective optometry method which eliminates the subjective factors of the tested subject. Furthermore, it can acquire the lens parameters of presbyopia accurately and quickly and can be used to measure the lens parameters of hyperopia, myopia, and astigmatism.

  17. Origami silicon optoelectronics for hemispherical electronic eye systems.

    PubMed

    Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang

    2017-11-24

    Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high-spatial resolution into such format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of truncated icosahedron and fabricated on flexible sheets and further folded either into a concave or convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. Results demonstrated in this work combined with miniature size and simplicity of the design establish practical technology for integration with conventional electronic devices.

  18. A method of rapidly evaluating image quality of NED optical system

    NASA Astrophysics Data System (ADS)

    Sun, Qi; Qiu, Chuankai; Yang, Huan

    2014-11-01

    In recent years, with the development of micro-display technology, advanced optics, and supporting software and hardware, near-to-eye display (NED) optical systems have gained a wide range of potential applications in the fields of entertainment and virtual reality. However, research on evaluating the image quality of this kind of optical system lags comparatively behind. Although some evaluation methods and equipment exist, they cannot be applied in commercial production because of their complex operation and inaccuracy. In this paper, a method is proposed and a Rapid Evaluation System (RES) is designed to evaluate the image quality of such optical systems rapidly and accurately. First, a set of parameters to which the eye is sensitive and which express the quality of the system is extracted and quantified as criteria, so that evaluation standards can be established. These parameters can then be measured by the RES, which consists of a micro-display, a CCD camera, a computer, and associated hardware. Through a calibration process, the measurement results of the RES are made accurate and credible, and the relationship between objective measurement, subjective evaluation, and the RES is established. After that, the image quality of an optical system can be evaluated simply by measuring its parameters. The RES is simple, and its evaluation results are accurate and consistent with human vision, so the method can be used not only for optimizing the design of optical systems but also for evaluation in commercial production.

  19. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to reduce concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would be able to perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively. X-ray illumination may be provided by a single source or by two sources. The position of the conveyor would be detected to provide a means of matching the appropriate left- and right-eye images of an item under inspection. The appropriate right- and left-eye images of an item would be displayed simultaneously to the right and left eyes, respectively, of the human inspector, using commercially available stereo display screens. The human operator could adjust viewing parameters for maximum viewing comfort. The stereographic images thus generated would differ from true stereoscopic images by small distortions that are characteristic of radiographic images in general, but these distortions would not diminish the value of the images for identifying distinct objects at different depths.

  20. Retinal image registration for eye movement estimation.

    PubMed

    Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan

    2015-01-01

    This paper describes a novel methodology for eye fixation measurement using a unique videoophthalmoscope setup and advanced image registration approach. The representation of the eye movements via Poincare plot is also introduced. The properties, limitations and perspective of this methodology are finally discussed.

  1. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor

    PubMed Central

    Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung

    2017-01-01

    The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
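
    As a rough illustration of what a deep residual CNN for this binary task looks like, here is a minimal PyTorch sketch. The architecture, layer sizes, and input resolution are arbitrary assumptions for demonstration; the paper's actual network is a much deeper residual model trained on the databases described above.

        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            """3x3 conv -> BN -> ReLU -> 3x3 conv -> BN with an identity skip."""
            def __init__(self, channels):
                super().__init__()
                self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
                self.bn1 = nn.BatchNorm2d(channels)
                self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
                self.bn2 = nn.BatchNorm2d(channels)
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x):
                out = self.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                return self.relu(out + x)

        class EyeStateNet(nn.Module):
            """Tiny residual CNN for open/closed eye classification (2 classes)."""
            def __init__(self, num_classes=2):
                super().__init__()
                self.stem = nn.Sequential(
                    nn.Conv2d(3, 32, 7, stride=2, padding=3, bias=False),
                    nn.BatchNorm2d(32), nn.ReLU(inplace=True), nn.MaxPool2d(2))
                self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
                self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                          nn.Linear(32, num_classes))

            def forward(self, x):
                return self.head(self.blocks(self.stem(x)))

        # Example: a batch of four 64x64 RGB eye crops -> two logits each.
        model = EyeStateNet()
        logits = model(torch.randn(4, 3, 64, 64))
        print(logits.shape)   # torch.Size([4, 2])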

  2. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2018-02-01

    Retinal motion detection with an accuracy of 0.77 arcmin corresponding to 3.7 µm on the retina is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7 millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts.
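
    The motion estimation step in such a scheme amounts to registering each short-exposure frame against the confocal reference. The paper's own algorithm works on sparse subsampled point patterns and is not reproduced here; the sketch below shows a generic phase-correlation registration on dense images, with the image size and applied shift chosen purely for the example.

        import numpy as np

        def phase_correlation_shift(reference, frame):
            """Estimate the integer-pixel (dy, dx) translation that maps
            `reference` onto `frame`, using phase correlation."""
            F_ref = np.fft.fft2(reference)
            F_frm = np.fft.fft2(frame)
            cross_power = F_frm * np.conj(F_ref)
            cross_power /= np.abs(cross_power) + 1e-12        # keep phase only
            corr = np.fft.ifft2(cross_power).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map peak coordinates to signed shifts (wrap-around convention).
            if dy > reference.shape[0] // 2:
                dy -= reference.shape[0]
            if dx > reference.shape[1] // 2:
                dx -= reference.shape[1]
            return int(dy), int(dx)

        # Example: shift a random "retina" by (5, -3) pixels and recover it.
        rng = np.random.default_rng(0)
        ref = rng.random((128, 128))
        moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
        print(phase_correlation_shift(ref, moved))   # -> (5, -3)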

  3. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope

    PubMed Central

    Vienola, Kari V.; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A.; de Boer, Johannes F.

    2018-01-01

    Retinal motion detection with an accuracy of 0.77 arcmin corresponding to 3.7 µm on the retina is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7 millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts. PMID:29552396

  4. Insights into autofluorescence patterns in Stargardt macular dystrophy using ultra-wide-field imaging.

    PubMed

    Kumar, Vinod

    2017-10-01

    To characterize autofluorescence (AF) patterns occurring in Stargardt macular dystrophy (STGD1) using ultra-wide-field (UWF) imaging. This paper is a cross-sectional observational study of 22 eyes of 11 patients (mean age 23.44 years) with Stargardt disease-fundus flavimaculatus who presented with decrease of vision at a tertiary eye care center. UWF short-wave AF images were obtained from all the patients using an Optos TX200 instrument. The main outcome measures were to assess patterns of AF changes seen on UWF AF imaging. All eyes showed a central area of hypoautofluorescence at the macula along with retinal flecks extending centrifugally as well as to the nasal side of the optic disc. Peripapillary sparing was seen in 100% of the eyes. Flecks were seen to be hypoautofluorescent in the center and hyperautofluorescent in the periphery in 77.8% eyes and were only hyperfluorescent in 27.2%. A background-increased fluorescence was visible in 100% of eyes, the outer boundary of which was marked by distribution of flecks in 81.9% eyes. A characteristic inferonasal vertical line was seen separating the nasal hypoautofluorescent area from the temporal hyperautofluorescent area in all the eyes. UWF AF changes in STGD1 are not limited to the posterior pole and may extend more peripherally. UWF imaging is a useful tool for the assessment of patients with Stargardt macular dystrophy.

  5. The Evolution of Teleophthalmology Programs in the United Kingdom: Beyond Diabetic Retinopathy Screening.

    PubMed

    Sim, Dawn A; Mitry, Danny; Alexander, Philip; Mapani, Adam; Goverdhan, Srini; Aslam, Tariq; Tufail, Adnan; Egan, Catherine A; Keane, Pearse A

    2016-02-01

    Modern ophthalmic practice in the United Kingdom is faced by the challenges of an aging population, increasing prevalence of systemic pathologies with ophthalmic manifestations, and emergent treatments that are revolutionary but dependent on timely monitoring and diagnosis. This represents a huge strain not only on diagnostic services but also outpatient management and surveillance capacity. There is an urgent need for newer means of managing this surge in demand and the socioeconomic burden it places on the health care system. Concurrently, there have been exponential increases in computing power, expansions in the strength and ubiquity of communications technologies, and developments in imaging capabilities. Advances in imaging have been not only in terms of resolution, but also in terms of anatomical coverage, allowing new inferences to be made. In spite of this, image analysis techniques are still currently superseded by expert ophthalmologist interpretation. Teleophthalmology is therefore currently perfectly placed to face this urgent and immediate challenge of provision of optimal and expert care to remote and multiple patients over widespread geographical areas. This article reviews teleophthalmology programs currently deployed in the United Kingdom, focusing on diabetic eye care but also discussing glaucoma, emergency eye care, and other retinal diseases. We examined current programs and levels of evidence for their utility, and explored the relationships between screening, teleophthalmology, disease detection, and monitoring before discussing aspects of health economics pertinent to diabetic eye care. The use of teleophthalmology presents an immense opportunity to manage the steadily increasing demand for eye care, but challenges remain in the delivery of practical, viable, and clinically proven solutions. © 2016 Diabetes Technology Society.

  6. The Evolution of Teleophthalmology Programs in the United Kingdom

    PubMed Central

    Sim, Dawn A.; Mitry, Danny; Alexander, Philip; Mapani, Adam; Goverdhan, Srini; Aslam, Tariq; Tufail, Adnan; Egan, Catherine A.; Keane, Pearse A.

    2016-01-01

    Modern ophthalmic practice in the United Kingdom is faced by the challenges of an aging population, increasing prevalence of systemic pathologies with ophthalmic manifestations, and emergent treatments that are revolutionary but dependent on timely monitoring and diagnosis. This represents a huge strain not only on diagnostic services but also outpatient management and surveillance capacity. There is an urgent need for newer means of managing this surge in demand and the socioeconomic burden it places on the health care system. Concurrently, there have been exponential increases in computing power, expansions in the strength and ubiquity of communications technologies, and developments in imaging capabilities. Advances in imaging have been not only in terms of resolution, but also in terms of anatomical coverage, allowing new inferences to be made. In spite of this, image analysis techniques are still currently superseded by expert ophthalmologist interpretation. Teleophthalmology is therefore currently perfectly placed to face this urgent and immediate challenge of provision of optimal and expert care to remote and multiple patients over widespread geographical areas. This article reviews teleophthalmology programs currently deployed in the United Kingdom, focusing on diabetic eye care but also discussing glaucoma, emergency eye care, and other retinal diseases. We examined current programs and levels of evidence for their utility, and explored the relationships between screening, teleophthalmology, disease detection, and monitoring before discussing aspects of health economics pertinent to diabetic eye care. The use of teleophthalmology presents an immense opportunity to manage the steadily increasing demand for eye care, but challenges remain in the delivery of practical, viable, and clinically proven solutions. PMID:26830492

  7. A remote operating slit lamp microscope system. Development and its utility in ophthalmologic examinations.

    PubMed

    Tanabe, N; Go, K; Sakurada, Y; Imasawa, M; Mabuchi, F; Chiba, T; Abe, K; Kashiwagi, K

    2011-01-01

    To develop a remote-operating slit lamp microscope system (the remote slit lamp) as the core for highly specialized ophthalmology diagnoses, and to compare the utility of this system with the conventional slit lamp microscope system (the conventional slit lamp) in making a diagnosis. The remote slit lamp system was developed. Three factors were evaluated in comparison to the conventional slit lamp. The ability to acquire skills was investigated using a task loading system among specialists and residents in ophthalmology. Participants repeated a task up to ten times and the time required for each task was analyzed. The consistency of the two systems in making a diagnosis was investigated using eyes of patients with ocular diseases as well as healthy volunteers. The remote slit lamp is composed of a patient's unit and an ophthalmologist's unit connected by high-speed internet. The two units share images acquired by the slit lamp in addition to the images and voices of patients and ophthalmologists. Both ophthalmology specialists and residents minimized their completion times after several trials. The remote slit lamp took more time than the conventional slit lamp. Both systems showed a high consistency in evaluations of both healthy eyes and eyes with ocular diseases. The remote slit lamp has a similar diagnostic ability, but required more examination time in comparison to the conventional slit lamp. The currently developed remote slit lamp has the potential to be employed for tele-medicine purposes in the field of ophthalmology.

  8. Measurement of wavefront aberrations and lens deformation in the accommodated eye with optical coherence tomography-equipped wavefront system.

    PubMed

    He, Ji C; Wang, Jianhua

    2014-04-21

    To quantitatively approach the relationship between optical changes in an accommodated eye and the geometrical deformation of its crystalline lens, a long scan-depth anterior segment OCT equipped wavefront sensor was developed and integrated with a Badal system. With this system, accommodation was stimulated up to 6.0D in the left eye and also measured in the same eye for three subjects. High correlations between the accommodative responses of refractive power and the radius of the anterior lens surface were found for the three subjects (r>0.98). The change in spherical aberration was also highly correlated with the change in lens thickness (r>0.98). The measurement was very well repeated at a 2nd measurement session on the same day for the three subjects and after two weeks for one subject. The novelty of incorporating the Badal system into the OCT equipped wavefront sensor eliminated axial misalignment of the measurement system with the test eye due to accommodative vergence, as in the contralateral paradigm. The design also allowed the wavefront sensor to capture conjugated sharp Hartmann-Shack images in accommodated eyes to accurately analyze wavefront aberrations. In addition, this design extended the accommodation range up to 10.0D. By using this system, for the first time, we demonstrated linear relationships of the changes between the refractive power and the lens curvature and also between the spherical aberration and the lens thickness during accommodation in vivo. This new system provides an accurate and useful technique to quantitatively study accommodation.

  9. Measurement of wavefront aberrations and lens deformation in the accommodated eye with optical coherence tomography-equipped wavefront system

    PubMed Central

    He, Ji C.; Wang, Jianhua

    2014-01-01

    To quantitatively approach the relationship between optical changes in an accommodated eye and the geometrical deformation of its crystalline lens, a long scan-depth anterior segment OCT equipped wavefront sensor was developed and integrated with a Badal system. With this system, accommodation was stimulated up to 6.0D in the left eye and also measured in the same eye for three subjects. High correlations between the accommodative responses of refractive power and the radius of the anterior lens surface were found for the three subjects (r>0.98). The change in spherical aberration was also highly correlated with the change in lens thickness (r>0.98). The measurement was very well repeated at a 2nd measurement session on the same day for the three subjects and after two weeks for one subject. The novelty of incorporating the Badal system into the OCT equipped wavefront sensor eliminated axial misalignment of the measurement system with the test eye due to accommodative vergence, as in the contralateral paradigm. The design also allowed the wavefront sensor to capture conjugated sharp Hartmann-Shack images in accommodated eyes to accurately analyze wavefront aberrations. In addition, this design extended the accommodation range up to 10.0D. By using this system, for the first time, we demonstrated linear relationships of the changes between the refractive power and the lens curvature and also between the spherical aberration and the lens thickness during accommodation in vivo. This new system provides an accurate and useful technique to quantitatively study accommodation. PMID:24787861

  10. EyeMIAS: a cloud-based ophthalmic image reading and auxiliary diagnosis system

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhao, Heming; Yu, Kai; Chen, Xinjian

    2018-03-01

    Relying solely on ophthalmic equipment cannot meet present health needs; an efficient way to provide quick screening and early diagnosis of diabetic retinopathy and other ophthalmic diseases is urgently needed. The purpose of this study is to develop a cloud-based system for storing, viewing, and processing medical images, especially ophthalmic images, and for accelerating screening and diagnosis. To this end, a system comprising a web application, an upload client, storage infrastructure, and algorithm support was implemented. After five alpha tests, the system sustained thousands of high-traffic accesses and generated hundreds of diagnostic reports.

  11. Non-intrusive practitioner pupil detection for unmodified microscope oculars.

    PubMed

    Fuhl, Wolfgang; Santini, Thiago; Reichert, Carsten; Claus, Daniel; Herkommer, Alois; Bahmani, Hamed; Rifai, Katharina; Wahl, Siegfried; Kasneci, Enkelejda

    2016-12-01

    Modern microsurgery is a long and complex task requiring the surgeon to handle multiple microscope controls while performing the surgery. Eye tracking provides an additional means of interaction for the surgeon that could be used to alleviate this situation, diminishing surgeon fatigue and surgery time and thus decreasing risks of infection and human error. In this paper, we introduce a novel algorithm for pupil detection tailored to eye images acquired through an unmodified microscope ocular. The proposed approach, the Hough transform, and six state-of-the-art pupil detection algorithms were evaluated on over 4000 hand-labeled images acquired from a digital operating microscope with an integrated non-intrusive monitoring system for the surgeon's eyes. Our results show that the proposed method reaches detection rates up to 71% for an error of ≈3% w.r.t. the input image diagonal; none of the state-of-the-art pupil detection algorithms performed satisfactorily. The algorithm and hand-labeled data set can be downloaded at: www.ti.uni-tuebingen.de/perception.
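
    The detection-rate metric quoted above (the rate of detections within an error of about 3% of the image diagonal) can be illustrated with the following sketch; the image size and pupil-center coordinates are hypothetical.

    ```python
    # Sketch: detection rate of a pupil detector at an error threshold defined
    # as a percentage of the input image diagonal (metric described above).
    # Ground-truth and detected centers below are hypothetical.
    import numpy as np

    image_w, image_h = 384, 288
    diagonal = np.hypot(image_w, image_h)

    gt_centers = np.array([[120.0, 140.0], [130.5, 150.2], [128.0, 149.0]])
    detected   = np.array([[121.0, 142.0], [180.0, 200.0], [127.2, 148.5]])

    errors = np.linalg.norm(detected - gt_centers, axis=1)
    threshold = 0.03 * diagonal          # ~3% of the image diagonal
    detection_rate = np.mean(errors <= threshold)
    print(f"detection rate at 3% diagonal error: {detection_rate:.2%}")
    ```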

  12. Visual optics: an engineering approach

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2010-11-01

    The human visual system interprets information carried by visible light in order to build a representation of the world surrounding the body. It derives color by comparing the responses to light of the three types of cone photoreceptors in the eye; these long-, medium-, and short-wavelength cones are sensitive to the red, green, and blue portions of the visible spectrum, respectively. We simulate color vision for the normal eye and show the effects of dyes, filters, glasses, and windows on color perception when the test image is illuminated with the D65 illuminant. Beyond color perception, the human eye can suffer from diseases and disorders; the eye can be seen as an optical instrument with its own "eye print." We present aspects of current methods and technologies that can capture and correct the wavefront aberrations of the human eye, focusing on Seidel aberration formulas, Zernike polynomials, the Shack-Hartmann sensor, LASIK, interferogram fringe analysis of aberrations, and the Talbot effect.
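
    As a rough illustration of the Zernike description of ocular wavefronts mentioned above, the following sketch synthesizes a wavefront from a few low-order Zernike terms on a unit pupil; the coefficients are illustrative, not measured values.

    ```python
    # Sketch: synthesize a wavefront from a few low-order Zernike terms on a
    # unit pupil, as used to describe ocular aberrations above.
    # Coefficients (in micrometres) are illustrative only.
    import numpy as np

    n = 256
    x = np.linspace(-1.0, 1.0, n)
    xx, yy = np.meshgrid(x, x)
    rho = np.hypot(xx, yy)
    theta = np.arctan2(yy, xx)
    pupil = rho <= 1.0

    # A few Zernike polynomials (unnormalized, standard radial/azimuthal form)
    defocus   = 2.0 * rho**2 - 1.0
    astig_0   = rho**2 * np.cos(2 * theta)
    coma_y    = (3.0 * rho**3 - 2.0 * rho) * np.sin(theta)
    spherical = 6.0 * rho**4 - 6.0 * rho**2 + 1.0

    coeffs = {"defocus": 0.30, "astigmatism": 0.10, "coma": 0.05, "spherical": 0.08}
    wavefront = (coeffs["defocus"] * defocus + coeffs["astigmatism"] * astig_0 +
                 coeffs["coma"] * coma_y + coeffs["spherical"] * spherical)
    wavefront = np.where(pupil, wavefront, np.nan)

    print("RMS wavefront error (um):", np.nanstd(wavefront))
    ```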

  13. Automatic retinal interest evaluation system (ARIES).

    PubMed

    Yin, Fengshou; Wong, Damon Wing Kee; Yow, Ai Ping; Lee, Beng Hai; Quan, Ying; Zhang, Zhuo; Gopalakrishnan, Kavitha; Li, Ruoying; Liu, Jiang

    2014-01-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases such as glaucoma, age-related macular degeneration and diabetic retinopathy. However, in practice, retinal image quality is a big concern as automatic systems without consideration of degraded image quality will likely generate unreliable results. In this paper, an automatic retinal image quality assessment system (ARIES) is introduced to assess both image quality of the whole image and focal regions of interest. ARIES achieves 99.54% accuracy in distinguishing fundus images from other types of images through a retinal image identification step in a dataset of 35342 images. The system employs high level image quality measures (HIQM) to perform image quality assessment, and achieves areas under curve (AUCs) of 0.958 and 0.987 for whole image and optic disk region respectively in a testing dataset of 370 images. ARIES acts as a form of automatic quality control which ensures good quality images are used for processing, and can also be used to alert operators of poor quality images at the time of acquisition.

  14. Effect of Phenylephrine on the Accommodative System

    PubMed Central

    Del Águila-Carrasco, Antonio J.; Bernal-Molina, Paula; Ferrer-Blasco, Teresa; López-Gil, Norberto; Montés-Micó, Robert

    2016-01-01

    Accommodation is controlled by the action of the ciliary muscle and mediated primarily by parasympathetic input through postganglionic fibers that originate from neurons in the ciliary and pterygopalatine ganglia. During accommodation the pupil constricts to increase the depth of focus of the eye and improve retinal image quality. Researchers have traditionally faced the challenge of measuring the accommodative properties of the eye through a small pupil and thus have relied on pharmacological agents to dilate the pupil. Achieving pupil dilation (mydriasis) without affecting the accommodative ability of the eye (cycloplegia) could be useful in many clinical and research contexts. Phenylephrine hydrochloride (PHCl) is a sympathomimetic agent that is used clinically to dilate the pupil. Nevertheless, early investigations suggested some loss of functional accommodation in the human eye after PHCl instillation, and subsequent studies based on different measurement procedures reached contradictory conclusions, giving rise to a controversy that has persisted almost to the present day. This manuscript reviews and summarizes the main research studies that have analyzed the effect of PHCl on the accommodative system and provides clear conclusions that could help clinicians understand the actual effects of PHCl on the accommodative system of the human eye. PMID:28053778

  15. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    PubMed Central

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-01-01

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382

  16. Automated grading system for evaluation of ocular redness associated with dry eye

    PubMed Central

    Rodriguez, John D; Johnston, Patrick R; Ousler, George W; Smith, Lisa M; Abelson, Mark B

    2013-01-01

    Background We have observed that dry eye redness is characterized by a prominence of fine horizontal conjunctival vessels in the exposed ocular surface of the interpalpebral fissure, and have incorporated this feature into the grading of redness in clinical studies of dry eye. Aim To develop an automated method of grading dry eye-associated ocular redness in order to expand on the clinical grading system currently used. Methods Ninety-nine images from 26 dry eye subjects were evaluated by five graders using a 0–4 (in 0.5 increments) dry eye redness (Ora Calibra™ Dry Eye Redness Scale [OCDER]) scale. For the automated method, the OpenCV computer vision library was used to develop software for calculating redness and the prominence of horizontal conjunctival vessels (noted as "horizontality"). From the original photograph, the region of interest (ROI) was selected manually using the open-source ImageJ software. Total average redness intensity (Com-Red) was calculated from a single-channel 8-bit image computed as R − 0.83G − 0.17B, where R, G, and B were the respective intensities of the red, green, and blue channels. The location of vessels was detected by normalizing the blue channel and selecting pixels with an intensity of less than 97% of the mean. The horizontal component (Com-Hor) was calculated by the first-order Sobel derivative in the vertical direction, and the score was calculated as the average blue-channel image intensity of this vertical derivative. Pearson correlation coefficients, accuracy, and concordance correlation coefficients (CCC) were calculated after regression and standardized regression of the dataset. Results The agreement (both Pearson's and CCC) among investigators using the OCDER scale was 0.67, while the agreement of investigator to computer was 0.76. A multiple regression using both redness and horizontality improved the agreement CCC from 0.66 and 0.69 to 0.76, demonstrating the contribution of vessel geometry to the overall grade. Computer analysis of a given image has 100% repeatability and zero variability from session to session. Conclusion This objective means of grading ocular redness in a unified fashion has potential significance as a new clinical endpoint. In comparisons between computer and investigator, computer grading proved to be more reliable than another investigator using the OCDER scale. The best-fitting model based on the present sample, and usable for future studies, was C4 = −12.24 + 2.12·C2HOR + 0.88·C2RED, where C4 is the predicted investigator grade and C2HOR and C2RED are logarithmic transformations of the computer-calculated parameters Com-Hor and Com-Red. Considering the superior repeatability, computer-automated grading might be preferable to investigator grading in multicenter dry eye studies in which the subtle differences in redness incurred by treatment have historically been difficult to define. PMID:23814457
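
    The image scores described above can be sketched roughly as follows; the ROI file path, the use of absolute Sobel values, and the natural-log transformation are assumptions not specified in the abstract.

    ```python
    # Sketch of the two image scores described above (Com-Red and Com-Hor),
    # assuming an ROI image already cropped from the photograph.  The path,
    # absolute-derivative averaging, and natural-log transform are assumptions.
    import cv2
    import numpy as np

    roi = cv2.imread("conjunctiva_roi.png")          # placeholder path; OpenCV loads BGR
    assert roi is not None, "ROI image not found"
    b, g, r = [roi[..., i].astype(np.float64) for i in range(3)]

    # Redness score: single-channel combination R - 0.83G - 0.17B, averaged
    com_red = np.mean(r - 0.83 * g - 0.17 * b)

    # Vessel mask: pixels darker than 97% of the mean of the normalized blue channel
    b_norm = b / b.max()
    vessel_mask = b_norm < 0.97 * b_norm.mean()

    # Horizontality score: first-order Sobel derivative in the vertical direction
    # of the blue channel, averaged (absolute value assumed here)
    dy = cv2.Sobel(b, cv2.CV_64F, 0, 1, ksize=3)
    com_hor = np.mean(np.abs(dy))

    # Regression model reported above (C2HOR, C2RED are log-transformed scores;
    # the log base is not stated in the abstract, natural log assumed)
    c4 = -12.24 + 2.12 * np.log(com_hor) + 0.88 * np.log(com_red)
    print(f"Com-Red={com_red:.1f}, Com-Hor={com_hor:.1f}, predicted grade={c4:.2f}")
    print("vessel pixel fraction:", vessel_mask.mean())
    ```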

  17. Rapid and coordinated processing of global motion images by local clusters of retinal ganglion cells.

    PubMed

    Matsumoto, Akihiro; Tachibana, Masao

    2017-01-01

    Even when the body is stationary, the retinal image is kept in constant motion by fixational eye movements and by the saccades that move the eye between fixation points. Accumulating evidence indicates that the brain is equipped with specific mechanisms for compensating for the global motion induced by these eye movements. However, it is not yet fully understood how the retina processes global motion images during eye movements. Here we show that global motion images evoke novel coordinated firing in retinal ganglion cells (GCs). We simultaneously recorded the firing of GCs in the isolated goldfish retina using a multi-electrode array and classified each GC based on the temporal profile of its receptive field (RF). A moving target accompanied by global motion (simulating a saccade following a period of fixational eye movements) modulated the RF properties and evoked synchronized and correlated firing among local clusters of specific GCs. Our findings provide a novel concept for retinal information processing during eye movements.

  18. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time digital imaging in machine vision systems has proven prohibitive when used within control systems that employ low-power single processors, unless the scope of vision or the resolution of captured images is compromised. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor was developed, representing a single facet of the fly's eye, and was then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system, and conclude with a discussion of how an array of these sensors can be applied to solving real-world machine vision problems.

  19. Pale Blue Orb

    NASA Image and Video Library

    2006-09-19

    NASA's Cassini spacecraft casts its powerful eyes on our home planet and captures Earth, a pale blue orb, and a faint suggestion of our moon among the glories of the Saturn system in this image taken Sept. 15, 2006.

  20. Combined laser-ray tracing and OCT system for biometry of the crystalline lens (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ruggeri, Marco; Maceo Heilman, Bianca M.; Yao, Yue; Chang, Yu-Cherng; Gonzalez, Alex; Rowaan, Cornelis; Mohamed, Ashik; Williams, Siobhan; Durkee, Heather A.; Silgado, Juan; Bernal, Andres; Arrieta-Quintero, Esdras; Ho, Arthur; Parel, Jean-Marie A.; Manns, Fabrice

    2017-02-01

    Age-related changes in the crystalline lens shape and refractive index gradient produce changes in dioptric power and high-order aberrations that influence the optics of the whole eye and contribute to a decrease in overall visual quality. Despite their key role, the changes in lens shape and refractive index gradient with age and accommodation and their effects on high-order aberrations are still not well understood. The goal of this project was to develop a combined laser ray tracing (LRT) and optical coherence tomography (OCT) system to measure high-order aberrations, shape and refractive index gradient in non-human primate and human lenses. A miniature motorized lens stretching system was built to enable imaging and aberrometry of the lens during simulated accommodation. A positioning system was also built to enable on- and off-axis OCT imaging and aberrometry for characterization of the peripheral defocus of the lens. We demonstrated the capability of the LRT-OCT system to produce OCT images and aberration measurements of crystalline lens with age and accommodation in vitro. In future work, the information acquired with the LRT-OCT system will be used to develop an accurate age-dependent lens model to predict the role of the lens in the development of refractive error and aberrations of the whole eye.

  1. Composite Image of the Cat's Eye From Chandra X-Ray Observatory and Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Left image: The x-ray data from the Chandra X-Ray Observatory (CXO) has revealed a bright central star surrounded by a cloud of multimillion-degree gas in the planetary nebula known as the Cat's Eye. This CXO image, where the intensity of the x-ray emission is correlated to the brightness of the orange coloring, captures the expulsion of material from a star that is expected to collapse into a white dwarf in a few million years. The intensity of x-rays from the central star was unexpected, and it is the first time astronomers have seen such x-ray emission from the central star of a planetary nebula. Right image: An image of Cat's Eye taken by the Hubble Space Telescope (HST). By comparing the CXO data with that from the HST, researchers are able to see where the hotter, x-ray emitting gas appears in relation to the cooler material seen in optical wavelengths by the HST. The CXO team found that the chemical abundance in the region of hot gas (its x-ray intensity is shown in purple) was not like those in the wind from the central star and different from the outer cooler material (the red and green structures.) Although still incredibly energetic and hot enough to radiate x-rays, CXO shows the hot gas to be somewhat cooler than scientists would have expected for such a system. CXO image credit: (NASA/UIUC/Y. Chu et al.) HST image credit: (NASA/HST)

  2. 3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics

    NASA Astrophysics Data System (ADS)

    Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

    2011-03-01

    Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc=800 nm, Δλ=160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc=809 nm and Δλ=81 nm (2.6 μm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 μm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. Normalized reflectance of connecting cilia (CC) and OS posterior tip (PT) of an exemplary cone was 54+/-4, 47+/-4, 48+/-6, 50+/-5, 56+/-1% and 46+/-4, 53+/-4, 52+/-6, 50+/-5, 44+/-1% for days #1, 3, 6, 8, 10, respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 μm for days #1, 3, 6, 8, 10, respectively. It is plausible these changes are an optical correlate of the natural process of OS renewal and shedding.

  3. The sophisticated visual system of a tiny Cambrian crustacean: analysis of a stalked fossil compound eye

    PubMed Central

    Schoenemann, Brigitte; Castellani, Christopher; Clarkson, Euan N. K.; Haug, Joachim T.; Maas, Andreas; Haug, Carolin; Waloszek, Dieter

    2012-01-01

    Fossilized compound eyes from the Cambrian, isolated and three-dimensionally preserved, provide remarkable insights into the lifestyle and habitat of their owners. The tiny stalked compound eyes described here probably possessed too few facets to form a proper image, but they represent a sophisticated system for detecting moving objects. The eyes are preserved as almost solid, mace-shaped blocks of phosphate, in which the original positions of the rhabdoms in one specimen are retained as deep cavities. Analysis of the optical axes reveals four visual areas, each with different properties in acuity of vision. They are surveyed by lenses directed forwards, laterally, backwards and inwards, respectively. The most intriguing of these is the putatively inwardly orientated zone, where the optical axes, like those orientated to the front, interfere with axes of the other eye of the contralateral side. The result is a three-dimensional visual net that covers not only the front, but extends also far laterally to either side. Thus, a moving object could be perceived by a two-dimensional coordinate (which is formed by two axes of those facets, one of the left and one of the right eye, which are orientated towards the moving object) in a wide three-dimensional space. This compound eye system enables small arthropods equipped with an eye of low acuity to estimate velocity, size or distance of possible food items efficiently. The eyes are interpreted as having been derived from individuals of the early crustacean Henningsmoenicaris scutula pointing to the existence of highly efficiently developed eyes in the early evolutionary lineage leading towards the modern Crustacea. PMID:22048954

  4. Internal structure changes of eyelash induced by eye makeup.

    PubMed

    Fukami, Ken-Ichi; Inoue, Takafumi; Kawai, Tomomitsu; Takeuchi, Akihisa; Uesugi, Kentaro; Suzuki, Yoshio

    2014-01-01

    To investigate how eye makeup affects eyelash structure, the internal structure of eyelashes was observed with a scanning X-ray microscopic tomography system using a synchrotron radiation light source. Eyelash samples were obtained from 36 Japanese women aged 20-70 years whose use of eye makeup differed. Reconstructed cross-sectional images showed that the structure of the eyelash closely resembles that of scalp hair and that it is changed by the use of eye makeup. There was a positive correlation between the frequency of mascara use and the degree of cracking in the cuticle, and a positive correlation was also found between the frequency of mascara use and the porosity of the cortex. By contrast, use of an eyelash curler did not affect the eyelash structure with statistical significance.

  5. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize presentation of natural 3D images in which the viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt a volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels which constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3-D images without contradiction between binocular convergence and focal accommodation.
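
    A rough sketch of the edge-depth idea described above, assuming a conventional block-matching stereo method and Canny edges; the image paths and parameters are illustrative, not those used by the authors.

    ```python
    # Sketch: compute a disparity map with conventional block-matching stereo,
    # then keep depth only at the noticeable edge pixels that would be drawn
    # volumetrically.  Paths and parameters are illustrative assumptions.
    import cv2
    import numpy as np

    left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)   # placeholder paths
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    assert left is not None and right is not None, "stereo pair not found"

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

    edges = cv2.Canny(left, 80, 160)                 # noticeable edges of the left view
    edge_disparity = np.where(edges > 0, disparity, np.nan)

    # Edge pixels with valid disparity would be drawn at their own depth plane;
    # the remaining flat areas are shown stereoscopically.
    print("edge pixels with depth:", np.count_nonzero(~np.isnan(edge_disparity)))
    ```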

  6. Bowman Break and Subbasal Nerve Plexus Changes in a Patient With Dry Eye Presenting With Chronic Ocular Pain and Vitamin D Deficiency.

    PubMed

    Shetty, Rohit; Deshpande, Kalyani; Deshmukh, Rashmi; Jayadev, Chaitra; Shroff, Rushad

    2016-05-01

    To report the case of a 40-year-old patient with persistent bilateral ocular pain and discomfort for 2 years in whom conventional management of dry eye had failed. Detailed ocular examination, meibography, and tear film evaluation were suggestive of bilateral meibomian gland dysfunction and evaporative dry eye. Topical medication failed to alleviate the patient's symptoms. To identify the cause of pain, imaging was performed with in vivo confocal microscopy and anterior segment spectral domain optical coherence tomography. Systemic evaluation revealed severe vitamin D deficiency with a value of 5.86 ng/mL. Case report. In vivo confocal microscopy showed abnormal subbasal nerve plexus morphology, increased dendritic cell density, and enlarged terminal nerve sprouts. A breach in the Bowman layer was detected in both eyes on spectral domain optical coherence tomography. Conventional management having failed, LipiFlow treatment (TearScience, Morrisville, NC) was performed and topical therapy with cyclosporine 0.05%, steroids, and lubricating eye drops was initiated with incomplete symptomatic relief. However, with parenteral therapy for vitamin D deficiency, there was a dramatic improvement in the patient's symptoms. Inflammation aggravated by vitamin D deficiency results in an altered epithelial profile, Bowman layer damage, recruitment of dendritic cells, and altered subbasal nerve plexus features in patients with chronic dry eye disease. These can serve as potential imaging markers for studying the underlying mechanisms in patients with dry eye disease with persisting symptoms despite aggressive conventional treatment.

  7. Single neural code for blur in subjects with different interocular optical blur orientation

    PubMed Central

    Radhakrishnan, Aiswaryah; Sawides, Lucie; Dorronsoro, Carlos; Peli, Eli; Marcos, Susana

    2015-01-01

    The ability of the visual system to compensate for differences in blur orientation between eyes is not well understood. We measured the orientation of the internal blur code in both eyes of the same subject monocularly by presenting pairs of images blurred with real ocular point spread functions (PSFs) of similar blur magnitude but varying in orientations. Subjects assigned a level of confidence to their selection of the best perceived image in each pair. Using a classification-images–inspired paradigm and applying a reverse correlation technique, a classification map was obtained from the weighted averages of the PSFs, representing the internal blur code. Positive and negative neural PSFs were obtained from the classification map, representing the neural blur for best and worse perceived blur, respectively. The neural PSF was found to be highly correlated in both eyes, even for eyes with different ocular PSF orientations (rPos = 0.95; rNeg = 0.99; p < 0.001). We found that in subjects with similar and with different ocular PSF orientations between eyes, the orientation of the positive neural PSF was closer to the orientation of the ocular PSF of the eye with the better optical quality (average difference was ∼10°), while the orientation of the positive and negative neural PSFs tended to be orthogonal. These results suggest a single internal code for blur with orientation driven by the orientation of the optical blur of the eye with better optical quality. PMID:26114678
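
    The reverse-correlation step described above can be sketched as a confidence-weighted average of PSFs; the PSF stack, choices, and confidence ratings below are synthetic placeholders, and the exact weighting scheme is an assumption.

    ```python
    # Sketch of a reverse-correlation (classification-images-style) step: the
    # classification map is built as a confidence-weighted average of the PSFs
    # of images chosen as best perceived (positive) and rejected (negative).
    # All inputs below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, size = 200, 64
    psf_chosen   = rng.random((n_trials, size, size))   # PSF of the selected image per trial
    psf_rejected = rng.random((n_trials, size, size))   # PSF of the non-selected image
    confidence   = rng.integers(1, 5, n_trials).astype(float)  # confidence rating per trial

    weights = confidence / confidence.sum()
    classification_map = np.tensordot(weights, psf_chosen - psf_rejected, axes=1)

    # "Positive" and "negative" neural PSFs: maps for best- and worse-perceived blur
    positive_neural_psf = np.tensordot(weights, psf_chosen, axes=1)
    negative_neural_psf = np.tensordot(weights, psf_rejected, axes=1)
    print(classification_map.shape, positive_neural_psf.shape, negative_neural_psf.shape)
    ```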

  8. Photographic Reading Center of the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT): Methods and Baseline Results

    PubMed Central

    Fischer, William S.; Wall, Michael; McDermott, Michael P.; Kupersmith, Mark J.; Feldon, Steven E.

    2015-01-01

    Purpose. To describe the methods used by the Photographic Reading Center (PRC) of the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT) and to report baseline assessments of papilledema severity in participants. Methods. Stereoscopic digital images centered on the optic disc and the macula were collected using certified personnel and photographic equipment. Certification of the camera system included standardization and calibration using a model eye. Lay readers assessed disc photos of all eyes using the Frisén grade and performed quantitative measurements of papilledema. Frisén grades assigned by the PRC were compared with site investigators' clinical grades. Spearman rank correlations were used to quantify associations among disc features and selected clinical variables. Results. Frisén grades according to the PRC and the site investigators' grades matched exactly in 48% of the study eyes and 42% of the fellow eyes and within one grade in 94% of the study eyes and 92% of the fellow eyes. Frisén grade was strongly correlated (r > 0.65, P < 0.0001) with quantitative measures of disc area. Cerebrospinal fluid pressure was weakly associated with Frisén grade and disc area determinations (r ≤ 0.31). Neither Frisén grade nor any fundus feature was associated with perimetric mean deviation. Conclusions. In a prospective clinical trial, lay readers agreed reasonably well with physicians in assessing Frisén grade. Standardization of camera systems enhanced consistency of photographic quality across study sites. Images were affected more by sensors with poor dynamic range than by poor resolution. Frisén grade is highly correlated with quantitative assessment of disc area. (ClinicalTrials.gov number, NCT01003639.) PMID:26024112

  9. Photographic Reading Center of the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT): Methods and Baseline Results.

    PubMed

    Fischer, William S; Wall, Michael; McDermott, Michael P; Kupersmith, Mark J; Feldon, Steven E

    2015-05-01

    To describe the methods used by the Photographic Reading Center (PRC) of the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT) and to report baseline assessments of papilledema severity in participants. Stereoscopic digital images centered on the optic disc and the macula were collected using certified personnel and photographic equipment. Certification of the camera system included standardization and calibration using a model eye. Lay readers assessed disc photos of all eyes using the Frisén grade and performed quantitative measurements of papilledema. Frisén grades assigned by the PRC were compared with site investigators' clinical grades. Spearman rank correlations were used to quantify associations among disc features and selected clinical variables. Frisén grades according to the PRC and the site investigators' grades matched exactly in 48% of the study eyes and 42% of the fellow eyes and within one grade in 94% of the study eyes and 92% of the fellow eyes. Frisén grade was strongly correlated (r > 0.65, P < 0.0001) with quantitative measures of disc area. Cerebrospinal fluid pressure was weakly associated with Frisén grade and disc area determinations (r ≤ 0.31). Neither Frisén grade nor any fundus feature was associated with perimetric mean deviation. In a prospective clinical trial, lay readers agreed reasonably well with physicians in assessing Frisén grade. Standardization of camera systems enhanced consistency of photographic quality across study sites. Images were affected more by sensors with poor dynamic range than by poor resolution. Frisén grade is highly correlated with quantitative assessment of disc area. (ClinicalTrials.gov number, NCT01003639.).
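
    A minimal sketch of the agreement and rank-correlation summaries reported above, using hypothetical Frisén grades and disc areas rather than trial data.

    ```python
    # Sketch of the agreement and correlation summaries described above, using
    # hypothetical Frisén grades from the reading center and site investigators.
    import numpy as np
    from scipy.stats import spearmanr

    prc_grade  = np.array([2, 3, 1, 4, 2, 3, 2, 1, 3, 2])   # reading-center grades
    site_grade = np.array([2, 2, 1, 4, 3, 3, 2, 2, 3, 2])   # site-investigator grades
    disc_area  = np.array([2.1, 2.9, 1.6, 3.8, 2.5, 3.1, 2.2, 1.8, 3.0, 2.4])  # mm^2, hypothetical

    exact      = np.mean(prc_grade == site_grade)
    within_one = np.mean(np.abs(prc_grade - site_grade) <= 1)
    rho, p = spearmanr(site_grade, disc_area)

    print(f"exact agreement: {exact:.0%}, within one grade: {within_one:.0%}")
    print(f"Spearman rho (grade vs disc area): {rho:.2f} (p={p:.3f})")
    ```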

  10. Eye vergence responses during a visual memory task.

    PubMed

    Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans

    2017-02-08

    In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence depending on their power to capture attention. Working memory is highly dependent on attention; therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s and thereafter were presented with a series of single images. One half were repeat images - that is, they belonged to the initial set - and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge while scanning the set of images and during the presentation of the single images. The convergence was stronger for remembered images than for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in the attentional processing of visual information.

  11. Underwater binocular imaging of aerial objects versus the position of eyes relative to the flat water surface.

    PubMed

    Barta, András; Horváth, Gábor

    2003-12-01

    The apparent position, size, and shape of aerial objects viewed binocularly from water change as a result of the refraction of light at the water surface. Earlier studies of the refraction-distorted structure of the aerial binocular visual field of underwater observers were restricted to either vertically or horizontally oriented eyes. Here we calculate the position of the binocular image point of an aerial object point viewed by two arbitrarily positioned underwater eyes when the water surface is flat. Assuming that binocular image fusion is performed by appropriate vergent eye movements to bring the object's image onto the foveae, the structure of the aerial binocular visual field is computed and visualized as a function of the relative positions of the eyes. We also analyze two erroneous representations of the underwater imaging of aerial objects that have occurred in the literature. It is demonstrated that the structure of the aerial binocular visual field of underwater observers distorted by refraction is more complex than has been thought previously.
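
    The flat-surface refraction geometry described above can be sketched in two dimensions as follows: Snell's law fixes the surface point where each ray bends, and the binocular image point is taken as the intersection of the two underwater viewing directions. The positions, the 2D simplification, and the assumption that the object lies to one side of both eyes are illustrative.

    ```python
    # Sketch of flat-surface refraction for an underwater observer viewing an
    # aerial object: find the refraction point from Snell's law for each eye,
    # then intersect the two underwater viewing directions.  2D simplification;
    # positions are illustrative, and the object is assumed to lie to one side
    # of both eyes so that the root bracket below is valid.
    import numpy as np
    from scipy.optimize import brentq

    N_WATER = 1.333  # refractive index of water (air taken as 1.0)

    def refraction_point(obj, eye):
        """x-coordinate on the flat surface (y=0) where the air-to-water ray bends."""
        (xo, yo), (xe, ye) = obj, eye          # yo > 0 (air), ye < 0 (water)
        def snell_mismatch(s):
            sin_air = (xo - s) / np.hypot(xo - s, yo)
            sin_wat = (s - xe) / np.hypot(s - xe, ye)
            return sin_air - N_WATER * sin_wat
        return brentq(snell_mismatch, xe + 1e-9, xo - 1e-9)

    def binocular_image(obj, eye_left, eye_right):
        """Intersection of the two underwater viewing directions (apparent image point)."""
        p1, p2 = np.array(eye_left, float), np.array(eye_right, float)
        d1 = np.array([refraction_point(obj, eye_left), 0.0]) - p1
        d2 = np.array([refraction_point(obj, eye_right), 0.0]) - p2
        t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
        return p1 + t[0] * d1

    obj = (0.5, 1.0)                       # aerial object, 1 m above the surface
    image = binocular_image(obj, eye_left=(-0.032, -0.5), eye_right=(0.032, -0.5))
    print("true object:", obj, " apparent binocular image:", image)
    ```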

  12. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    Based on the information-processing mechanism of the frog's eye, this paper discusses a bionic detection technique suited to object detection modeled on frog vision. First, a bionic detection framework imitating frog vision is established as a parallel processing pipeline comprising acquisition and preprocessing of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision system built on this framework detects moving objects of a specified color and shape; experiments indicate that targets can be detected even against a cluttered background. An electronic model imitating biological vision based on the frog's eye was also built: the analog video signal is first digitized, the digital stream is then separated in parallel by an FPGA, and during parallel processing the video can be captured, processed, and displayed simultaneously, with information fusion handled through the DSP HPI ports to transfer the DSP-processed data. The system covers a wider field of view and achieves higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm show that the system detects the edges of moving objects in real time; the feasibility of the bionic model is demonstrated in an engineering system and lays a solid foundation for future work on detection techniques that imitate biological vision.
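
    A minimal sketch of the kind of moving-object edge detection mentioned above (frame differencing followed by Canny edges); the thresholds and video source are illustrative, and this is not the authors' FPGA/DSP implementation.

    ```python
    # Sketch: simple frame differencing to isolate motion, followed by Canny
    # edge detection restricted to the moving region.  Thresholds and the
    # video path are illustrative assumptions.
    import cv2

    cap = cv2.VideoCapture("scene.avi")       # placeholder video source
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("could not read video")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Motion mask from the absolute frame difference
        diff = cv2.absdiff(gray, prev_gray)
        _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

        # Canny edges restricted to the moving region
        edges = cv2.Canny(gray, 50, 150)
        moving_edges = cv2.bitwise_and(edges, motion_mask)

        cv2.imshow("moving object edges", moving_edges)
        if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
            break
        prev_gray = gray

    cap.release()
    cv2.destroyAllWindows()
    ```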

  13. Ocular screening tests of elementary school children

    NASA Technical Reports Server (NTRS)

    Richardson, J.

    1983-01-01

    This report presents an analysis of 507 abnormal retinal reflex images taken of Huntsville kindergarten and first-grade students. The retinal reflex images were obtained by using an MSFC-developed Generated Retinal Reflex Image System (GRRIS) photorefractor. The system uses a 35 mm camera with a telephoto lens and an electronic flash attachment. Slide images of the eyes were examined for abnormalities. Of a total of 1835 students screened for ocular abnormalities, 507 were found to have abnormal retinal reflexes. The types of ocular abnormalities detected were hyperopia, myopia, astigmatism, esotropia, exotropia, strabismus, and lens obstructions. The report shows that the use of the photorefractor screening system is an effective low-cost means of screening school children for abnormalities.

  14. In vivo imaging of palisades of Vogt in dry eye versus normal subjects using en-face spectral-domain optical coherence tomography.

    PubMed

    Ghouali, Wajdene; Tahiri Joutei Hassani, Rachid; Djerada, Zoubir; Liang, Hong; El Sanharawi, Mohamed; Labbé, Antoine; Baudouin, Christophe

    2017-01-01

    To evaluate a possible clinical application of spectral-domain optical coherence tomography (SD-OCT) using en-face module for the imaging of the corneoscleral limbus in normal subjects and dry eye patients. Seventy-six subjects were included in this study. Seventy eyes of 35 consecutive patients with dry eye disease and 82 eyes of 41 healthy control subjects were investigated. All subjects were examined with the Avanti RTVue® anterior segment OCT. En-face OCT images of the corneoscleral limbus were acquired in four quadrants (inferior, superior, nasal and temporal) and then were analyzed semi-quantitatively according to whether or not palisades of Vogt (POV) were visible. En-face OCT images were then compared to in vivo confocal microscopy (IVCM) in eleven eyes of 7 healthy and dry eye patients. En-face SD-OCT showed POV as a radially oriented network, located in superficial corneoscleral limbus, with a good correlation with IVCM features. It provided an easy and reproducible identification of POV without any special preparation or any direct contact, with a grading scale from 0 (no visualization) to 3 (high visualization). The POV were found predominantly in superior (P<0.001) and inferior (P<0.001) quadrants when compared to the nasal and temporal quadrants for all subjects examined. The visibility score decreased with age (P<0.001) and was lower in dry eye patients (P<0.01). In addition, the score decreased in accordance with the severity of dry eye disease (P<0.001). En-face SD-OCT is a non-contact imaging technique that can be used to evaluate the POV, thus providing valuable information about differences in the limbal anatomy of dry eye patients as compared to healthy patients.

  15. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive, depth-resolved images of the retinal microvasculature within the human retina achieved by a newly developed ultrahigh-sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more sensitive than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase-compensation algorithm in which the traditional phase-compensation step is applied repeatedly to efficiently minimize the motion artifacts. This new algorithm demonstrates at least 8 to 25 times higher motion tolerance, which is critical for the UHS-OMAG system to achieve high-quality retinal microvasculature images. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines per B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols were used: the first with low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 for a single scan and 7 × 8 mm2 for multiple scans), and the second with high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 for a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.
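
    A rough sketch of a bulk phase-compensation step of the kind referred to above, applied repeatedly; this is not the authors' exact algorithm, and the complex OCT B-frame below is synthetic.

    ```python
    # Sketch of an iterated bulk (axial) phase-compensation step of the kind
    # referred to above -- not the authors' exact algorithm.  The complex OCT
    # B-frame is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    depth, n_alines = 512, 500
    bframe = (rng.standard_normal((depth, n_alines)) +
              1j * rng.standard_normal((depth, n_alines)))   # synthetic complex B-frame

    def compensate_bulk_phase(frame, n_iter=3):
        frame = frame.copy()
        for _ in range(n_iter):
            # Amplitude-weighted phase difference between adjacent A-lines
            cross = np.sum(frame[:, 1:] * np.conj(frame[:, :-1]), axis=0)
            dphi = np.angle(cross)                      # bulk phase shift per A-line pair
            correction = np.concatenate(([0.0], np.cumsum(dphi)))
            frame *= np.exp(-1j * correction)[None, :]  # remove the bulk motion phase
        return frame

    compensated = compensate_bulk_phase(bframe)
    residual = np.angle(np.sum(compensated[:, 1:] * np.conj(compensated[:, :-1]), axis=0))
    print("max residual bulk phase (rad):", np.abs(residual).max())
    ```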

  16. New spectral imaging techniques for blood oximetry in the retina

    NASA Astrophysics Data System (ADS)

    Alabboud, Ied; Muyo, Gonzalo; Gorman, Alistair; Mordant, David; McNaught, Andrew; Petres, Clement; Petillot, Yvan R.; Harvey, Andrew R.

    2007-07-01

    Hyperspectral imaging of the retina presents a unique opportunity for direct and quantitative mapping of retinal biochemistry - particularly of the vasculature, where blood oximetry is enabled by the strong variation of absorption spectra with oxygenation. This is particularly pertinent both to research and to clinical investigation and diagnosis of retinal diseases such as diabetes, glaucoma and age-related macular degeneration. The optimal exploitation of hyperspectral imaging, however, presents a set of challenging problems, including the poorly characterised and controlled optical environment of the retinal structures to be imaged, the erratic motion of the eyeball, and the compounding effects of the optical sensitivity of the retina and the low numerical aperture of the eye. We have developed two spectral imaging techniques to address these issues. We describe first a system in which a liquid crystal tuneable filter is integrated into the illumination system of a conventional fundus camera to enable time-sequential, random-access recording of narrow-band spectral images, together with image processing techniques for eradicating the artefacts that may be introduced by time-sequential imaging. In addition, we describe a unique snapshot spectral imaging technique dubbed IRIS that employs polarising interferometry and Wollaston prism beam splitters to simultaneously replicate and spectrally filter images of the retina into multiple spectral bands on a single detector array. Results of early clinical trials acquired with these two techniques, together with a physical model that enables oximetry mapping, are reported.
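
    The basic two-wavelength oximetry calculation that underlies retinal vessel oximetry can be sketched as follows; the extinction coefficients are rough illustrative values, the optical densities are hypothetical, and scattering and path-length effects are ignored.

    ```python
    # Sketch of basic two-wavelength oximetry: solve Beer-Lambert equations for
    # oxy- and deoxyhaemoglobin at an oxygen-sensitive and a near-isosbestic
    # wavelength.  Extinction coefficients are rough illustrative values
    # (cm^-1/M), optical densities are hypothetical, scattering is ignored.
    import numpy as np

    # wavelengths:           ~600 nm (O2-sensitive), ~570 nm (near isosbestic)
    eps_hbo2 = np.array([3200.0, 44500.0])   # molar extinction of HbO2 (illustrative)
    eps_hb   = np.array([14600.0, 44800.0])  # molar extinction of Hb   (illustrative)

    # Measured optical densities of a vessel at the two wavelengths (hypothetical)
    od = np.array([0.05, 0.60])

    # OD(lambda) = (eps_HbO2 * C_HbO2 + eps_Hb * C_Hb) * L ; solve for the C*L products
    A = np.column_stack([eps_hbo2, eps_hb])
    c_hbo2_L, c_hb_L = np.linalg.solve(A, od)

    so2 = c_hbo2_L / (c_hbo2_L + c_hb_L)
    print(f"estimated oxygen saturation: {so2:.0%}")
    ```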

  17. Designing and researching of the virtual display system based on the prism elements

    NASA Astrophysics Data System (ADS)

    Vasilev, V. N.; Grimm, V. A.; Romanova, G. E.; Smirnov, S. A.; Bakholdin, A. V.; Grishina, N. Y.

    2014-05-01

    The design of near-eye virtual display systems for augmented reality (so-called head-worn displays) with light-guide prismatic elements is considered. An augmented reality system is a complex comprising an image generator (most often a microdisplay with an illumination system, if the display is not self-luminous), an objective that forms the image of the display practically at infinity, and a combiner that splits the light so that the observer sees the information on the microdisplay superimposed on the surrounding environment. This work deals with a combiner based on a composite structure of prism elements. Three cases of prism combiner design are considered, and modeling results obtained with optical design software are presented. The model addresses the question of a large pupil zone, and the discontinuous (mosaic) structure of the angular field in transferring the information from the microdisplay to the observer's eye through the prismatic structure is also discussed.

  18. Active Lymphatic Drainage From the Eye Measured by Noninvasive Photoacoustic Imaging of Near-Infrared Nanoparticles.

    PubMed

    Yücel, Yeni H; Cardinell, Kirsten; Khattak, Shireen; Zhou, Xun; Lapinski, Michael; Cheng, Fang; Gupta, Neeru

    2018-06-01

    To visualize and quantify lymphatic drainage of aqueous humor from the eye to cervical lymph nodes in the dynamic state. A near-infrared tracer was injected into the right eye anterior chamber of 10 mice under general anesthesia. Mice were imaged with photoacoustic tomography before and 20 minutes, 2, 4, and 6 hours after injection. Tracer signal intensity was measured in both eyes and right and left neck lymph nodes at every time point and signal intensity slopes were calculated. Slope differences between right and left eyes and right and left nodes were compared using paired t-test. Neck nodes were examined with fluorescence optical imaging and histologically for the presence of tracer. Following right eye intracameral injection of tracer, an exponential decrease in tracer signal was observed from 20 minutes to 6 hours in all mice. Slope differences of the signal intensity between right and left eyes were significant (P < 0.001). Simultaneously, increasing tracer signal was observed in the right neck node from 20 minutes to 6 hours. Slope differences of the signal intensity between right and left neck nodes were significant (P = 0.0051). Ex vivo optical fluorescence imaging and histopathologic examination of neck nodes confirmed tracer presence within submandibular nodes. Active lymphatic drainage of aqueous from the eye to cervical lymph nodes was measured noninvasively by photoacoustic imaging of near-infrared nanoparticles. This unique in vivo assay may help to uncover novel drugs that target alternative outflow routes to lower IOP in glaucoma and may provide new insights into lymphatic drainage in eye health and disease.
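
    A minimal sketch of the slope comparison described above (per-animal signal-intensity slopes compared between sides with a paired t-test); all values are hypothetical placeholders, not study data.

    ```python
    # Sketch: fit a signal-intensity slope over time for the injected (right)
    # and control (left) side of each animal, then compare slopes with a
    # paired t-test.  All values are hypothetical placeholders.
    import numpy as np
    from scipy.stats import ttest_rel

    time_h = np.array([0.33, 2.0, 4.0, 6.0])          # imaging time points (hours)
    # Rows: animals; columns: time points (hypothetical tracer signal intensities)
    right_node = np.array([[1.0, 2.1, 3.4, 4.2],
                           [0.8, 1.9, 2.8, 3.9],
                           [1.1, 2.4, 3.1, 4.5]])
    left_node  = np.array([[1.0, 1.1, 1.0, 1.2],
                           [0.9, 1.0, 1.1, 1.0],
                           [1.1, 1.2, 1.1, 1.3]])

    slope = lambda y: np.polyfit(time_h, y, 1)[0]
    right_slopes = np.apply_along_axis(slope, 1, right_node)
    left_slopes  = np.apply_along_axis(slope, 1, left_node)

    t, p = ttest_rel(right_slopes, left_slopes)
    print("right slopes:", right_slopes, "left slopes:", left_slopes)
    print(f"paired t-test: t={t:.2f}, p={p:.4f}")
    ```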

  19. Pigmented anatomy in Carboniferous cyclostomes and the evolution of the vertebrate eye.

    PubMed

    Gabbott, Sarah E; Donoghue, Philip C J; Sansom, Robert S; Vinther, Jakob; Dolocan, Andrei; Purnell, Mark A

    2016-08-17

    The success of vertebrates is linked to the evolution of a camera-style eye and sophisticated visual system. In the absence of useful data from fossils, scenarios for evolutionary assembly of the vertebrate eye have been based necessarily on evidence from development, molecular genetics and comparative anatomy in living vertebrates. Unfortunately, steps in the transition from a light-sensitive 'eye spot' in invertebrate chordates to an image-forming camera-style eye in jawed vertebrates are constrained only by hagfish and lampreys (cyclostomes), which are interpreted to reflect either an intermediate or degenerate condition. Here, we report-based on evidence of size, shape, preservation mode and localized occurrence-the presence of melanosomes (pigment-bearing organelles) in fossil cyclostome eyes. Time of flight secondary ion mass spectrometry analyses reveal secondary ions with a relative intensity characteristic of melanin as revealed through principal components analyses. Our data support the hypotheses that extant hagfish eyes are degenerate, not rudimentary, that cyclostomes are monophyletic, and that the ancestral vertebrate had a functional visual system. We also demonstrate integument pigmentation in fossil lampreys, opening up the exciting possibility of investigating colour patterning in Palaeozoic vertebrates. The examples we report add to the record of melanosome preservation in Carboniferous fossils and attest to surprising durability of melanosomes and biomolecular melanin. © 2016 The Authors.

  20. Pigmented anatomy in Carboniferous cyclostomes and the evolution of the vertebrate eye

    PubMed Central

    Gabbott, Sarah E.; Sansom, Robert S.; Vinther, Jakob; Dolocan, Andrei; Purnell, Mark A.

    2016-01-01

    The success of vertebrates is linked to the evolution of a camera-style eye and sophisticated visual system. In the absence of useful data from fossils, scenarios for evolutionary assembly of the vertebrate eye have been based necessarily on evidence from development, molecular genetics and comparative anatomy in living vertebrates. Unfortunately, steps in the transition from a light-sensitive ‘eye spot’ in invertebrate chordates to an image-forming camera-style eye in jawed vertebrates are constrained only by hagfish and lampreys (cyclostomes), which are interpreted to reflect either an intermediate or degenerate condition. Here, we report—based on evidence of size, shape, preservation mode and localized occurrence—the presence of melanosomes (pigment-bearing organelles) in fossil cyclostome eyes. Time of flight secondary ion mass spectrometry analyses reveal secondary ions with a relative intensity characteristic of melanin as revealed through principal components analyses. Our data support the hypotheses that extant hagfish eyes are degenerate, not rudimentary, that cyclostomes are monophyletic, and that the ancestral vertebrate had a functional visual system. We also demonstrate integument pigmentation in fossil lampreys, opening up the exciting possibility of investigating colour patterning in Palaeozoic vertebrates. The examples we report add to the record of melanosome preservation in Carboniferous fossils and attest to surprising durability of melanosomes and biomolecular melanin. PMID:27488650

  1. Hurricane Isidore

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [Figures removed for brevity; see original site.] Figure 1: AIRS channel 2333 (2616 cm-1); Figure 2: HSB channel 2 (150 GHz).

    Three different views of Hurricane Isidore from the Atmospheric Infrared Sounder (AIRS) system on Aqua.

    At the time Aqua passed over Isidore, it was classified as a Category 3 (possibly 4) hurricane, with minimum pressure of 934 mbar, maximum sustained wind speeds of 110 knots (gusting to 135) and an eye diameter of 20 nautical miles. Isidore was later downgraded to a Tropical Storm before gathering strength again.

    This is a visible/near-infrared image, made with the AIRS instrument. Its 2 km resolution shows fine details of the cloud structure, and can be used to help interpret the other images. For example, some relatively cloud-free regions in the eye of the hurricane can be distinguished. This image was made with wavelengths slightly different than those seen by the human eye, causing plants to appear very red.

    Figure 1 shows high and cold clouds in blue. Figure 2 shows heavy rain cells over Alabama in blue. This image shows the swirling clouds in white and the water of the Gulf of Mexico in blue. The eye of the hurricane is apparent in all three images.

    Figure 1 shows how the hurricane looks through an AIRS Infrared window channel. Window channels measure the temperature of the cloud tops or the surface of the Earth in clear regions. The lowest temperatures are over Alabama and are associated with high, cold cloud tops at the end of the cloud band streaming from the hurricane. Although the eye is visible, it does not appear to be completely cloud free.

    Figure 2 shows the hurricane as seen through a microwave channel of the Humidity Sounder for Brazil (HSB). This channel is sensitive to humidity, clouds and rain. Unlike the AIRS infrared channel, it can penetrate through cloud layers and therefore reveals some of the internal structure of the hurricane. In this image, the green and yellow colors indicate clouds and heavy moisture, while blue indicates scattering by precipitation in intense convection. Orange indicates warm, moist air near the surface. The ocean surface, could it be seen, would appear slightly colder (yellow to green) due to the relatively low emissivity of water. Three sets of eye walls are apparent, and a number of intense convective cells can also be distinguished.

    In the near future, weather data derived from these images will allow us to improve our forecasts and track the paths of hurricanes more accurately. The AIRS sounding system provides 2400 such images, or channels, continuously.

    The Atmospheric Infrared Sounder Experiment, with its visible, infrared, and microwave detectors, provides a three-dimensional look at Earth's weather. Working in tandem, the three instruments can make simultaneous observations all the way down to the Earth's surface, even in the presence of heavy clouds. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, 3-D map of atmospheric temperature and humidity and provides information on clouds, greenhouse gases, and many other atmospheric phenomena. The AIRS Infrared Sounder Experiment flies onboard NASA's Aqua spacecraft and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., under contract to NASA. JPL is a division of the California Institute of Technology in Pasadena.

  2. Eye Disease in Patients with Diabetes Screened with Telemedicine.

    PubMed

    Park, Dong-Wouk; Mansberger, Steven L

    2017-02-01

    Telemedicine with nonmydriatic cameras can detect not only diabetic retinopathy but also other eye disease. To determine the prevalence of eye diseases detected by telemedicine in a population with a high prevalence of minority and American Indian/Alaskan Native (AI/AN) ethnicities. We recruited diabetic patients 18 years and older and used telemedicine with nonmydriatic cameras to detect eye disease. Two trained readers graded the images for diabetic retinopathy, age-related macular degeneration (ARMD), glaucomatous features, macular edema, and other eye disease using a standard protocol. We included both eyes for analysis and excluded images that were too poor to grade. We included 820 eyes from 424 patients with 72.3% nonwhite ethnicity and 50.3% AI/AN heritage. While 283/424 (66.7%) patients had normal eye images, 120/424 (28.3%) had one disease identified; 15/424 (3.5%) had two diseases; and 6/424 (1.4%) had three diseases in one or both eyes. After diabetic retinopathy (104/424, 24.5%), the most common eye diseases were glaucomatous features (44/424, 10.4%) and dry ARMD (24/424, 5.7%). Seventeen percent (72/424, 17.0%) showed eye disease other than diabetic retinopathy. Telemedicine with nonmydriatic cameras detected diabetic retinopathy, as well as other visually significant eye disease. This suggests that a diabetic retinopathy screening program needs to detect and report other eye disease, including glaucoma and macular disease.

  3. Training time and quality of smartphone-based anterior segment screening in rural India.

    PubMed

    Ludwig, Cassie A; Newsom, Megan R; Jais, Alexandre; Myung, David J; Murthy, Somasheila I; Chang, Robert T

    2017-01-01

    We aimed to evaluate the ability of individuals without ophthalmologic training to quickly capture high-quality images of the cornea by using a smartphone and a low-cost anterior segment imaging adapter (the "EyeGo" prototype). Seven volunteers photographed 1,502 anterior segments from 751 high school students in Varni, India, by using an iPhone 5S with an attached EyeGo adapter. Primary outcome measures were the median photograph quality of the cornea and anterior segment of the eye (validated Fundus Photography vs Ophthalmoscopy Trial Outcomes in the Emergency Department [FOTO-ED] study; 1-5 scale; 5, best) and the time required to take each photograph. Volunteers were surveyed on their familiarity with using a smartphone (1-5 scale; 5, very comfortable) and comfort in assessing problems with the eye (1-5 scale; 5, very comfortable). Binomial logistic regression was performed using image quality (low quality: <4; high quality: ≥4) as the dependent variable and age, comfort using a smartphone, and comfort in assessing problems with the eye as independent variables. Six of the seven volunteers captured high-quality (median ≥4/5) images with a median time of ≤25 seconds per eye for all the eyes screened. Four of the seven volunteers demonstrated significant reductions in time to acquire photographs (P1=0.01, P5=0.01, P6=0.01, and P7=0.01), and three of the seven volunteers demonstrated significant improvements in the quality of photographs between the first 100 and last 100 eyes screened (P1<0.001, P2<0.001, and P6<0.01). Self-reported comfort using a smartphone (odds ratio [OR] = 1.25; 95% CI = 1.13 to 1.39) and self-reported comfort diagnosing eye conditions (OR = 1.17; 95% CI = 1.07 to 1.29) were significantly associated with the ability to take a high-quality image (≥4/5). There was a nonsignificant association between younger age and the ability to take a high-quality image. Individuals without ophthalmic training were able to quickly capture a high-quality magnified view of the anterior segment of the eye by using a smartphone with an attached imaging adapter.
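
    The binomial logistic regression described above can be sketched as follows; the data frame is synthetic, and statsmodels is used here only as one possible way to obtain odds ratios and confidence intervals.

    ```python
    # Sketch of a binomial logistic regression with image quality (high vs low)
    # as outcome and age plus two self-reported comfort scores as predictors.
    # The data are synthetic, not the study data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "age": rng.integers(18, 60, n),
        "comfort_phone": rng.integers(1, 6, n),     # 1-5 scale
        "comfort_eye": rng.integers(1, 6, n),       # 1-5 scale
    })
    # Hypothetical outcome loosely tied to the comfort scores
    logit = -2.0 + 0.25 * df["comfort_phone"] + 0.15 * df["comfort_eye"]
    df["high_quality"] = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(df[["age", "comfort_phone", "comfort_eye"]].astype(float))
    model = sm.Logit(df["high_quality"].astype(float), X).fit(disp=False)

    odds_ratios = np.exp(model.params)
    conf_int = np.exp(model.conf_int())
    print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
    ```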

  4. Ultra-wide-field autofluorescence imaging in non-traumatic rhegmatogenous retinal detachment

    PubMed Central

    Witmer, M T; Cho, M; Favarone, G; Paul Chan, R V; D'Amico, D J; Kiss, S

    2012-01-01

    Purpose: Rhegmatogenous retinal detachment (RRD) affects the function of the retina before and after surgical repair. We investigated ultra-wide-field autofluorescence (UAF) abnormalities in patients with acute RRD to improve our understanding of the functional changes in the retina before and after surgery. Methods: In this retrospective study, we present the UAF imaging findings of 16 patients with acute, non-traumatic RRD. Imaging was obtained with the Optos 200 Tx (Optos) in 14 eyes preoperatively and in 12 eyes postoperatively. Twelve eyes had RRDs that involved the macula (group A), whereas four eyes had macula-sparing RRDs (group B). Results: All patients (100%) with bullous retinal detachments demonstrated hypofluorescence over the area of retinal detachment. A hyperfluorescent leading edge (HLE) to the retinal detachment was observed preoperatively in 100% of eyes in group A and 75% of eyes in group B. Preoperative UAF through the fovea of group A eyes was normal (30%), hypofluorescent (50%) or hyperfluorescent (20%). In all patients with a HLE preoperatively, the HLE resolved by the 1-month postoperative visit. A residual line of demarcation remained in 8 of the 12 eyes (67%). In group A eyes, postoperative granular autofluorescent changes were present in four of the nine (44%) eyes, and were associated with worse preoperative (P=0.04) and postoperative (P=0.09) visual acuity. Conclusion: UAF imaging reveals abnormalities in RRDs that allow excellent demarcation of the extent of the retinal detachment and assist in preoperative characterization of the detachment and postoperative counselling. PMID:22722489

  5. MEMS scanner mirror based system for retina scanning and in eye projection

    NASA Astrophysics Data System (ADS)

    Woittennek, Franziska; Knobbe, Jens; Pügner, Tino; Dallmann, Hans-Georg; Schelinski, Uwe; Grüger, Heinrich

    2015-02-01

    Many applications could benefit from miniaturized systems that scan the blood vessels behind the retina in the human eye, so-called "retina scanning". These range from access control to sophisticated security applications and medical devices. High-volume systems for consumer applications require low cost and user-friendly operation. For example, there should be no need to remove glasses or to self-adjust; instead, the user's focus and point of attention are guided by a simultaneously projected image. A new system has been designed based on the well-known resonantly driven 2D scanner mirror of Fraunhofer IPMS. A combined NIR and VIS laser system illuminates the eye through an eyepiece designed for an operating distance that allows the use of glasses and grants a sufficient field of view. This usability feature was considered more important than maximum miniaturization. The modulated VIS laser facilitates the projection of an image directly onto the retina. The backscattered light from the continuous NIR laser carries the information about the blood vessels and is detected by a highly sensitive photodiode. A demonstration setup including readout and driving electronics has been realized. The laser power was adjusted to an eye-safe level, and additional safety features were integrated. Test measurements revealed promising results. In a first demonstration application, the detection of the biometric pattern of the blood vessels was evaluated for authentication purposes.

  6. Images of intravitreal objects projected onto posterior surface of model eye.

    PubMed

    Kawamura, Ryosuke; Shinoda, Kei; Inoue, Makoto; Noda, Toru; Ohnuma, Kazuhiko; Hirakata, Akito

    2013-11-01

    To try to recreate the images reported by patients during vitreous surgery in a model eye. A fluid-filled model eye with a posterior frosted translucent surface which corresponded to the retina was used. Three holes were made in the model eye through which an endoillumination pipe and intraocular forceps could be inserted. A thin plastic sheet simulating an epiretinal membrane and an intraocular lens (IOL) simulating a dislocated IOL were placed on the retina. The images falling on the posterior surface were photographed from the rear. The images seen through the surgical microscope were also recorded. The images from the rear were mirror images of those seen through the surgical microscope. Intraocular instruments were seen as black shafts from the rear. When the plastic sheet was picked up, the tip of the forceps was seen more sharply on the posterior surface. The images of the dislocated IOL from the posterior were similar to that seen through the surgical microscope, including the yellow optics and blue haptics. Intravitreal objects can form images on the surface of a model eye. Objects located closer to the surface are seen more sharply, and the colour of the objects can be identified. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    PubMed

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.
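
    As a rough illustration of the final reconstruction step (stacking segmented section boundaries into a 3D point cloud and triangulating a surface), the sketch below builds a synthetic ellipsoidal point cloud and meshes it with SciPy. A convex hull stands in for the paper's Delaunay-plus-subdivision-surface pipeline, which is a simplification, and the slice geometry is invented for illustration.

      # Sketch: boundary points sampled from serial sections are stacked into a
      # 3D point cloud and triangulated into a surface mesh. Synthetic data.
      import numpy as np
      from scipy.spatial import ConvexHull

      # Simulate segmented external boundaries from 40 serial sections (z-slices).
      points = []
      for z in np.linspace(-0.95, 0.95, 40):
          r = 0.9 * np.sqrt(1.0 - z**2)        # section radius of an ellipsoidal "eye"
          theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
          points.append(np.column_stack([r * np.cos(theta), r * np.sin(theta),
                                         np.full_like(theta, z)]))
      cloud = np.vstack(points)

      # Triangulate the cloud into a closed surface mesh.
      hull = ConvexHull(cloud)
      faces = hull.simplices                   # (M, 3) vertex indices per triangle
      # Surface area and enclosed volume are then available for morphometry or FEA meshing.
      print(f"{len(faces)} triangles, area = {hull.area:.2f}, volume = {hull.volume:.2f}")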

  8. Endoscopes with latest technology and concept.

    PubMed

    Gotoh

    2003-09-01

    Endoscopic imaging systems that serve as the "eye" of the operator during endoscopic surgical procedures have advanced rapidly thanks to various technological developments. In addition, since the turn of the century robotic surgery has increased its scope through the utilization of systems such as Intuitive Surgical's da Vinci System. To optimize the imaging required for precise robotic surgery, a unique endoscope has been developed, consisting of both a two-dimensional (2D) image optical system for wider observation of the entire surgical field and a three-dimensional (3D) image optical system for observation of the more precise details at the operative site. Additionally, a "near-infrared radiation" endoscopic system is under development to detect the sentinel lymph node more readily. Such progress in the area of endoscopic imaging is expected to enhance the surgical procedure from both the patient's and the surgeon's point of view.

  9. Wavelength dependence of the apparent diameter of retinal blood vessels

    NASA Astrophysics Data System (ADS)

    Park, Robert; Twietmeyer, Karen; Chipman, Russell; Beaudry, Neil; Salyer, David

    2005-04-01

    Imaging of retinal blood vessels may assist in the diagnosis and monitoring of diseases such as glaucoma, diabetic retinopathy, and hypertension. However, close examination reveals that the contrast and apparent diameter of vessels are dependent on the wavelength of the illuminating light. In this study multispectral images of large arteries and veins within enucleated swine eyes are obtained with a modified fundus camera by use of intravitreal illumination. The diameters of selected vessels are measured as a function of wavelength by cross-sectional analysis. A fixed scale with spectrally independent dimension is placed above the retina to isolate the chromatic effects of the imaging system and eye. Significant apparent differences between arterial and venous diameters are found, with larger diameters observed at shorter wavelengths. These differences are due primarily to spectral absorption in the cylindrical blood column.
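
    Cross-sectional analysis of this kind is usually implemented as a width measurement on the intensity profile drawn perpendicular to the vessel; a minimal sketch is given below. The half-depth width criterion and the Gaussian test profile are assumptions for illustration, not the exact algorithm used in the study.

      # Illustrative cross-sectional vessel-diameter estimate: the apparent
      # diameter is taken as the width at half the absorption depth of the
      # intensity profile. Synthetic Gaussian-shaped dark vessel profile.
      import numpy as np

      def apparent_diameter(profile, pixel_size_um):
          """Estimate vessel width (um) from a dark-vessel intensity profile."""
          background = np.median(profile)            # local retinal background level
          depth = background - profile.min()         # absorption depth of the vessel
          half_level = background - 0.5 * depth
          below = np.where(profile < half_level)[0]  # pixels darker than half depth
          return (below[-1] - below[0]) * pixel_size_um

      x = np.arange(200)
      profile = 1.0 - 0.6 * np.exp(-((x - 100) / 12.0) ** 2)   # vessel at pixel 100
      print(f"apparent diameter ~ {apparent_diameter(profile, pixel_size_um=5.0):.1f} um")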

  10. The brilliant beauty of the eye: light reflex from the cornea and tear film.

    PubMed

    Goto, Eiki

    2006-12-01

    The light reflex from the cornea and tear film as a contributor to beautiful eyes ("eye sparkling") is reviewed. A systematic literature review was conducted using "Purkinje-Sanson image," "corneal light reflex," "corneal topography," "corneal wavefront aberration," and "tear interference image" as search terms. Articles on corneal surface regularity and stability and on tear interferometry of the precorneal tear lipid layer were reviewed. The first Purkinje-Sanson (PS-1) image, that is, the light reflex from the cornea and tear film, is widely used in practical ophthalmic examination. To achieve a brilliant beauty of the eye ("eye sparkling"), it is important that the tear film (aqueous layer) surface is smooth and stable with adequate tear volume and that the tear lipid layer is present in adequate thickness.

  11. Non-contact full-field optical coherence tomography: a novel tool for in vivo imaging of the human cornea (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Mazlin, Viacheslav; Dalimier, Eugénie; Grieve, Katharine F.; Irsch, Kristina; Sahel, José-Alain; Fink, Mathias; Boccara, A. Claude

    2017-02-01

    According to the World Health Organization (WHO), corneal diseases, alongside cataract and retinal diseases, are major causes of blindness worldwide. In 95.5% of corneal blindness cases, prevention or rehabilitation could have been possible without negative consequences for vision, provided that the disease is diagnosed early. However, diagnosis at the early stage requires cellular-level resolution, which is not achieved with the routinely used slit-lamp and OCT instruments. Confocal microscopy allows examination of the cornea at a resolution approaching histological detail, but requires contact with the patient's eye. The recently developed full-field OCT technique, in which 2D en face tangential optical slices are directly recorded on a camera, has been successfully applied to ex vivo eye imaging. However, in vivo human eye imaging had not been demonstrated. Here we present a novel non-contact full-field OCT system, which is capable of imaging in air and, therefore, shows potential for in vivo cornea imaging in patients. The first cellular-level resolution ex vivo images of the cornea, obtained in a completely non-contact way, are demonstrated. We were able to scan through the entire cornea (400 µm) and resolve the epithelium, Bowman's layer, stroma and endothelium. FFOCT images of the human cornea in vivo were obtained for the first time. The epithelial structures and stromal keratocyte cells were distinguishable. Both ex vivo and in vivo images were acquired with a large (1.26 mm × 1.26 mm) field of view. The cellular detail in the obtained images makes this device a promising candidate for high-resolution in vivo corneal imaging.
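
    For context, full-field OCT en face slices are commonly demodulated from a small number of phase-shifted camera frames; the sketch below shows a generic four-phase scheme on synthetic frames. The abstract does not specify the demodulation used in this particular system, so the formula and the numbers are an illustration only.

      # Generic four-phase FFOCT demodulation on synthetic camera frames.
      import numpy as np

      def ffoct_en_face(i0, i1, i2, i3):
          """Tomographic amplitude (proportional to reflectivity) from frames
          recorded at 0, pi/2, pi and 3*pi/2 reference phase shifts."""
          return np.sqrt((i0 - i2) ** 2 + (i1 - i3) ** 2)

      rng = np.random.default_rng(1)
      shape = (256, 256)
      # Large incoherent background (identical in every frame, so it cancels).
      background = 1000.0 + rng.normal(0, 5, shape)
      amplitude = np.zeros(shape)
      amplitude[100:150, 100:150] = 20.0        # weak interference from a reflecting structure
      frames = [background + amplitude * np.cos(np.deg2rad(p)) for p in (0, 90, 180, 270)]

      en_face = ffoct_en_face(*frames)
      print(f"mean demodulated signal inside structure: {en_face[110:140, 110:140].mean():.1f}")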

  12. Topographic analyses of shape of eyes with pathologic myopia by high-resolution three-dimensional magnetic resonance imaging.

    PubMed

    Moriyama, Muka; Ohno-Matsui, Kyoko; Hayashi, Kengo; Shimada, Noriaki; Yoshida, Takeshi; Tokoro, Takashi; Morita, Ikuo

    2011-08-01

    To analyze the topography of human eyes with pathologic myopia by high-resolution magnetic resonance imaging (MRI) with volume rendering of the acquired images. Observational case series. Eighty-six eyes of 44 patients with high myopia (refractive error ≥-8.00 diopters [D] or axial length >26.5 mm) were studied. Forty emmetropic eyes were examined as controls. The participants were examined with an MRI scanner (Signa HDxt 1.5T, GE Healthcare, Waukesha, WI), and T2-weighted image cubes were obtained. Volume renderings of the images from the high-resolution 3-dimensional (3D) data were done on a computer workstation. The margins of the globes were then identified semiautomatically by signal intensity, and the tissues outside the globes were removed. The main outcome measures were the 3D topographic characteristics of the globes and the distribution of 4 distinct globe shapes, classified by the symmetry and the radius of curvature of the contour of the posterior segment: the barrel, cylindric, nasally distorted, and temporally distorted types. In 69.8% of the patients with bilateral high myopia, both eyes had the same ocular shape. The most protruded part of the globe existed along the central sagittal axis in 78.3% of eyes and was slightly inferior to the central axis in the remaining eyes. In 38 of 68 eyes (55.9%) with bilateral pathologic myopia, multiple protrusions were observed. The eyes with 2 protrusions were subdivided into those with nasal protrusions and those with temporal protrusions. The eyes with 3 protrusions were subdivided into nasal, temporal superior, and temporal inferior protrusions. The eyes with visual field defects that could not be explained by myopic fundus lesions significantly more frequently had a temporally distorted shape. Eyes with ≥2 protrusions had myopic chorioretinal atrophy significantly more frequently than eyes with ≤1 protrusion. Our results demonstrate that it is possible to obtain a complete topographic image of human eyes by high-resolution MRI with volume-rendering techniques. The results showed that there are different ocular shapes in eyes with pathologic myopia, and that the difference in ocular shape is correlated with the development of vision-threatening conditions in eyes with pathologic myopia. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
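
    The "identify the globe by signal intensity" step can be approximated, very loosely, by thresholding the bright fluid-filled globe on T2-weighted data and keeping the largest connected component, as sketched below on a synthetic volume. The threshold value and phantom geometry are invented; the authors' semiautomatic workstation procedure is not described in this abstract.

      # Very simplified intensity-threshold segmentation of a bright "globe"
      # in a synthetic T2-like volume; illustrative only.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(2)
      vol = rng.normal(100, 10, (64, 64, 64))                 # orbital-tissue background
      zz, yy, xx = np.mgrid[:64, :64, :64]
      globe = ((zz - 32)**2 + (yy - 32)**2 + (xx - 32)**2) < 20**2
      vol[globe] += 150                                       # bright vitreous on T2

      mask = vol > 180                                        # intensity threshold
      labels, n = ndimage.label(mask)
      sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
      globe_mask = labels == (np.argmax(sizes) + 1)           # keep largest component
      segmented = np.where(globe_mask, vol, 0)                # remove tissue outside the globe
      print(f"segmented globe voxels: {globe_mask.sum()} (true: {globe.sum()})")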

  13. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
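
    In conventional software form, the back-end stages that turn a beamformed RF line into a displayable B-mode line are envelope detection, log compression, and gray-level mapping; a sketch follows. This is only the generic software equivalent of such a pipeline: the paper's FPGA algorithms and parameters are not reproduced here, and the synthetic RF line is illustrative.

      # Generic B-mode back-end for one A-line: envelope detection (Hilbert),
      # log compression, and mapping to 8-bit gray levels. Synthetic RF data.
      import numpy as np
      from scipy.signal import hilbert

      def rf_to_bmode_line(rf_line, dynamic_range_db=50.0):
          """Convert one RF A-line to 8-bit B-mode gray levels."""
          envelope = np.abs(hilbert(rf_line))                 # envelope detection
          envelope /= envelope.max() + 1e-12                  # normalize to peak
          log_env = 20.0 * np.log10(envelope + 1e-12)         # log compression (dB)
          log_env = np.clip(log_env, -dynamic_range_db, 0.0)
          return ((log_env + dynamic_range_db) / dynamic_range_db * 255).astype(np.uint8)

      # Synthetic 40 MHz RF line sampled at 200 MHz with two point reflectors.
      fs, fc = 200e6, 40e6
      t = np.arange(2000) / fs
      rf = np.zeros_like(t)
      for delay, amp in [(2e-6, 1.0), (6e-6, 0.3)]:
          rf += amp * np.exp(-((t - delay) / 50e-9) ** 2) * np.cos(2 * np.pi * fc * (t - delay))
      print(rf_to_bmode_line(rf)[:10])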

  14. EYE DEVELOPMENT

    PubMed Central

    Baker, Nicholas E.; Li, Ke; Quiquand, Manon; Ruggiero, Robert; Wang, Lan-Hsin

    2014-01-01

    The eye has been one of the most intensively studied organs in Drosophila. The wealth of knowledge about its development, as well as the reagents that have been developed, and the fact that the eye is dispensable for survival, also make the eye suitable for genetic interaction studies and genetic screens. This chapter provides a brief overview of the methods developed to image and probe eye development at multiple developmental stages, including live imaging, immunostaining of fixed tissues, in situ hybridizations, and scanning electron microscopy and color photography of adult eyes. Also summarized are genetic approaches that can be performed in the eye, including mosaic analysis and conditional mutation, gene misexpression and knockdown, and forward genetic and modifier screens. PMID:24784530

  15. Normative database of donor keratographic readings in an eye-bank setting.

    PubMed

    Lewis, Jennifer R; Bogucki, Jennifer M; Mahmoud, Ashraf M; Lembach, Richard G; Roberts, Cynthia J

    2010-04-01

    To generate a normative donor topographic database from rasterstereography images of whole globes acquired in an eye-bank setting with minimal manipulation or handling. Eye-bank laboratory. In a retrospective study, rasterstereography topographic images that had been prospectively collected in duplicate of donor eyes received by the Central Ohio Lions Eye Bank between 1997 and 1999 were analyzed. Best-fit sphere (BFS) and simulated keratometry (K) values were extracted. These values were recalculated after application of custom software to correct any tilt of the mapped surfaces relative to the image plane. The mean value variances between right eyes and left eyes, between consecutive scans, and after untilting were analyzed by repeated-measures analysis of variance and t tests (P>.05, Kolmogorov-Smirnov). There was no difference between right and left eyes or consecutive scans (P>.05). The mean values changed when the images were tilt-corrected (P<.05). The right eye BFS, Kflat, and Ksteep values of 42.03 diopters (D) ± 1.88 (SD), 42.21 ± 2.10 D, and 43.82 ± 2.00 D, respectively, increased to 42.52 ± 1.73 D, 43.05 ± 1.99 D, and 44.57 ± 2.02 D, respectively, after tilt correction. Keratometric parameter frequency distributions from the donor database of tilt-corrected data were normal in distribution and comparable to parameters reported for normal eyes in a living population. These findings show the feasibility and reliability of routine donor-eye topography by rasterstereography. No author has a financial or proprietary interest in any material or method mentioned. Additional disclosures are found in the footnotes. Copyright © 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
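
    For readers unfamiliar with the units, simulated keratometry values of this kind are conventionally derived from the corneal radius of curvature using the keratometric index 1.3375, i.e. K (D) = 337.5 / r (mm); a one-line check is shown below. Whether this exact convention was used by the topography software in the study is an assumption.

      # Standard keratometric convention: K = (n_k - 1) / r, with n_k = 1.3375.
      def keratometric_power(radius_mm, n_k=1.3375):
          return (n_k - 1.0) / (radius_mm / 1000.0)

      print(f"{keratometric_power(7.8):.2f} D")   # a 7.8 mm radius cornea reads ~43.3 D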

  16. Evaluation of lens absorbed dose with Cone Beam IGRT procedures.

    PubMed

    Palomo, R; Pujades, M C; Gimeno-Olmos, J; Carmona, V; Lliso, F; Candela-Juan, C; Vijande, J; Ballester, F; Perez-Calatayud, J

    2015-12-01

    The purpose of this work is to evaluate the absorbed dose to the eye lenses due to the cone beam computed tomography (CBCT) system used to accurately position the patient during head-and-neck image-guided procedures. The on-board imaging (OBI) systems (v.1.5) of Clinac iX and TrueBeam (Varian) accelerators were used to evaluate the dose imparted to the eye lenses and some additional points of the head. All CBCT scans were acquired with the Standard-Dose Head protocol from Varian. Doses were measured using thermoluminescence dosimeters (TLDs) placed in an anthropomorphic phantom. TLDs were calibrated at the beam quality used in order to reduce their energy dependence. The average doses to the lens due to the OBI systems of the Clinac iX and the TrueBeam were 0.71 ± 0.07 mGy/CBCT and 0.70 ± 0.08 mGy/CBCT, respectively. The extra absorbed dose received by the eye lenses due to one CBCT acquisition with the studied protocol is far below the 500 mGy threshold established by the ICRP for cataract formation (ICRP 2011 Statement on Tissue Reactions). However, the cumulative effect of several CBCT acquisitions during the whole treatment should be taken into account.
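
    A quick check of the cumulative-dose point: even with daily CBCT over a long fractionated course, the lens dose stays well below the ICRP threshold. The 35-fraction course in the snippet is a hypothetical example, not a figure from the study.

      # Cumulative lens dose from repeated CBCT vs the ICRP cataract threshold.
      dose_per_cbct_mgy = 0.71     # measured lens dose per Standard-Dose Head CBCT
      fractions = 35               # hypothetical daily-imaging treatment course
      cumulative = dose_per_cbct_mgy * fractions
      print(f"cumulative lens dose ~ {cumulative:.1f} mGy vs 500 mGy threshold")
      # -> roughly 25 mGy, about 5% of the threshold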

  17. Effect of H-7 on secondary cataract after phacoemulsification in the live rabbit eye.

    PubMed

    Tian, Baohe; Heatley, Gregg A; Filla, Mark S; Kaufman, Paul L

    2010-12-01

    This study aimed to determine whether the serine-threonine kinase inhibitor H-7 inhibits secondary cataract after phacoemulsification in the live rabbit eye. Eighteen rabbits underwent extracapsular lens extraction by phacoemulsification in 1 eye. The eye was treated with intravitreal H-7 (300 or 1,200 μM; n = 6 or 5) or balanced salt solution (BSS) (n = 7) immediately after the surgery and twice weekly for 10 weeks. Each eye received slit-lamp biomicroscopy once a week, during which posterior capsule opacification (PCO) was evaluated. The eye was then enucleated and the lens capsule was prepared, fixed, and imaged. PCO was evaluated again on the isolated lens capsule under a phase microscope. Soemmering's ring area (SRA) and the entire lens capsule area were measured from capsule images on a computer, and the percentage of SRA (PSRA) in the entire capsule area was calculated. Wet weight of the capsule (WW) was determined on a balance. No significant difference in PCO was observed in any comparison. No significant differences in SRA, PSRA, and WW were observed between the 300 μM H-7-treated eye and the BSS-treated eye. However, SRA, PSRA, and WW in the 1,200 μM H-7-treated eye were significantly smaller than those in the BSS-treated eye [28.3 ± 16.2 vs. 61.4 ± 8.86 mm² (P = 0.001), 33% ± 20% vs. 65% ± 15% (P = 0.01), and 65.6 ± 27.9 vs. 127.0 ± 37.3 mg (P = 0.01)]. Intravitreal H-7 (1,200 μM) significantly inhibits Soemmering's ring formation in the live rabbit eye, suggesting that agents that inhibit the actomyosin system in cells may prevent secondary cataract after phacoemulsification.

  18. Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.

    PubMed

    Mustari, Michael J

    2017-12-01

    Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  19. Ex vivo magnetic resonance imaging of crystalline lens dimensions in chicken.

    PubMed

    Tattersall, Rebecca J; Prashar, Ankush; Singh, Krish D; Tokarczuk, Pawel F; Erichsen, Jonathan T; Hocking, Paul M; Guggenheim, Jeremy A

    2010-02-02

    A reduction in the power of the crystalline lens during childhood is thought to be important in the emmetropization of the maturing eye. However, in humans and model organisms, little is known about the factors that determine the dimensions of the crystalline lens and, in particular, whether these different parameters (axial thickness, surface curvatures, equatorial diameter, and volume) are under a common source of control or regulated independently of other aspects of eye size and shape. Using chickens from a broiler-layer experimental cross as a model system, three-dimensional magnetic resonance imaging (MRI) scans were obtained at 115 µm isotropic resolution for one eye of 501 individuals aged 3 weeks. After fixation with paraformaldehyde, the excised eyes were scanned overnight (16 h) in groups of 16 arranged in a 2×2×4 array. Lens dimensions were calculated from each image by fitting a three-dimensional mesh model to the lens, using the semi-automated analysis program mri3dX. The lens dimensions were compared to measures of eye and body size obtained in vivo using techniques that included keratometry and A-scan ultrasonography. A striking finding was that axial lens thickness measured using ex vivo MRI was only weakly correlated with lens thickness measured in vivo by ultrasonography (r=0.19, p<0.001). In addition, the MRI lens thickness estimates had a lower mean value and much higher variance. Indeed, about one-third of crystalline lenses showed a kidney-shaped appearance instead of the typical biconvex shape. Since repeat MRI scans of the same eye showed a high degree of reproducibility for the scanning and mri3dX analysis steps (the correlation in repeat lens thickness measurements was r=0.95, p<0.001) and a recent report has shown that paraformaldehyde fixation induces a loss of water from the human crystalline lens, it is likely that the tissue fixation step caused a variable degree of shrinkage and a change in shape to the lenses examined here. Despite this serious source of imprecision, we found significant correlations between lens volume and eye/body size (p<0.001) and between lens equatorial diameter and eye/body size (p<0.001) in these chickens. Our results suggest that certain aspects of lens size (specifically, lens volume and equatorial diameter) are controlled by factors that also regulate the size of the eye and body (presumably, predominantly genetic factors). However, since it has been shown previously that axial lens thickness is regulated almost independently of eye and body size, these results suggest that different systems might operate to control lens volume/diameter and lens thickness in normal chickens.
