Variable field-of-view visible and near-infrared polarization compound-eye endoscope.
Kagawa, K; Shogenji, R; Tanaka, E; Yamada, K; Kawahito, S; Tanida, J
2012-01-01
A multi-functional compound-eye endoscope enabling variable field-of-view and polarization imaging as well as extremely deep focus is presented, based on a compact compound-eye camera called TOMBO (thin observation module by bound optics). Fixed and movable mirrors are introduced to control the field of view. A metal-wire-grid polarizer thin film, applicable to both visible and near-infrared light, is attached to the lenses in TOMBO and to the light sources. Controlling the field of view, polarization and wavelength of the illumination realizes several observation modes, such as three-dimensional shape measurement, wide field-of-view observation, and close-up observation of superficial tissues and of structures beneath the skin.
Fabrication of multi-focal microlens array on curved surface for wide-angle camera module
NASA Astrophysics Data System (ADS)
Pan, Jun-Gu; Su, Guo-Dung J.
2017-08-01
In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye: it contains a curved hexagonal microlens array and a spherical lens. Whereas normal mobile phone cameras usually need no fewer than four lenses, our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and its diagonal full field of view is about 100 degrees. To make the critical microlens array, we used inkjet printing to control the surface shape of each microlens, achieving different focal lengths, and used a replication method to form the curved hexagonal microlens array.
Bio-inspired hemispherical compound eye camera
NASA Astrophysics Data System (ADS)
Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.
2014-03-01
Compound eyes in arthropods demonstrate imaging characteristics distinct from human eyes, with a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. Artificial imaging systems with similar geometries and properties are of great interest for many applications. However, the challenges in building such systems with hemispherical, compound apposition layouts cannot be met through established planar sensor technologies and conventional optics. We present our recent progress in combining optics, materials, mechanics and integration schemes to build fully functional artificial compound eye cameras. Nearly full hemispherical shapes (about 160 degrees) with densely packed artificial ommatidia were realized. The number of ommatidia (180) is comparable to that of the eyes of fire ants and bark beetles. The devices combine elastomeric compound optical elements with deformable arrays of thin silicon photodetectors, which were fabricated in planar geometries and then integrated and elastically transformed into hemispherical shapes. Imaging results and quantitative ray-tracing-based simulations illustrate key features of operation. These general strategies seem to be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).
The visual system of male scale insects
NASA Astrophysics Data System (ADS)
Buschbeck, Elke K.; Hauser, Martin
2009-03-01
Animal eyes generally fall into two categories: (1) their photoreceptive array is convex, as is typical for camera eyes, including the human eye, or (2) their photoreceptive array is concave, as is typical for the compound eye of insects. There are a few rare examples of the latter eye type having secondarily evolved into the former one. When viewed in a phylogenetic framework, the head morphology of a variety of male scale insects suggests that this group could be one such example. In the Margarodidae (Hemiptera, Coccoidea), males have been described as having compound eyes, while males of some more derived groups only have two single-chamber eyes on each side of the head. Those eyes are situated in the place occupied by the compound eye of other insects. Since male scale insects tend to be rare, little is known about how their visual systems are organized, and what anatomical traits are associated with this evolutionary transition. In adult male Margarodidae, one single-chamber eye (stemmateran ocellus) is present in addition to a compound eye-like region. Our histological investigation reveals that the stemmateran ocellus has an extended retina which is formed by discrete clusters of receptor cells that connect to its own first-order neuropil. In addition, we find that the ommatidia of the compound eyes also share several anatomical characteristics with simple camera eyes. These include shallow units with extended retinas, each of which is connected by its own small nerve to the lamina. These anatomical changes suggest that the margarodid compound eye represents a transitional form to the giant unicorneal eyes that have been described in more derived species.
Micro-optical artificial compound eyes.
Duparré, J W; Wippermann, F C
2006-03-01
Natural compound eyes combine small eye volumes with a large field of view at the cost of comparatively low spatial resolution. For small invertebrates such as flies or moths, compound eyes are the perfectly adapted solution to obtaining sufficient visual information about their environment without overloading their brains with the necessary image processing. However, to date little effort has been made to adopt this principle in optics. Classical imaging always had its archetype in natural single-aperture eyes, on which, for example, human vision is based. But a high-resolution image is not always required; often the focus is on very compact, robust and cheap vision systems. The main question is consequently: what is the better approach for extremely miniaturized imaging systems, just scaling of classical lens designs, or being inspired by alternative imaging principles evolved by nature in the case of small insects? In this paper, it is shown that such optical systems can be achieved using state-of-the-art micro-optics technology. This enables the generation of highly precise and uniform microlens arrays and their accurate alignment to the subsequent optics, spacer and optoelectronics structures. The results are thin, simple and monolithic imaging devices fabricated with the high accuracy of photolithography. Two different artificial compound eye concepts for compact vision systems have been investigated in detail: the artificial apposition compound eye and the cluster eye. Novel optical design methods and characterization tools were developed to allow the layout and experimental testing of the planar micro-optical imaging systems, which were fabricated for the first time by micro-optics technology. The artificial apposition compound eye can be considered a simple imaging optical sensor, while the cluster eye is capable of becoming a valid alternative to classical bulk objectives, although it is much more complex than the first system.
Arthropod eye-inspired digital camera with unique imaging characteristics
NASA Astrophysics Data System (ADS)
Xiao, Jianliang; Song, Young Min; Xie, Yizhu; Malyarchuk, Viktor; Jung, Inhwa; Choi, Ki-Joong; Liu, Zhuangjian; Park, Hyunsung; Lu, Chaofeng; Kim, Rak-Hwan; Li, Rui; Crozier, Kenneth B.; Huang, Yonggang; Rogers, John A.
2014-06-01
In nature, arthropods have a remarkably sophisticated class of imaging systems, with a hemispherical geometry, a wide-angle field of view, low aberrations, high acuity to motion and an infinite depth of field. There is great interest in building systems with similar geometries and properties due to numerous potential applications. However, the established semiconductor sensor technologies and optics are essentially planar and face great challenges in building such systems with hemispherical, compound apposition layouts. With the recent advancement of stretchable optoelectronics, we have successfully developed strategies to build a fully functional artificial apposition compound eye camera by combining optics, materials and mechanics principles. The strategies start with fabricating stretchable arrays of thin silicon photodetectors and elastomeric optical elements in planar geometries, which are then precisely aligned, integrated, and elastically transformed into hemispherical shapes. This imaging device demonstrates a nearly full hemispherical shape (about 160 degrees), with densely packed artificial ommatidia. The number of ommatidia (180) is comparable to that of the eyes of fire ants and bark beetles. We have illustrated key features of the operation of compound eyes through experimental imaging results and quantitative ray-tracing-based simulations. The general strategies shown in this development could be applicable to other compound eye devices, such as those inspired by moths and lacewings (refracting superposition eyes), lobster and shrimp (reflecting superposition eyes), and houseflies (neural superposition eyes).
2011-01-01
Background Coleoid cephalopods (squids and octopuses) have evolved a camera eye, the structure of which is very similar to that found in vertebrates and which is considered a classic example of convergent evolution. Other molluscs, however, possess mirror, pin-hole, or compound eyes, all of which differ from the camera eye in the degree of complexity of the eye structures and neurons participating in the visual circuit. Therefore, genes expressed in the cephalopod eye after divergence from the common molluscan ancestor could be involved in eye evolution through association with the acquisition of new structural components. To clarify the genetic mechanisms that contributed to the evolution of the cephalopod camera eye, we applied comprehensive transcriptomic analysis and conducted developmental validation of candidate genes involved in coleoid cephalopod eye evolution. Results We compared gene expression in the eyes of 6 molluscan (3 cephalopod and 3 non-cephalopod) species and selected 5,707 genes as cephalopod camera eye-specific candidate genes on the basis of homology searches against 3 molluscan species without camera eyes. First, we confirmed the expression of these 5,707 genes in the cephalopod camera eye formation processes by developmental array analysis. Second, using molecular evolutionary (dN/dS) analysis to detect positive selection in the cephalopod lineage, we identified 156 of these genes in which functions appeared to have changed after the divergence of cephalopods from the molluscan ancestor and which contributed to structural and functional diversification. Third, we selected 1,571 genes, expressed in the camera eyes of both cephalopods and vertebrates, which could have independently acquired a function related to eye development at the expression level. Finally, as experimental validation, we identified three functionally novel cephalopod camera eye genes related to optic lobe formation in cephalopods by in situ hybridization analysis of embryonic pygmy squid. Conclusion We identified 156 genes positively selected in the cephalopod lineage and 1,571 genes commonly found in the cephalopod and vertebrate camera eyes from the analysis of cephalopod camera eye specificity at the expression level. Experimental validation showed that the cephalopod camera eye-specific candidate genes include those expressed in the outer part of the optic lobes, which is unique to coleoid cephalopods. The results of this study suggest that changes in gene expression and in the primary structure of proteins (through positive selection) from those in the common molluscan ancestor could have contributed, at least in part, to cephalopod camera eye acquisition. PMID:21702923
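To make the positive-selection step above concrete, the following minimal Python sketch flags genes whose dN/dS ratio exceeds 1, given precomputed per-gene substitution rates. It is illustrative only: the gene names and rates are hypothetical placeholders, and the actual study would estimate branch-specific rates with a dedicated phylogenetics package rather than this simple filter.

```python
# Illustrative sketch (not the authors' pipeline): flag genes under putative
# positive selection using precomputed nonsynonymous (dN) and synonymous (dS)
# substitution rates. Gene names and rates are hypothetical.

def positively_selected(genes, min_ds=1e-6):
    """Return (name, omega) for genes with omega = dN/dS > 1."""
    hits = []
    for name, dn, ds in genes:
        if ds > min_ds and dn / ds > 1.0:   # omega > 1 suggests positive selection
            hits.append((name, dn / ds))
    return hits

candidates = [("geneA", 0.42, 0.18), ("geneB", 0.05, 0.30)]
print(positively_selected(candidates))      # [('geneA', 2.333...)]
```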
Oliphant, Huw; Kennedy, Alasdair; Comyn, Oliver; Spalton, David J; Nanavaty, Mayank A
2018-06-16
To compare slit-lamp mounted cameras (SLC) versus a digital compact camera (DCC) with slit-lamp adaptor when used by an inexperienced technician. In this cross-sectional study, in which posterior capsule opacification (PCO) was used as a comparator, patients were consented for one photograph with the SLC and two with the DCC (DCC1 and DCC2), taken with a slit-lamp adaptor. One eye of each patient was recruited; an inexperienced clinic technician took all the photographs and masked the images. Images were graded for PCO using EPCO2000 software by two independent masked graders. Repeatability between DCC1 and DCC2 and limits of agreement between the SLC and DCC1 mounted on the slit lamp with an adaptor were assessed. The coefficient of repeatability and Bland-Altman plots were analyzed. Seventy-two patients (eyes) were recruited in the study. The first 9 patients (eyes) were excluded due to unsatisfactory image quality from both systems. The mean EPCO score for the SLC was 2.28 (95% CI: 2.09-2.45), for DCC1 was 2.28 (95% CI: 2.11-2.45), and for DCC2 was 2.11 (95% CI: 2.11-2.45). There was no significant difference in EPCO scores between SLC vs. DCC1 (p = 0.98) or between DCC1 and DCC2 (p = 0.97). The coefficient of repeatability between DCC images was 0.42, and the coefficient of repeatability between DCC and SLC was 0.58. A DCC on a slit lamp with an adaptor is comparable to an SLC. There is an initial learning curve, which is similar for both systems for an inexperienced user. This opens up the possibility of low-cost anterior segment imaging in clinical, research and teaching settings.
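The repeatability statistics reported above are straightforward to reproduce. The sketch below computes the coefficient of repeatability (1.96 times the standard deviation of the paired differences) and Bland-Altman limits of agreement; the paired EPCO scores are hypothetical placeholders, not the study data.

```python
import numpy as np

def repeatability(scores_a, scores_b):
    """Coefficient of repeatability and Bland-Altman limits of agreement."""
    d = np.asarray(scores_a) - np.asarray(scores_b)
    bias = d.mean()                        # mean difference between methods
    sd = d.std(ddof=1)
    cor = 1.96 * sd                        # coefficient of repeatability
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return cor, bias, loa

# Hypothetical paired EPCO scores (e.g. DCC1 vs DCC2)
a = [2.1, 2.4, 1.9, 2.6, 2.2]
b = [2.0, 2.5, 2.1, 2.4, 2.3]
print(repeatability(a, b))
```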
Intraocular camera for retinal prostheses: Refractive and diffractive lens systems
NASA Astrophysics Data System (ADS)
Hauer, Michelle Christine
The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.
Quasi-Elastic Light Scattering in Ophthalmology
NASA Astrophysics Data System (ADS)
Ansari, Rafat R.
The eye is not just a "window to the soul"; it can also be a "window to the human body." The eye is built like a camera. Light which travels from the cornea to the retina traverses through tissues that are representative of nearly every tissue type and fluid type in the human body. Therefore, it is possible to diagnose ocular and systemic diseases through the eye. Quasi-elastic light scattering (QELS) also known as dynamic light scattering (DLS) is a laboratory technique routinely used in the characterization of macromolecular dispersions. QELS instrumentation has now become more compact, sensitive, flexible, and easy to use. These developments have made QELS/DLS an important tool in ophthalmic research where disease can be detected early and noninvasively before the clinical symptoms appear.
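As background for how QELS/DLS characterizes a dispersion, the sketch below converts a measured autocorrelation decay rate into a hydrodynamic radius via the Stokes-Einstein relation. The numerical inputs (decay rate, HeNe wavelength, water viscosity, 90° scattering) are illustrative assumptions, not values from this article.

```python
import numpy as np

def hydrodynamic_radius(gamma, wavelength_m, n, theta_rad, T=298.15, eta=8.9e-4):
    """Stokes-Einstein radius from a DLS field-correlation decay rate Gamma (1/s),
    using g1(tau) = exp(-Gamma * tau) with Gamma = D * q**2."""
    kB = 1.380649e-23                              # Boltzmann constant, J/K
    q = 4 * np.pi * n / wavelength_m * np.sin(theta_rad / 2)   # scattering vector
    D = gamma / q**2                               # diffusion coefficient, m^2/s
    return kB * T / (6 * np.pi * eta * D)          # hydrodynamic radius, metres

# Hypothetical numbers: HeNe laser in water at 90-degree scattering angle
print(hydrodynamic_radius(gamma=2.0e3, wavelength_m=632.8e-9,
                          n=1.33, theta_rad=np.pi / 2))   # ~4e-8 m (~40 nm)
```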
NASA Astrophysics Data System (ADS)
Lai, Xiaochun; Meng, Ling-Jian
2018-02-01
In this paper, we present simulation studies for the second-generation MRI-compatible SPECT system, MRC-SPECT-II, based on an inverted compound eye (ICE) gamma camera concept. The MRC-SPECT-II system consists of a total of 1536 independent micro-pinhole-camera elements (MCEs) distributed in a ring with an inner diameter of 6 cm. This system provides a FOV of 1 cm diameter and a peak geometrical efficiency of approximately 1.3% (compared with the typical levels of 0.1%-0.01% found in modern pre-clinical SPECT instrumentation), while maintaining a sub-500 μm spatial resolution. Compared to the first-generation MRC-SPECT system (MRC-SPECT-I) (Cai 2014 Nucl. Instrum. Methods Phys. Res. A 734 147-51) developed in our lab, the MRC-SPECT-II system offers a similar resolution with dramatically improved sensitivity and a greatly reduced physical dimension. The latter should allow the system to be placed inside most clinical and pre-clinical MRI scanners for high-performance simultaneous MRI and SPECT imaging.
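A rough consistency check on the quoted efficiency can be made with the classic on-axis pinhole approximation g ≈ d²/(16b²), summed over all camera elements. The pinhole diameter used below is an assumed, illustrative value (the abstract does not state it); only the element count and ring radius come from the paper.

```python
def pinhole_efficiency(d_mm, b_mm):
    """On-axis geometric efficiency of one pinhole, classic d^2 / (16 b^2)."""
    return d_mm**2 / (16.0 * b_mm**2)

n_mce = 1536    # micro-pinhole camera elements (from the paper)
b = 30.0        # source-to-aperture distance, mm (ring inner radius 3 cm)
d = 0.35        # ASSUMED pinhole diameter, mm, chosen for illustration
total = n_mce * pinhole_efficiency(d, b)
print(f"total geometric efficiency ~ {total:.2%}")   # ~1.3% for these inputs
```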
Murine fundus fluorescein angiography: An alternative approach using a handheld camera.
Ehrenberg, Moshe; Ehrenberg, Scott; Schwob, Ouri; Benny, Ofra
2016-07-01
In today's modern pharmacologic approach to treating sight-threatening retinal vascular disorders, there is an increasing demand for a compact, mobile, lightweight and cost-effective fluorescein fundus camera to document the effects of antiangiogenic drugs on laser-induced choroidal neovascularization (CNV) in mice and other experimental animals. We have adapted the use of the Kowa Genesis Df Camera to perform Fundus Fluorescein Angiography (FFA) in mice. The 1 kg, 28 cm high camera has built-in barrier and exciter filters to allow digital FFA recording to a Compact Flash memory card. Furthermore, this handheld unit has a steady Indirect Lens Holder that firmly attaches to the main unit, that securely holds a 90 diopter lens in position, in order to facilitate appropriate focus and stability, for photographing the delicate central murine fundus. This easily portable fundus fluorescein camera can effectively record exceptional central retinal vascular detail in murine laser-induced CNV, while readily allowing the investigator to adjust the camera's position according to the variable head and eye movements that can randomly occur while the mouse is optimally anesthetized. This movable image recording device, with efficiencies of space, time, cost, energy and personnel, has enabled us to accurately document the alterations in the central choroidal and retinal vasculature following induction of CNV, implemented by argon-green laser photocoagulation and disruption of Bruch's Membrane, in the experimental murine model of exudative macular degeneration.
An Insect Eye Inspired Miniaturized Multi-Camera System for Endoscopic Imaging.
Cogal, Omer; Leblebici, Yusuf
2017-02-01
In this work, we present a miniaturized high definition vision system inspired by insect eyes, with a distributed illumination method, which can work in dark environments for proximity imaging applications such as endoscopy. Our approach is based on modeling biological systems with off-the-shelf miniaturized cameras combined with digital circuit design for real-time image processing. We built a 5 mm radius hemispherical compound eye, imaging a 180° × 180° field of view while providing more than 1.1 megapixels (emulated ommatidia) as real-time video with an inter-ommatidial angle Δφ = 0.5° at an 18 mm radial distance. We made an FPGA implementation of the image processing system which is capable of generating 25 fps video with 1080 × 1080 pixel resolution at a 120 MHz processing clock frequency. When compared to similar-size insect eye mimicking systems in the literature, the system proposed in this paper features a 1000× resolution increase. To the best of our knowledge, this is the first time that a compound eye with built-in illumination has been reported. We are offering our miniaturized imaging system for endoscopic applications like colonoscopy or laparoscopic surgery where there is a need for large field-of-view, high-definition imagery. For that purpose, we tested our system inside a human colon model. We also present the resulting images and videos from the human colon model in this paper.
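A quick back-of-envelope sketch from the numbers quoted above: the per-pixel angular sampling of the output video, and the lateral spot size implied by the stated inter-ommatidial angle at the stated working distance. This is a reader's consistency check, not a computation from the paper.

```python
import numpy as np

fov_deg = 180.0        # field of view per axis (from the abstract)
n_pix = 1080           # output video resolution per axis
delta_phi_deg = 0.5    # stated inter-ommatidial angle
r_mm = 18.0            # stated radial working distance

per_pixel_deg = fov_deg / n_pix                      # ~0.167 deg per pixel
spot_mm = r_mm * np.tan(np.radians(delta_phi_deg))   # lateral sampling at 18 mm
print(per_pixel_deg, spot_mm)                        # ~0.167 deg, ~0.157 mm
```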
Optics design of laser spotter camera for ex-CCD sensor
NASA Astrophysics Data System (ADS)
Nautiyal, R. P.; Mishra, V. K.; Sharma, P. K.
2015-06-01
The development of laser-based instruments such as laser range finders and laser designators has gained prominence in modern military applications. Aiming the laser at the target is done with the help of a bore-sighted graticule, as the human eye cannot see the laser beam directly. To view the laser spot, two types of detectors are available, InGaAs detectors and Ex-CCD detectors, the latter being a cost-effective solution. In this paper, the optics design for an Ex-CCD-based camera is discussed. The designed system is lightweight and compact and is able to see the 1064 nm pulsed laser spot up to a range of 5 km.
Adaptive optics with pupil tracking for high resolution retinal imaging
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-01-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577
NASA Astrophysics Data System (ADS)
Topakova, Anastassia A.; Salmin, Vladimir V.; Gar'kavenko, Victor V.; Levchenko, Julia S.; Lazarenko, Victor I.
2016-04-01
Fluorimetry of the eye is a promising technique for research and diagnostics in ophthalmology. This is due to the structural and functional characteristics of the eye, which is itself an optical system that can transfer radiation both for excitation and for registration of fluorescence in different compartments of the eye: the cornea, lens, vitreous body, and fundus. At present, different models of ophthalmologic fluorophotometers for the analysis of eye fluorescence, as well as more advanced scanning fluorophotometers, are offered. Assessment of corneal status in persons wearing contact lenses or in patients with pathological changes (e.g. diabetes mellitus) would give us an opportunity to identify the initial manifestations of corneal pathology at the pre-symptomatic phase. In this paper, we present data on a compact spectrofluorimeter with UV-LED-induced excitation, as well as a method for assessing hypoxic alterations in the limb zone of the eye caused by contact lens wearing. We demonstrate the dependence of autofluorescence spectra on the contact lens type and the duration of permanent wearing.
The Photoluminescence of a Fluorescent Lamp: Didactic Experiments on the Exponential Decay
ERIC Educational Resources Information Center
Onorato, Pasquale; Gratton, Luigi; Malgieri, Massimiliano; Oss, Stefano
2017-01-01
The lifetimes of the photoluminescent compounds contained in the coating of fluorescent compact lamps are usually measured using specialised instruments, including pulsed lasers and/or spectrofluorometers. Here we discuss how some low cost apparatuses, based on the use of either sensors for the educational lab or commercial digital photo cameras,…
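The kind of measurement this article describes reduces to fitting an exponential decay, I(t) = I0·exp(-t/τ), to frame-averaged intensities recorded after the lamp is switched off. A minimal sketch with hypothetical sample values rather than real measurements:

```python
import numpy as np

# Hypothetical mean-intensity samples of a lamp phosphor captured at fixed
# frame intervals after switch-off; fit I(t) = I0 * exp(-t / tau).
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0]) * 1e-3    # time, s
I = np.array([200., 122., 74., 45., 27., 17.])          # intensity, arb. units

# The slope of ln(I) versus t is -1/tau, so a linear fit recovers the lifetime.
tau = -1.0 / np.polyfit(t, np.log(I), 1)[0]
print(f"phosphor lifetime ~ {tau * 1e3:.2f} ms")        # ~4 ms for these values
```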
Measuring high-resolution sky luminance distributions with a CCD camera.
Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther
2013-03-10
We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge-coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated against luminance data measured by a CCD array spectroradiometer. The deviation between the two datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with a high spatial resolution of more than a million pixels and a temporal resolution of 20 s.
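For readers unfamiliar with the "nearly equidistant" projection mentioned above: in an equidistant fisheye, the radial distance from the image centre grows linearly with zenith angle, so each pixel maps to a sky direction as in the sketch below. The calibration numbers (image centre, focal length in pixels) are hypothetical.

```python
import numpy as np

def pixel_to_sky(u, v, cx, cy, f_pix):
    """Map a fisheye pixel (u, v) to (zenith, azimuth) in radians for an
    equidistant projection, where r = f * theta."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    zenith = r / f_pix                 # equidistant: radius proportional to zenith
    azimuth = np.arctan2(dy, dx)
    return zenith, azimuth

# Hypothetical calibration: image centre (960, 960); f chosen so that
# r = 960 px corresponds to the horizon (zenith = pi/2).
print(pixel_to_sky(1500, 960, 960, 960, f_pix=960 / (np.pi / 2)))
```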
NASA Astrophysics Data System (ADS)
Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas
2011-03-01
This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4° × 4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging raw images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.
Ogura, Atsushi; Ikeo, Kazuho; Gojobori, Takashi
2004-01-01
Although the camera eye of the octopus is very similar to that of humans, phylogenetic and embryological analyses have suggested that their camera eyes were acquired independently; this is a classic example of convergent evolution. To study the molecular basis of this convergent evolution, we conducted a comparative analysis of gene expression in octopus and human camera eyes. We sequenced 16,432 ESTs of the octopus eye, leading to 1052 nonredundant genes that have matches in the protein database. Comparing these 1052 genes with 13,303 already-known ESTs of the human eye, 729 (69.3%) genes were commonly expressed between the human and octopus eyes. By contrast, when we compared octopus eye ESTs with human connective tissue ESTs, the expression similarity was quite low. To trace the evolutionary changes that are potentially responsible for camera eye formation, we also compared octopus-eye ESTs with the completed genome sequences of other organisms. We found that 1019 out of the 1052 genes had already existed at the common ancestor of bilateria, and 875 genes were conserved between humans and octopuses. This suggests that a larger number of conserved genes and their similar gene expression may be responsible for the convergent evolution of the camera eye. PMID:15289475
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
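The two statistics used in this study are easy to reproduce. The sketch below implements Cohen's kappa (equivalent, for two raters, to the Siegel and Castellan formula cited) and a Pearson correlation over hypothetical trial-level scores; it is not the authors' scoring code.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over nominal categories."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                  # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical trial-level outcomes from the two scoring methods
auto_scores = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
web_scores  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(cohens_kappa(auto_scores, web_scores))                # ~0.58 here
print(np.corrcoef(auto_scores, web_scores)[0, 1])           # Pearson r
```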
Wave study of compound eyes for efficient infrared detection
NASA Astrophysics Data System (ADS)
Kilinc, Takiyettin Oytun; Hayran, Zeki; Kocer, Hasan; Kurt, Hamza
2017-08-01
Improving sensitivity in the infrared spectrum is a challenging task: detecting infrared light over a wide bandwidth and at low power consumption is very important. Novel solutions can be found by mimicking biological eyes, such as the compound eye with its many individual lenses. Nature provides many ingenious approaches to sensing and detecting the surrounding environment. Even though the compound eye consists of small optical units, it can detect wide-angle electromagnetic waves with high transmission and low reflection loss. Insect eyes are superior to human (single-aperture) eyes in terms of compactness, robustness, wider field of view, higher sensitivity to light intensity, and low cost; all these desirable properties are accompanied by an important drawback: lower spatial resolution. The first step in investigating the feasibility of bio-inspired optics in photodetectors is to model the interaction of light with the optical system that gathers and detects it. The most common method used to study natural vision systems is ray analysis, which does not take the wave characteristics of light into consideration: the amount of energy at the focal point or photoreceptor site, and the losses caused by reflection at interfaces and by absorption, cannot be investigated. In this study, we present a bio-inspired optical detection system investigated by wave analysis. We numerically model the system based on Maxwell's equations from the viewpoint of efficient light detection, revealing the light propagation from the first interface of the eye to the photoreceptor site.
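To illustrate what a wave analysis based on Maxwell's equations adds over ray tracing, here is a minimal one-dimensional FDTD sketch: a pulse hits a dielectric interface and is partially reflected and partially transmitted, an effect a pure ray model cannot quantify. This toy model is illustrative only and is in no way the authors' full simulation.

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme, normalised units, Courant number 0.5):
# launch a Gaussian pulse toward a glass-like half-space (n = 1.5).
n_cells, n_steps = 400, 900
ez = np.zeros(n_cells)
hy = np.zeros(n_cells)
eps = np.ones(n_cells)
eps[200:] = 2.25                                    # relative permittivity, n = 1.5

for t in range(n_steps):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])             # update H from the curl of E
    ez[1:-1] += 0.5 / eps[1:-1] * (hy[1:-1] - hy[:-2])   # update E from the curl of H
    ez[50] += np.exp(-((t - 40) / 12.0) ** 2)       # soft Gaussian source

print("peak field past the interface:", np.abs(ez[200:]).max())
```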
Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter
2017-01-01
Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038
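For context, focus stacking of the kind this camera performs internally can be emulated in a few lines: per pixel, keep the slice with the strongest local Laplacian (sharpness) response. A naive OpenCV sketch, assuming pre-aligned slices and hypothetical file names:

```python
import cv2
import numpy as np

def focus_stack(images):
    """Naive focus stack: at each pixel keep the slice with the strongest
    local Laplacian response (i.e. the sharpest focus). Assumes the slices
    are already aligned."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    sharp = [np.abs(cv2.Laplacian(g.astype(np.float64), cv2.CV_64F, ksize=5))
             for g in grays]
    best = np.argmax(np.stack(sharp), axis=0)          # index of sharpest slice
    stack = np.stack(images)
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

# Hypothetical file names for a stack of eight focus slices
slices = [cv2.imread(f"slice_{i}.jpg") for i in range(8)]
cv2.imwrite("stacked.jpg", focus_stack(slices))
```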
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows its movements to be automated and programmed. An IR eye-tracking system has been integrated with this control interface to implement an intelligent, autonomous, eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved, based on data from the eye-tracking interface, to keep the user's gaze point region at the center of a video feedback monitor. This setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
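The control idea described above, recentering the camera whenever the gaze point drifts from the middle of the monitor, can be sketched as a proportional controller with a dead zone. All names, gains and thresholds below are hypothetical; the integration with Aesop and the IR tracker is not shown.

```python
# Illustrative gaze-centering controller (robot/tracker APIs are hypothetical).

DEAD_ZONE = 0.15   # normalised half-width of the central "do nothing" region
GAIN = 0.5         # proportional gain, camera command per unit normalised error

def camera_command(gaze_x, gaze_y, width, height):
    """gaze_* in pixels; returns (pan, tilt) commands, zero inside the dead zone."""
    ex = (gaze_x - width / 2) / (width / 2)     # normalised error, range -1..1
    ey = (gaze_y - height / 2) / (height / 2)
    pan = GAIN * ex if abs(ex) > DEAD_ZONE else 0.0
    tilt = GAIN * ey if abs(ey) > DEAD_ZONE else 0.0
    return pan, tilt

print(camera_command(820, 310, 1280, 720))      # pans right, no tilt
```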
NASA Astrophysics Data System (ADS)
Meng, Qinggang; Lee, M. H.
2007-03-01
Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare this with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise are investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool-use, and infants during early cognitive development.
[Virtual reality in ophthalmological education].
Wagner, C; Schill, M; Hennen, M; Männer, R; Jendritza, B; Knorz, M C; Bender, H J
2001-04-01
We present a computer-based medical training workstation for the simulation of intraocular eye surgery. The surgeon manipulates two original instruments inside a mechanical model of the eye. The instrument positions are tracked by CCD cameras and monitored by a PC which renders the scenery using a computer-graphic model of the eye and the instruments. The simulator incorporates a model of the operation table, a mechanical eye, three CCD cameras for the position tracking, the stereo display, and a computer. The three cameras are mounted under the operation table from where they can observe the interior of the mechanical eye. Using small markers the cameras recognize the instruments and the eye. Their position and orientation in space is determined by stereoscopic back projection. The simulation runs with more than 20 frames per second and provides a realistic impression of the surgery. It includes the cold light source which can be moved inside the eye and the shadow of the instruments on the retina which is important for navigational purposes.
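The stereoscopic back-projection step, recovering a marker's 3-D position from its pixel coordinates in two (or three) calibrated cameras, is typically a linear (DLT) triangulation. A minimal two-camera sketch with synthetic projection matrices; this is a standard textbook method, not the simulator's actual code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its pixel
    coordinates x1, x2 in two cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)        # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenise

# Hypothetical camera pair: identity camera and one translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, np.array([0.2, 0.1]), np.array([0.0, 0.1])))  # [1, 0.5, 5]
```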
Performance benefits and limitations of a camera network
NASA Astrophysics Data System (ADS)
Carr, Peter; Thomas, Paul J.; Hornsey, Richard
2005-06-01
Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes, where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
Evaluation of Eye Metrics as a Detector of Fatigue
2010-03-01
...eyeglass frames. The cameras are angled upward toward the eyes and extract real-time pupil diameter, eye-lid movement, and eye-ball movement. ... Because the cameras were mounted on eyeglass-like frames, the system was able to continuously monitor the eye throughout all sessions. Overall, the ... of "fitness for duty" testing and "real-time monitoring" of operator performance has been slow (Institute of Medicine, 2004). Oculometric-based ...
Elemental mapping and microimaging by x-ray capillary optics.
Hampai, D; Dabagov, S B; Cappuccio, G; Longoni, A; Frizzi, T; Cibin, G; Guglielmotti, V; Sala, M
2008-12-01
Recently, many experiments have highlighted the advantage of using polycapillary optics for x-ray fluorescence studies. We have developed a special confocal scheme for micro x-ray fluorescence measurements that enables us to obtain not only elemental mapping of the sample but also, simultaneously, its x-ray imaging. We have designed the prototype of a compact x-ray spectrometer characterized by a spatial resolution of less than 100 μm for fluorescence and less than 10 μm for imaging. A pair of polycapillary lenses in a confocal configuration, together with a silicon drift detector, allows elemental studies of extended samples (approximately 3 mm) to be performed, while a CCD camera makes it possible to record an image of the same samples with 6 μm spatial resolution, which is limited only by the pixel size of the camera. By inserting a compound refractive lens between the sample and the CCD camera, we hope to develop an x-ray microscope for more enlarged images of the samples under test.
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
Marcus, Inna; Tung, Irene T; Dosunmu, Eniolami O; Thiamthat, Warakorn; Freedman, Sharon F
2013-12-01
To compare anterior segment findings identified in young children using digital photographic images from the Lytro light field camera to those observed clinically. This was a prospective study of children <9 years of age with an anterior segment abnormality. Clinically observed anterior segment examination findings for each child were recorded and several digital images of the anterior segment of each eye captured with the Lytro camera. The images were later reviewed by a masked examiner. Sensitivity of abnormal examination findings on Lytro imaging was calculated and compared to the clinical examination as the gold standard. A total of 157 eyes of 80 children (mean age, 4.4 years; range, 0.1-8.9) were included. Clinical examination revealed 206 anterior segment abnormalities altogether: lids/lashes (n = 21 eyes), conjunctiva/sclera (n = 28 eyes), cornea (n = 71 eyes), anterior chamber (n = 14 eyes), iris (n = 43 eyes), and lens (n = 29 eyes). Review of Lytro photographs of eyes with clinically diagnosed anterior segment abnormality correctly identified 133 of 206 (65%) of all abnormalities. Additionally, 185 abnormalities in 50 children were documented at examination under anesthesia. The Lytro camera was able to document most abnormal anterior segment findings in un-sedated young children. Its unique ability to allow focus change after image capture is a significant improvement on prior technology.
Optimizing Optics For Remotely Controlled Underwater Vehicles
NASA Astrophysics Data System (ADS)
Billet, A. B.
1984-09-01
The past decade has shown a dramatic increase in the use of unmanned tethered vehicles in worldwide marine fields. These vehicles are used for inspection, debris removal and object retrieval. With advanced robotic technology, remotely operated vehicles (ROVs) are now able to perform a variety of jobs previously accomplished only by divers. The ROVs can be used at greater depths and for riskier jobs, and safety to the diver is increased, freeing him for safer, more cost-effective tasks requiring human capabilities. Second, ROV operation becomes more cost-effective as work depth increases. At 1000 feet, a diver's 10 minutes of work can cost over $100,000 including support personnel, while the ROV operational cost might be 1/20 of the diver cost per day, given that the cost of ROV operation does not change with depth, as it does for divers. In ROV operation, the television lens must be as good as the human eye, with better light-gathering capability than the human eye. The RCV-150 system is an example of these advanced-technology vehicles. With the requirements of maneuverability and unusual inspection, a responsive, high-performance, compact vehicle was developed. The RCV-150 viewing subsystem consists of a television camera, lights, and topside monitors. The vehicle uses a low-light-level Newvicon television camera. The camera is equipped with a power-down iris that closes for burn protection when the power is off. The camera can pan ±50 degrees and tilt ±85 degrees on command from the surface. Four independently controlled 250 watt quartz halogen flood lamps illuminate the viewing area as required; in addition, two 250 watt spotlights are fitted. A nine-inch CRT monitor at the control console provides real-time camera pictures for the operator. The RCV-150 vehicle component system consists of the vehicle structure, the vehicle electronics, and the hydraulic system which powers the thruster assemblies and the manipulator. For this vehicle, a lightweight, high-response hydraulic system was developed in a very small package.
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.
Nazir, Sajid; Newey, Scott; Irvine, R Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; Wal, René van der
2017-01-01
The widespread availability of relatively cheap, reliable and easy-to-use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the passive infrared triggering is confirmed through other modalities (e.g. radar, pixel change) to reduce the occurrence of false-positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false-positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
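The confirmatory-sensing concept can be sketched simply: accept a PIR trigger only when a second modality agrees. Below, the confirming modality is the fraction of changed pixels between consecutive frames; the thresholds and frame data are hypothetical, and this is not the WiseEye source code.

```python
import numpy as np

PIXEL_CHANGE_THRESHOLD = 0.02   # fraction of pixels that must change to confirm

def confirmed_trigger(pir_fired, prev_frame, frame, diff_level=25):
    """Accept a PIR trigger only if enough pixels changed between frames."""
    if not pir_fired:
        return False
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_level
    return changed.mean() > PIXEL_CHANGE_THRESHOLD

# Hypothetical 8-bit grayscale frames: an animal-sized blob appears
prev = np.zeros((480, 640), dtype=np.uint8)
cur = prev.copy()
cur[200:280, 300:420] = 180
print(confirmed_trigger(True, prev, cur))       # True: ~3% of pixels changed
```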
Hand-eye calibration using a target registration error model.
Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M
2017-10-01
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
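The point-line formulation described above can be sketched as a nonlinear least-squares problem: find the rigid transform that moves each tracked stylus-tip point onto its homologous camera ray. The sketch below uses synthetic, noise-free data and omits the TRE-model guidance; it illustrates the registration core, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_ray_residuals(params, pts_tracker, rays_cam):
    """Residuals: perpendicular offset of each transformed stylus tip
    (tracker frame) from its homologous camera ray through the origin."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    p_cam = pts_tracker @ R.T + t                       # points in camera frame
    d = rays_cam / np.linalg.norm(rays_cam, axis=1, keepdims=True)
    along = np.sum(p_cam * d, axis=1, keepdims=True)    # projection onto each ray
    return (p_cam - along * d).ravel()

# Synthetic measurements: stylus tips in tracker space, consistent view rays
pts = np.random.rand(20, 3)
true_R = Rotation.from_rotvec([0.1, -0.2, 0.05])
true_t = np.array([0.02, -0.01, 0.3])
rays = pts @ true_R.as_matrix().T + true_t              # rays hit the points exactly
sol = least_squares(point_to_ray_residuals, np.zeros(6), args=(pts, rays))
print(sol.x)    # ~ [0.1, -0.2, 0.05, 0.02, -0.01, 0.3]
```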
The seam visual tracking method for large structures
NASA Astrophysics Data System (ADS)
Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong
2017-10-01
In this paper, a compact and flexible weld visual tracking method is proposed. First, because a fixed tracking height can cause interference between the vision device and the work-piece to be welded, a weld vision system with a compact structure and an adjustable tracking height is developed. Second, by analyzing the relative spatial pose between the camera, the laser and the work-piece to be welded, and applying the theory of relative geometric imaging, a mathematical model between the image feature parameters and the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, the imaging suffers interference because the line-structured light scatters in bright areas of the metal and because surface scratches appear bright; these disturbances seriously affect computational efficiency. An algorithm based on the human-eye visual attention mechanism is therefore used to extract the weld characteristics efficiently and stably. Finally, experiments verify that this compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm when tracking large structural parts, giving it wide prospects for industrial application.
Assembly of the cnidarian camera-type eye from vertebrate-like components.
Kozmik, Zbynek; Ruzickova, Jana; Jonasova, Kristyna; Matsumoto, Yoshifumi; Vopalensky, Pavel; Kozmikova, Iryna; Strnad, Hynek; Kawamura, Shoji; Piatigorsky, Joram; Paces, Vaclav; Vlcek, Cestmir
2008-07-01
Animal eyes are morphologically diverse. Their assembly, however, always relies on the same basic principle, i.e., photoreceptors located in the vicinity of dark shielding pigment. Cnidaria as the likely sister group to the Bilateria are the earliest branching phylum with a well developed visual system. Here, we show that camera-type eyes of the cubozoan jellyfish, Tripedalia cystophora, use genetic building blocks typical of vertebrate eyes, namely, a ciliary phototransduction cascade and melanogenic pathway. Our findings indicative of parallelism provide an insight into eye evolution. Combined, the available data favor the possibility that vertebrate and cubozoan eyes arose by independent recruitment of orthologous genes during evolution.
Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin
2015-10-20
Stable information from the sky light polarization pattern can be used for navigation, with advantages such as better anti-interference performance and no cumulative error effect. But existing methods of sky light polarization measurement either have poor real-time performance or require a complex system. Inspired by the navigational capability that a Cataglyphis ant achieves with its compound eyes, we introduce a new approach to acquire the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Both the real-time detection and the simple, low-cost architecture demonstrate the superiority of the approach proposed in this paper.
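With a triplet linear polarizer, each sky point is measured behind three polarizer orientations, from which the linear Stokes parameters follow in closed form via Malus's law, I(θ) = (I + Q cos 2θ + U sin 2θ)/2. A sketch assuming ideal polarizers at 0°, 60° and 120° (the actual orientations used in the paper may differ):

```python
import numpy as np

def stokes_from_three(i0, i60, i120):
    """Linear Stokes parameters from intensities behind ideal linear
    polarizers at 0, 60 and 120 degrees."""
    I = 2.0 / 3.0 * (i0 + i60 + i120)
    Q = 2.0 / 3.0 * (2 * i0 - i60 - i120)
    U = 2.0 / np.sqrt(3.0) * (i60 - i120)
    dolp = np.hypot(Q, U) / I          # degree of linear polarization
    aop = 0.5 * np.arctan2(U, Q)       # angle of polarization, radians
    return I, Q, U, dolp, aop

# Hypothetical per-pixel readings from the three sub-aperture images
print(stokes_from_three(i0=0.9, i60=0.35, i120=0.55))
```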
Eye Disease in Patients with Diabetes Screened with Telemedicine.
Park, Dong-Wouk; Mansberger, Steven L
2017-02-01
Telemedicine with nonmydriatic cameras can detect not only diabetic retinopathy but also other eye disease. To determine the prevalence of eye diseases detected by telemedicine in a population with a high prevalence of minority and American Indian/Alaskan Native (AI/AN) ethnicities. We recruited diabetic patients 18 years and older and used telemedicine with nonmydriatic cameras to detect eye disease. Two trained readers graded the images for diabetic retinopathy, age-related macular degeneration (ARMD), glaucomatous features, macular edema, and other eye disease using a standard protocol. We included both eyes for analysis and excluded images that were too poor to grade. We included 820 eyes from 424 patients with 72.3% nonwhite ethnicity and 50.3% AI/AN heritage. While 283/424 (66.7%) patients had normal eye images, 120/424 (28.3%) had one disease identified; 15/424 (3.5%) had two diseases; and 6/424 (1.4%) had three diseases in one or both eyes. After diabetic retinopathy (104/424, 24.5%), the most common eye diseases were glaucomatous features (44/424, 10.4%) and dry ARMD (24/424, 5.7%). Seventeen percent (72/424, 17.0%) showed eye disease other than diabetic retinopathy. Telemedicine with nonmydriatic cameras detected diabetic retinopathy, as well as other visually significant eye disease. This suggests that a diabetic retinopathy screening program needs to detect and report other eye disease, including glaucoma and macular disease.
[Studies of vision by Leonardo da Vinci].
Berggren, L
2001-01-01
Leonardo was an advocate of the intromission theory of vision. Light rays from the object to the eye caused visual perceptions, which were transported to the brain ventricles via a hollow optic nerve. Leonardo introduced wax injections to explore the ventricular system. Perceptions were assumed to go to the "senso comune" in the middle (3rd) ventricle, also the seat of the soul. The processing station "imprensiva" in the anterior lateral horns, together with memory ("memoria") in the posterior (4th) ventricle, integrated the visual perceptions into visual experience. Leonardo's sketches with circular lenses in the center of the eye reveal that his dependence on medieval optics prevailed over anatomical observation. Drawings of the anatomy of the sectioned eye are missing, although Leonardo had invented a new embedding technique: in order to dissect the eye without spilling its contents, the eye was first boiled in egg white and then cut. Repeating the procedure showed that the ovoid lens had become spherical after boiling. Leonardo described how light rays were refracted and reflected in the eye, but his imperfect anatomy prevented a development of physiological optics. He was, however, the first to compare the eye with a pin-hole camera (camera obscura). Leonardo's drawings of the inverted pictures on the back wall of a camera obscura inspired its use as an instrument for artistic practice. The camera obscura was for centuries a model for explaining human vision.
Static omnidirectional stereoscopic display system
NASA Astrophysics Data System (ADS)
Barton, George G.; Feldman, Sidney; Beckstead, Jeffrey A.
1999-11-01
A unique three-camera stereoscopic omnidirectional viewing system based on the periscopic panoramic camera described in the 11/98 SPIE proceedings (AM13) is presented. The three panoramic cameras are combined equilaterally so that each leg of the triangle approximates the human inter-ocular spacing, allowing each panoramic camera to view 240 degrees of the panoramic scene, the most counter-clockwise 120 degrees being the left-eye field and the other 120-degree segment being the right-eye field. Field definition may be by green/red filtration or by time discrimination of the video signal: in the first instance a two-color spectacle is used for viewing the display, and in the second, LCD goggles differentiate the right/left fields. Radially scanned vidicons or re-mapped CCDs may be used. The display consists of three vertically stacked 120-degree segments of the panoramic field of view with two fields per frame, Field A being the left-eye display and Field B the right-eye display.
Mapping and correcting the influence of gaze position on pupil size measurements
Petrov, Alexander A.
2015-01-01
Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cognitive literature, researchers often underestimate the methodological challenges associated with controlling for confounds that can result in misinterpretation of their data. One serious confound that is often not properly controlled is pupil foreshortening error (PFE)—the foreshortening of the pupil image as the eye rotates away from the camera. Here we systematically map PFE using an artificial eye model and then apply a geometric model correction. Three artificial eyes with different fixed pupil sizes were used to systematically measure changes in pupil size as a function of gaze position with a desktop EyeLink 1000 tracker. A grid-based map of pupil measurements was recorded with each artificial eye across three experimental layouts of the eye-tracking camera and display. Large, systematic deviations in pupil size were observed across all nine maps. The measured PFE was corrected by a geometric model that expressed the foreshortening of the pupil area as a function of the cosine of the angle between the eye-to-camera axis and the eye-to-stimulus axis. The model reduced the root mean squared error of pupil measurements by 82.5 % when the model parameters were pre-set to the physical layout dimensions, and by 97.5 % when they were optimized to fit the empirical error surface. PMID:25953668
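A minimal sketch of the cosine correction described above; the published model additionally fits free parameters to the empirical error surface, which is omitted here:

```python
import numpy as np

def correct_pfe(measured_area, eye_pos, camera_pos, stimulus_pos):
    """Divide the measured pupil area by the cosine of the angle between
    the eye-to-camera axis and the eye-to-stimulus (gaze) axis."""
    v_cam = np.asarray(camera_pos, float) - np.asarray(eye_pos, float)
    v_stim = np.asarray(stimulus_pos, float) - np.asarray(eye_pos, float)
    cos_theta = v_cam @ v_stim / (np.linalg.norm(v_cam) * np.linalg.norm(v_stim))
    return measured_area / cos_theta
```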
ERIC Educational Resources Information Center
Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.
2017-01-01
Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…
A new mapping function in table-mounted eye tracker
NASA Astrophysics Data System (ADS)
Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng
2018-01-01
Eye trackers are a new apparatus for human-computer interaction and have attracted much attention in recent years. Eye tracking technology obtains the subject's current direction of visual attention (gaze) by mechanical, electronic, optical, image processing and other means of detection. The mapping function is one of the key technologies in the image processing stage, and it determines the accuracy of the whole eye tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera and the screen at which the eye gazes. First, according to the geometrical relationship among the eyes, the camera and the screen, the framework of the mapping function between the pupil center and the screen coordinates is constructed. Second, in order to simplify the vector inversion of the mapping function, the coordinates of the eyes, the camera and the screen are modeled by coaxial model systems. Corresponding experiments were implemented to verify the mapping function, and it was compared with the traditional quadratic polynomial function. The results show that our approach can improve the accuracy of gaze-point determination. Compared with other methods, this mapping function is simple and valid.
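For context, the traditional quadratic polynomial mapping that the paper compares against can be fit by least squares from calibration samples; this sketch is generic and is not the authors' coaxial model:

```python
import numpy as np

def fit_quadratic_mapping(pupil_xy, screen_xy):
    """Fit the traditional quadratic polynomial mapping from pupil-center
    coordinates to screen coordinates by least squares.
    pupil_xy, screen_xy: (N, 2) arrays of calibration samples."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)  # shape (6, 2)
    return coeffs

def apply_mapping(coeffs, pupil_xy):
    """Map pupil-center coordinates to screen coordinates."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeffs
```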
Assembly of the cnidarian camera-type eye from vertebrate-like components
Kozmik, Zbynek; Ruzickova, Jana; Jonasova, Kristyna; Matsumoto, Yoshifumi; Vopalensky, Pavel; Kozmikova, Iryna; Strnad, Hynek; Kawamura, Shoji; Piatigorsky, Joram; Paces, Vaclav; Vlcek, Cestmir
2008-01-01
Animal eyes are morphologically diverse. Their assembly, however, always relies on the same basic principle, i.e., photoreceptors located in the vicinity of dark shielding pigment. Cnidaria as the likely sister group to the Bilateria are the earliest branching phylum with a well developed visual system. Here, we show that camera-type eyes of the cubozoan jellyfish, Tripedalia cystophora, use genetic building blocks typical of vertebrate eyes, namely, a ciliary phototransduction cascade and melanogenic pathway. Our findings indicative of parallelism provide an insight into eye evolution. Combined, the available data favor the possibility that vertebrate and cubozoan eyes arose by independent recruitment of orthologous genes during evolution. PMID:18577593
A compact neutron scatter camera for field deployment
Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.
2016-08-23
Here, we describe a very compact (0.9 m high, 0.4 m diameter, 40 kg), battery-operable neutron scatter camera designed for field deployment. Unlike most other systems, the sixteen liquid-scintillator detection cells are arranged to provide omnidirectional (4π) imaging with sensitivity comparable to a conventional two-plane system. Although designed primarily to operate as a neutron scatter camera for localizing energetic neutron sources, it also functions as a Compton camera for localizing gamma sources. In addition to describing the radionuclide source localization capabilities of this system, we demonstrate how it provides neutron spectra that can distinguish plutonium metal from plutonium oxide sources, in addition to the easier task of distinguishing AmBe from fission sources.
Wide-angle camera with multichannel architecture using microlenses on a curved surface.
Liang, Wei-Lun; Shen, Hui-Kai; Su, Guo-Dung J
2014-06-10
We propose a multichannel imaging system that combines the principles of an insect's compound eye and the human eye. The optical system enables a reduction in track length of the imaging device to achieve miniaturization. The multichannel structure is achieved by a curved microlens array, and a Hypergon lens is used as the main lens to simulate the human eye, achieving large field of view (FOV). With this architecture, each microlens of the array transmits a segment of the overall FOV. The partial images are recorded in separate channels and stitched together to form the final image of the whole FOV by image processing. The design is 2.7 mm thick, with 59 channels; the 100°×80° full FOV is optimized using ZEMAX ray-tracing software on an image plane. The image plane size is 4.53 mm×3.29 mm. Given the recent progress in the fabrication of microlenses, this image system has the potential to be commercialized in the near future.
Cat-eye effect reflected beam profiles of an optical system with sensor array.
Gong, Mali; He, Sifeng; Guo, Rui; Wang, Wei
2016-06-01
In this paper, we propose an applicable propagation model for Gaussian beams passing through any cat-eye target, instead of the traditional simplification of a mirror placed at the focal plane of a lens. According to the model, the cat-eye effect of CCD cameras affected by defocus is numerically simulated. Excellent agreement between experimental results and theoretical analysis is obtained. It is found that the reflectivity distribution at the focal plane of the cat-eye optical lens has a great influence on the results, while the cat-eye effect reflected beam profiles of CCD cameras show obvious periodicity.
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng
2016-10-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. A good assistant should therefore cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.
NASA Astrophysics Data System (ADS)
Santamaría, Beatriz; Laguna, María. Fe; López-Romero, David; López-Hernandez, A.; Sanza, F. J.; Lavín, A.; Casquel, R.; Maigler, M.; Holgado, M.
2018-02-01
A novel compact optical biochip based on a thin-layer-sensing BICELL surface of nitrocellulose is used for in-situ label-free detection of dry eye disease (DED). In this work, the development of a compact biosensor that provides quantitative diagnosis from a limited sample volume is reported. The designed sensors can be analyzed with an optical integrated point-of-care read-out system based on the "Increase Relative Optical Power" principle, which enhances the performance and the limit of detection. Several proteins involved in dry eye dysfunction have been validated as biomarkers. The presented biochip analyzes three of those biomarkers: MMP9, S100A6 and CST4. BICELLs based on nitrocellulose permit the immobilization of antibodies for the recognition of each biomarker. The optical response obtained from the biosensor through the read-out platform can specifically recognize the desired proteins in the concentration ranges for control eyes (CE) and dry eye syndrome (DES). These preliminary results will allow the development of a dry eye detection device useful in ophthalmology and applicable to other diseases related to eye dysfunction.
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated on three different bridges by comparison with laser scanning data. The surveying process is delicate and must balance working distance and imaging angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. The results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (for 95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Rotatable prism for pan and tilt
NASA Technical Reports Server (NTRS)
Ball, W. B.
1980-01-01
Compact, inexpensive, motor-driven prisms change field of view of TV camera. Camera and prism rotate about lens axis to produce pan effect. Rotating prism around axis parallel to lens produces tilt. Size of drive unit and required clearance are little more than size of camera.
21 CFR 886.1120 - Ophthalmic camera.
Code of Federal Regulations, 2010 CFR
2010-04-01
... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Ophthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...
Miniature Wide-Angle Lens for Small-Pixel Electronic Camera
NASA Technical Reports Server (NTRS)
Mouroulis, Pantazis; Blazejewski, Edward
2009-01-01
A proposed wide-angle lens would be especially well suited for an electronic camera in which the focal plane is occupied by an image sensor with small pixels. The design is intended to satisfy requirements for compactness, high image quality, and reasonably low cost, while addressing issues peculiar to the operation of small-pixel image sensors. This design is therefore expected to enable the development of a new generation of compact, high-performance electronic cameras. The example lens shown has a 60-degree field of view and a relative aperture (f-number) of 3.2. The main issues affecting the design are also discussed.
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
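An illustrative reading of that figure of merit, assuming each parameter's gap is expressed in orders of magnitude and the weakest (largest-gap) parameter dominates; the numbers below are placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical parameter table: (human value, camera value, larger_is_better).
params = {
    "dynamic_range":    (1e8, 1e4, True),
    "dark_limit_cd_m2": (1e-6, 1e-2, False),   # lower detectable luminance is better
}

# Gap of each parameter in orders of magnitude, camera benchmarked against the eye.
gaps = {name: np.log10(h / c if better else c / h)
        for name, (h, c, better) in params.items()}
weakest = max(gaps, key=gaps.get)              # figure of merit: worst parameter
print(gaps, weakest)
```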
Intermediate view synthesis for eye-gazing
NASA Astrophysics Data System (ADS)
Baek, Eu-Ttuem; Ho, Yo-Sung
2015-01-01
Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. Eye contact is one of the most important forms of nonverbal communication an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes and the camera gets in the way. The lack of eye gaze can give an unapproachable and unpleasant feeling. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, then synthesize the face and the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research
Nazir, Sajid; Newey, Scott; Irvine, R. Justin; Verdicchio, Fabio; Davidson, Paul; Fairhurst, Gorry; van der Wal, René
2017-01-01
The widespread availability of relatively cheap, reliable and easy to use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open source camera trap platform named ‘WiseEye’, designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management. PMID:28076444
STS-34 Pilot Michael J. McCulley uses ARRIFLEX camera equipment
1989-04-13
STS-34 Atlantis, Orbiter Vehicle (OV) 104, Pilot Michael J. McCulley squints while looking through the ARRIFLEX camera eyepiece during a camera briefing at JSC. McCulley rests part of the camera on his shoulder as he operates it.
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
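The disparity-to-distance step can be illustrated with the generic triangulation relation used in stereo imaging; this is a pinhole-model simplification, not the COMPU-EYE estimation algorithm itself:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Distance of an object seen by two overlapping receptive fields
    separated by baseline_m, given the disparity in pixels between them
    (pinhole-model approximation)."""
    return baseline_m * focal_px / disparity_px

# e.g., 1 cm baseline, 500 px focal length, 2.5 px disparity -> 2 m
print(depth_from_disparity(0.01, 500, 2.5))
```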
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han
2016-01-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. A good assistant should therefore cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as “ipsilateral, high, single-hand, sideways”, which largely improves the comfort and fluency of surgery. PMID:27867573
Zhang, Shuqing; Zhou, Luyang; Xue, Changxi; Wang, Lei
2017-09-10
Compound eyes offer a promising route to miniaturized imaging systems. Superposition compound eye systems form a composite image by superposing the images produced by different channels. The geometric configuration of superposition compound eye systems is achieved by three microlens arrays with different pitches and focal lengths. High resolution is indispensable for the practicability of superposition compound eye systems. In this paper, hybrid diffractive-refractive lenses are introduced into the design of a compound eye system for this purpose. With the help of ZEMAX, two superposition compound eye systems, with and without hybrid diffractive-refractive lenses, were designed separately. We then demonstrate the effectiveness of using a hybrid diffractive-refractive lens to improve the image quality.
NASA Technical Reports Server (NTRS)
1993-01-01
The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resort to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach that achieves eye contact through arbitrary-view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method that uses a single camera to save computational cost, in which the single real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images achieve eye contact favorably.
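A minimal sketch of the homography step: one captured frame is re-projected toward the virtual viewpoint at the display center. The matrix H below is a hypothetical placeholder; in the described system it would be derived from calibration and depth, not hard-coded:

```python
import cv2
import numpy as np

# Hypothetical homography H mapping one real camera's view to the virtual
# viewpoint at the display center.
H = np.array([[1.0, 0.02, -15.0],
              [0.0, 1.05, -40.0],
              [0.0, 1e-4,   1.0]])

frame = np.full((480, 640, 3), 128, np.uint8)        # stand-in for a captured frame
virtual = cv2.warpPerspective(frame, H, (640, 480))  # re-project toward the display center
```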
Extracting information of fixational eye movements through pupil tracking
NASA Astrophysics Data System (ADS)
Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng
2018-01-01
Human eyes are never completely static, even when fixating a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been demonstrated experimentally in recent years. However, the characteristics of fixational eye movements and their roles in the visual process have not been explained clearly, because until now these signals could hardly be extracted completely. In this paper, we developed a new eye movement detection device with a high-speed camera. The device includes a beam-splitter mirror, an infrared light source and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, pupil tracking experiments were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are shown clearly. The experimental results show that the device is feasible and effective, so it can be applied in further characteristic analysis.
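As a sketch of the spectrum-analysis step, one can take the amplitude spectrum of the tracked pupil-center trace sampled at the camera's 200 Hz frame rate; the paper's exact envelope processing is not detailed here, so this is a generic per-axis version:

```python
import numpy as np

def fixational_spectrum(pupil_x, fs=200.0):
    """Amplitude spectrum of one axis of the pupil-center trace sampled at
    fs Hz, for inspecting drift, tremor and micro-saccade frequency bands."""
    x = np.asarray(pupil_x, float)
    x = x - x.mean()                              # remove the mean gaze position
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, spectrum
```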
NASA Technical Reports Server (NTRS)
1987-01-01
Used to detect eye problems in children through analysis of retinal reflexes, the system incorporates image processing techniques. VISISCREEN's photorefractor is basically a 35 millimeter camera with a telephoto lens and an electronic flash. By making a color photograph, the system can test the human eye for refractive error and obstruction in the cornea or lens. Ocular alignment problems are detected by imaging both eyes simultaneously. The electronic flash sends light into the eyes, and the light is reflected from the retina back to the camera lens. The photorefractor analyzes the retinal reflexes generated by the subject's response to the flash and produces an image of the subject's eyes in which the pupils are variously colored. The nature of a defect, where one exists, is identifiable by a trained observer's visual examination.
Photorefractor ocular screening system
NASA Technical Reports Server (NTRS)
Richardson, John R. (Inventor); Kerr, Joseph H. (Inventor)
1987-01-01
A method and apparatus for detecting human eye defects, particularly refractive error, is presented. Eye reflexes are recorded on color film when the eyes are exposed to a flash of light. The photographs are compared with predetermined standards to detect eye defects. The base structure of the ocular screening system is a folding interconnect structure comprising hinged sections. Attached to one end of the structure is a head positioning station, which comprises a vertical support, a head positioning bracket having one end attached to the top of the support, and two head positioning lamps to verify precise head positioning. At the opposite end of the interconnect structure is a camera station with a camera, an electronic flash unit, and a blinking fixation lamp for photographing the eyes of persons being evaluated.
Questions Students Ask: The Red-Eye Effect.
ERIC Educational Resources Information Center
Physics Teacher, 1985
1985-01-01
Addresses the question of why a dog's eyes appear red and glow when a flash photograph is taken. Conditions for the red-eye effect, light paths involved, structure of the eye, and typical cameras and lenses are discussed. Also notes differences between the eyes of nocturnal animals and humans. (JN)
Galactic Dust Bunnies Found to Contain Carbon After All
NASA Technical Reports Server (NTRS)
2009-01-01
The 'Cat's Eye' nebula, or NGC 6543, is a well-studied example of a 'planetary nebula.' Such objects are the glowing remnants of dust and gas expelled from moderate-sized stars during their last stages of life. Our own sun will generate such a nebula in about five billion years. NASA's Spitzer Space Telescope has studied many such planetary nebulae in infrared light, including a variety of more distant ones, which have helped scientists identify a population of carbon-bearing stars near our galaxy's center. The infrared emission from the Cat's Eye is generated by a variety of elements and molecules. The bright inner region of this nebula shows a complex structure reminiscent of a feline eye. Outside this compact region lies a series of other structures representing material that was ejected slightly earlier in the central star's life, when it was a giant star. The image is a composite of data from Spitzer's infrared array camera. Light with a wavelength of 3.6 microns is rendered as blue, 5.8 microns is displayed as green and 8.0 microns is represented in red. The brightness of the central area has been greatly reduced to make it possible to maintain its visibility while enhancing the brightness of the much fainter outer features. Overall colors have been enhanced to better show slight variations in hue.
Design and build a compact Raman sensor for identification of chemical composition
NASA Astrophysics Data System (ADS)
Garcia, Christopher S.; Abedin, M. Nurul; Ismail, Syed; Sharma, Shiv K.; Misra, Anupam K.; Sandford, Stephen P.; Elsayed-Ali, Hani
2008-04-01
A compact remote Raman sensor system was developed at NASA Langley Research Center. This sensor is an improvement over the previously reported system, which consisted of a 532 nm pulsed laser, a 4-inch telescope, a spectrograph, and an intensified CCD camera. One of the attractive features of the previous system was its portability, thereby making it suitable for applications such as planetary surface explorations, homeland security and defense applications where a compact portable instrument is important. The new system was made more compact by replacing bulky components with smaller and lighter components. The new compact system uses a smaller spectrograph measuring 9 x 4 x 4 in. and a smaller intensified CCD camera measuring 5 in. long and 2 in. in diameter. The previous system was used to obtain the Raman spectra of several materials that are important to defense and security applications. Furthermore, the new compact Raman sensor system is used to obtain the Raman spectra of a diverse set of materials to demonstrate the sensor system's potential use in the identification of unknown materials.
Design and Build a Compact Raman Sensor for Identification of Chemical Composition
NASA Technical Reports Server (NTRS)
Garcia, Christopher S.; Abedin, M. Nurul; Ismail, Syed; Sharma, Shiv K.; Misra, Anupam K.; Sandford, Stephen P.; Elsayed-Ali, Hani
2008-01-01
A compact remote Raman sensor system was developed at NASA Langley Research Center. This sensor is an improvement over the previously reported system, which consisted of a 532 nm pulsed laser, a 4-inch telescope, a spectrograph, and an intensified charge-coupled devices (CCD) camera. One of the attractive features of the previous system was its portability, thereby making it suitable for applications such as planetary surface explorations, homeland security and defense applications where a compact portable instrument is important. The new system was made more compact by replacing bulky components with smaller and lighter components. The new compact system uses a smaller spectrograph measuring 9 x 4 x 4 in. and a smaller intensified CCD camera measuring 5 in. long and 2 in. in diameter. The previous system was used to obtain the Raman spectra of several materials that are important to defense and security applications. Furthermore, the new compact Raman sensor system is used to obtain the Raman spectra of a diverse set of materials to demonstrate the sensor system's potential use in the identification of unknown materials.
A novel smartphone ophthalmic imaging adapter: User feasibility studies in Hyderabad, India
Ludwig, Cassie A; Murthy, Somasheila I; Pappuru, Rajeev R; Jais, Alexandre; Myung, David J; Chang, Robert T
2016-01-01
Aim of Study: To evaluate the ability of ancillary health staff to use a novel smartphone imaging adapter system (EyeGo, now known as Paxos Scope) to capture images of sufficient quality to exclude emergent eye findings. Secondary aims were to assess user and patient experiences during image acquisition, interuser reproducibility, and subjective image quality. Materials and Methods: The system captures images using a macro lens and an indirect ophthalmoscopy lens coupled with an iPhone 5S. We conducted a prospective cohort study of 229 consecutive patients presenting to L. V. Prasad Eye Institute, Hyderabad, India. Primary outcome measure was mean photographic quality (FOTO-ED study 1–5 scale, 5 best). 210 patients and eight users completed surveys assessing comfort and ease of use. For 46 patients, two users imaged the same patient's eyes sequentially. For 182 patients, photos taken with the EyeGo system were compared to images taken by existing clinic cameras: a BX 900 slit-lamp with a Canon EOS 40D Digital Camera and an FF 450 plus Fundus Camera with VISUPAC™ Digital Imaging System. Images were graded post hoc by a reviewer blinded to diagnosis. Results: Nine users acquired 719 useable images and 253 videos of 229 patients. Mean image quality was ≥ 4.0/5.0 (able to exclude subtle findings) for all users. 8/8 users and 189/210 patients surveyed were comfortable with the EyeGo device on a 5-point Likert scale. For 21 patients imaged with the anterior adapter by two users, a weighted κ of 0.597 (95% confidence interval: 0.389–0.806) indicated moderate reproducibility. High level of agreement between EyeGo and existing clinic cameras (92.6% anterior, 84.4% posterior) was found. Conclusion: The novel, ophthalmic imaging system is easily learned by ancillary eye care providers, well tolerated by patients, and captures high-quality images of eye findings. PMID:27146928
NASA Astrophysics Data System (ADS)
McIntosh, Benjamin Patrick
Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and by only the ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
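A minimal sketch of the subtraction step on a stream of alternating ROI frames, assuming the LED is synchronized to even-numbered frames; the real ALCC would perform this at several hundred frames per second:

```python
import numpy as np

def cancel_ambient(frames):
    """frames: list of ROI frames alternating (LED on, LED off).
    Returns signal-only frames with the ambient background removed."""
    out = []
    for on, off in zip(frames[0::2], frames[1::2]):
        diff = on.astype(np.int32) - off.astype(np.int32)   # subtract background
        out.append(np.clip(diff, 0, None).astype(np.uint16))
    return out
```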
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On the one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting camera focus during image acquisition. An evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
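A hedged sketch of the geometry pipeline named in the abstract (normalized eight-point algorithm plus linear triangulation), written with OpenCV equivalents and assuming calibrated intrinsics K and matched, outlier-free point arrays:

```python
import cv2
import numpy as np

def reconstruct_3d(pts1, pts2, K):
    """pts1, pts2: matched vessel points, float arrays of shape (N, 2),
    after outlier removal; K: 3x3 intrinsics from camera calibration."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)  # eight-point algorithm
    E = K.T @ F @ K                                  # essential matrix
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # relative pose of the two views
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # linear triangulation, 4xN
    return (X[:3] / X[3]).T                            # Euclidean 3-D points
```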
Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA
NASA Astrophysics Data System (ADS)
Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki
2017-11-01
SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5-38 μm. MCS consists of two relay optical modules and four scientific optical modules: WFC (Wide Field Camera; 5'x 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000) and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most of the MCS optics adopt an off-axis reflective design to cover the wide wavelength range of 5-38 μm without chromatic aberration and to minimize problems due to changes in the shapes and refractive indices of materials from room temperature to cryogenic temperature. In order to achieve the demanding requirements of wide field of view, small F-number and large spectral resolving power in a compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance of diffraction-limited image resolution.
Etalon Array Reconstructive Spectrometry
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2017-01-01
Compact spectrometers are crucial in areas where size and weight may need to be minimized. These types of spectrometers often contain no moving parts, which makes for an instrument that can be highly durable. With the recent proliferation in low-cost and high-resolution cameras, camera-based spectrometry methods have the potential to make portable spectrometers small, ubiquitous, and cheap. Here, we demonstrate a novel method for compact spectrometry that uses an array of etalons to perform spectral encoding, and uses a reconstruction algorithm to recover the incident spectrum. This spectrometer has the unique capability for both high resolution and a large working bandwidth without sacrificing sensitivity, and we anticipate that its simplicity makes it an excellent candidate whenever a compact, robust, and flexible spectrometry solution is needed.
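One way such etalon-encoded measurements can be inverted, assuming a calibrated transmission matrix A; the authors' actual reconstruction algorithm is not specified here, so this uses a generic ridge-regularized least-squares sketch:

```python
import numpy as np

def reconstruct_spectrum(A, y, lam=1e-3):
    """A: calibrated transmission matrix (channels x wavelength bins),
    y: detector readings; returns a non-negative spectrum estimate via
    ridge-regularized least squares."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return np.clip(x, 0.0, None)   # physical spectra are non-negative
```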
A compact multichannel spectrometer for Thomson scattering
NASA Astrophysics Data System (ADS)
Schoenbeck, N. L.; Schlossberg, D. J.; Dowd, A. S.; Fonck, R. J.; Winz, G. R.
2012-10-01
The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras has motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of Te < 100 eV are achieved by a 2971 l/mm VPH grating and measurements of Te > 100 eV by a 2072 l/mm VPH grating. The spectrometer uses a fast-gated (˜2 ns) ICCD camera for detection. A Gen III image intensifier provides ˜45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.
A compact multichannel spectrometer for Thomson scattering.
Schoenbeck, N L; Schlossberg, D J; Dowd, A S; Fonck, R J; Winz, G R
2012-10-01
The availability of high-efficiency volume phase holographic (VPH) gratings and intensified CCD (ICCD) cameras has motivated a simplified, compact spectrometer for Thomson scattering detection. Measurements of Te < 100 eV are achieved by a 2971 l/mm VPH grating and measurements of Te > 100 eV by a 2072 l/mm VPH grating. The spectrometer uses a fast-gated (~2 ns) ICCD camera for detection. A Gen III image intensifier provides ~45% quantum efficiency in the visible region. The total read noise of the image is reduced by on-chip binning of the CCD to match the 8 spatial channels and the 10 spectral bins on the camera. Three spectrometers provide a minimum of 12 spatial channels and 12 channels for background subtraction.
Exide eyeing technology for high-powered battery
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1999-11-01
Exide Corp. said recently it may soon produce a graphite battery with more than three times the power of today's most advanced production batteries, but with half their weight, far smaller size, and only a third the cost. The Reading-based Exide, the world's largest maker of lead-acid batteries, said it has preliminarily agreed to pay $20 million for a controlling interest in Lion Compact Energy, a privately held company that is researching dual-graphite battery technology said to be cleaner, cheaper and more efficient. Exide hopes to turn the technology into products; it said initial applications include smaller battery-operated devices such as cell phones, cameras, laptop computers, power tools and certain military equipment. Larger devices would follow, and could include wheelchairs, motorcycles, replacements for lead-acid batteries in cars and trucks and, potentially, all-electric vehicles.
Generating Stereoscopic Television Images With One Camera
NASA Technical Reports Server (NTRS)
Coan, Paul P.
1996-01-01
Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
Photometry of compact galaxies.
NASA Technical Reports Server (NTRS)
Shen, B. S. P.; Usher, P. D.; Barrett, J. W.
1972-01-01
Photometric histories of the N galaxies 3C 390.3 and PKS 0521-36 are presented. Four other compact galaxies, Markarian 9, I Zw 92, 2 Zw 136, and III Zw 77, showed no evidence of variability. The photometric histories were obtained from an exhaustive study of those plates of the Harvard collection taken with large-aperture cameras. The images of all galaxies reported were indistinguishable from stars due to the camera f-ratios and the low surface brightness of the outlying nebulosities of the galaxies. Standard techniques for the study of variable stars are therefore applicable.
Enhanced Video-Oculography System
NASA Technical Reports Server (NTRS)
Moore, Steven T.; MacDougall, Hamish G.
2009-01-01
A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distressing feelings and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera were moved in linkage for focusing and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus; in this way, the illumination light can be projected on a certain retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the crucial spatial fidelity to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopter myopia was improved to 0.78, which is close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.
Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung
2017-06-30
The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
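A minimal sketch of a residual-CNN binary classifier of the kind described, using an off-the-shelf ResNet-18 backbone; the authors' exact architecture, input size and training protocol are assumptions here:

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep residual CNN repurposed for two classes: 0 = closed eye, 1 = open eye.
model = models.resnet18(weights=None)            # torchvision >= 0.13 API
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (B, 3, 224, 224) eye crops; labels: (B,) class indices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```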
Automated Analysis of a Nematode Population-based Chemosensory Preference Assay
Chai, Cynthia M.; Cronin, Christopher J.; Sternberg, Paul W.
2017-01-01
The nematode Caenorhabditis elegans' compact nervous system of only 302 neurons underlies a diverse repertoire of behaviors. To facilitate the dissection of the neural circuits underlying these behaviors, the development of robust and reproducible behavioral assays is necessary. Previous C. elegans behavioral studies have used variations of a "drop test", a "chemotaxis assay", and a "retention assay" to investigate the response of C. elegans to soluble compounds. The method described in this article seeks to combine the complementary strengths of the three aforementioned assays. Briefly, a small circle in the middle of each assay plate is divided into four quadrants with the control and experimental solutions alternately placed. After the addition of the worms, the assay plates are loaded into a behavior chamber where microscope cameras record the worms' encounters with the treated regions. Automated video analysis is then performed and a preference index (PI) value for each video is generated. The video acquisition and automated analysis features of this method minimize the experimenter's involvement and any associated errors. Furthermore, minute amounts of the experimental compound are used per assay, and the behavior chamber's multi-camera setup increases experimental throughput. This method is particularly useful for conducting behavioral screens of genetic mutants and novel chemical compounds. However, this method is not appropriate for studying stimulus gradient navigation due to the close proximity of the control and experimental solution regions. It should also not be used when only a small population of worms is available. While suitable for assaying responses only to soluble compounds in its current form, this method can be easily modified to accommodate multimodal sensory interaction and optogenetic studies. This method can also be adapted to assay the chemosensory responses of other nematode species. PMID:28745641
Development of compact Compton camera for 3D image reconstruction of radioactive contamination
NASA Astrophysics Data System (ADS)
Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.
2017-11-01
The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using multi-angle data measured with the Compton camera. We successfully reconstructed 3D radiation images of two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
Nakanishi, Nagayasu; Camara, Anthony C; Yuan, David C; Gold, David A; Jacobs, David K
2015-01-01
In Bilateria, Pax6, Six, Eya and Dach families of transcription factors underlie the development and evolution of morphologically and phyletically distinct eyes, including the compound eyes in Drosophila and the camera-type eyes in vertebrates, indicating that bilaterian eyes evolved under the strong influence of ancestral developmental gene regulation. However, the conservation in eye developmental genetics deeper in the Eumetazoa, and the origin of the conserved gene regulatory apparatus controlling eye development, remain unclear due to limited comparative developmental data from Cnidaria. Here we show in the eye-bearing scyphozoan cnidarian Aurelia that the ectodermal photosensory domain of the developing medusa sensory structure known as the rhopalium expresses sine oculis (so)/six1/2 and eyes absent/eya, but not optix/six3/6 or pax (A&B). In addition, the so and eya co-expression domain encompasses the region of active cell proliferation, neurogenesis, and mechanoreceptor development in rhopalia. Consistent with the role of so and eya in rhopalial development, developmental transcriptome data across Aurelia life cycle stages show upregulation of so and eya, but not optix or pax (A&B), during medusa formation. Moreover, pax6 and dach are absent in the Aurelia genome, and thus are not required for eye development in Aurelia. Our data are consistent with so and eya, but not optix, pax or dach, having conserved functions in sensory structure specification across Eumetazoa. The lability of developmental components including Pax genes relative to so-eya is consistent with a model of sense organ development and evolution that involved the lineage specific modification of a combinatorial code that specifies animal sense organs.
Remote gaze tracking system on a large display.
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-10-07
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
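The exact focus score used for the narrow view camera is not specified in the abstract; a widely used contrast measure for this kind of auto-focusing loop is the variance of the Laplacian, sketched here with OpenCV (the file name and threshold logic are hypothetical):

```python
import cv2

def focus_score(gray_image):
    """Variance-of-Laplacian sharpness measure: higher means better focused.
    A focus loop would step the lens to maximize this on the eye image."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print(focus_score(eye))
```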
CAMERA: An integrated strategy for compound spectra extraction and annotation of LC/MS data sets
Kuhl, Carsten; Tautenhahn, Ralf; Böttcher, Christoph; Larson, Tony R.; Neumann, Steffen
2013-01-01
Liquid chromatography coupled to mass spectrometry is routinely used for metabolomics experiments. In contrast to the fairly routine and automated data acquisition steps, subsequent compound annotation and identification require extensive manual analysis and thus form a major bottleneck in data interpretation. Here we present CAMERA, a Bioconductor package integrating algorithms to extract compound spectra, annotate isotope and adduct peaks, and propose the accurate compound mass even in highly complex data. To evaluate the algorithms, we compared the annotation of CAMERA against a manually defined annotation for a mixture of known compounds spiked into a complex matrix at different concentrations. CAMERA successfully extracted accurate masses for 89.7% and 90.3% of the annotatable compounds in positive and negative ion mode, respectively. Furthermore, we present a novel annotation approach that combines spectral information of data acquired in opposite ion modes to further improve the annotation rate. We demonstrate the utility of CAMERA in two different, easily adoptable plant metabolomics experiments, where the application of CAMERA drastically reduced the amount of manual analysis. PMID:22111785
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
ERIC Educational Resources Information Center
Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.
2015-01-01
Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…
System Construction of the Stilbene Compact Neutron Scatter Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsmith, John E. M.; Gerling, Mark D.; Brennan, James S.
This report documents the construction of a stilbene-crystal-based compact neutron scatter camera. This system is essentially identical to the MINER (Mobile Imager of Neutrons for Emergency Responders) system previously built and deployed under DNN R&D funding, but with the liquid scintillator in the detection cells replaced by stilbene crystals. The availability of these two systems for side-by-side performance comparisons will enable us to unambiguously identify the performance enhancements provided by the stilbene crystals, which have only recently become commercially available in the large size required (3" diameter, 3" deep).
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without disturbing the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible. Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that eye movement can predict human perceptual performance; that smooth pursuit and saccadic (quick, or ballistic) movement share some signal pathways; and that common factors can make both smooth pursuit and visual perception produce errors in motor responses.
DOT National Transportation Integrated Search
2004-10-01
The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...
Camera Perspective Bias in Videotaped Confessions: Evidence that Visual Attention Is a Mediator
ERIC Educational Resources Information Center
Ware, Lezlee J.; Lassiter, G. Daniel; Patterson, Stephen M.; Ransom, Michael R.
2008-01-01
Several experiments have demonstrated a "camera perspective bias" in evaluations of videotaped confessions: videotapes with the camera focused on the suspect lead to judgments of greater voluntariness than alternative presentation formats. The present research investigated potential mediators of this bias. Using eye tracking to measure visual…
NASA Technical Reports Server (NTRS)
Schade, David J.; Elson, Rebecca A. W.
1993-01-01
We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.
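For reference, the Richardson-Lucy iteration compared in this study has a compact form; below is a minimal 2-D sketch assuming a known, normalized PSF and no regularization (a generic textbook implementation, not the authors' code):

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, n_iter=30):
    """Richardson-Lucy deconvolution: multiplicative updates that keep the
    estimate non-negative and preserve flux for a normalized PSF."""
    est = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve2d(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # data / model
        est = est * convolve2d(ratio, psf_mirror, mode="same")
    return est
```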
Portable retinal imaging for eye disease screening using a consumer-grade digital camera
NASA Astrophysics Data System (ADS)
Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter
2012-03-01
The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50 mm focal-length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex, we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, low-power deep space positioning system (DPS) configured to determine a location of a spacecraft anywhere in the solar system, and provide state information relative to Earth, Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine a state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in a body of a telescope.
Looking into the Eye with a Smartphone
NASA Astrophysics Data System (ADS)
Colicchia, Giuseppe; Wiesner, Hartmut
2015-02-01
Thanks to their sensors and the large number of apps available, smartphones can serve as useful tools for carrying out new laboratory experiments in physics. Such devices, very popular among young people, may be a successful approach to improve students' interest in the subject, particularly in a medical context. In addition to their small camera, smartphones usually have an integrated LED light source that is in line with the visual axis of the camera sensor. Using a smartphone, it is hence possible to take photos or videos of the fundus (retina) inside the eye through the pupil. We will explain the optical principles underlying the methods for observing the fundus of the eye (ophthalmoscopy) and describe how students can perform "fundus" photography on eye models using a smartphone.
Periodicity analysis on cat-eye reflected beam profiles of optical detectors
NASA Astrophysics Data System (ADS)
Gong, Mali; He, Sifeng
2017-05-01
The cat-eye-effect reflected beam profiles of most optical detectors exhibit a characteristic periodicity, which is caused by the array arrangement of sensors at their optical focal planes. We find, and prove for the first time, that the reflected beam profile breaks into several periodic spots at a reflected propagation distance corresponding to half the imaging distance of a CCD camera. Furthermore, the spatial period of these spots is approximately constant, independent of the CCD camera's imaging distance, and is related only to the focal length and pixel size of the CCD sensor. Thus, the imaging distance and intrinsic parameters of the optical detector can be obtained by analyzing its cat-eye reflected beam profiles. This conclusion can be applied to non-cooperative cat-eye target recognition.
Fukuda, Shinichi; Beheregaray, Simone; Hoshi, Sujin; Yamanari, Masahiro; Lim, Yiheng; Hiraoka, Takahiro; Yasuno, Yoshiaki; Oshika, Tetsuro
2013-12-01
To evaluate the ability of parameters measured by three-dimensional (3D) corneal and anterior segment optical coherence tomography (CAS-OCT) and a rotating Scheimpflug camera combined with a Placido topography system (Scheimpflug camera with topography) to discriminate between normal eyes and forme fruste keratoconus. Forty-eight eyes of 48 patients with keratoconus, 25 eyes of 25 patients with forme fruste keratoconus and 128 eyes of 128 normal subjects were evaluated. Anterior and posterior keratometric parameters (steep K, flat K, average K), elevation, topographic parameters, regular and irregular astigmatism (spherical, asymmetry, regular and higher-order astigmatism) and five pachymetric parameters (minimum, minimum-median, inferior-superior, inferotemporal-superonasal, vertical thinnest location of the cornea) were measured using 3D CAS-OCT and a Scheimpflug camera with topography. The area under the receiver operating curve (AUROC) was calculated to assess the discrimination ability. Compatibility and repeatability of both devices were evaluated. Posterior surface elevation showed higher AUROC values in discrimination analysis of forme fruste keratoconus using both devices. Both instruments showed significant linear correlations (p<0.05, Pearson's correlation coefficient) and good repeatability (ICCs: 0.885-0.999) for normal and forme fruste keratoconus. Posterior elevation was the best discrimination parameter for forme fruste keratoconus. Both instruments presented good correlation and repeatability for this condition.
The photoluminescence of a fluorescent lamp: didactic experiments on the exponential decay
NASA Astrophysics Data System (ADS)
Onorato, Pasquale; Gratton, Luigi; Malgieri, Massimiliano; Oss, Stefano
2017-01-01
The lifetimes of the photoluminescent compounds contained in the coating of compact fluorescent lamps are usually measured using specialised instruments, including pulsed lasers and/or spectrofluorometers. Here we discuss how some low-cost apparatuses, based on the use of either sensors for the educational lab or commercial digital photo cameras, can be employed for the same purpose. The experiments do not require that luminescent phosphors are hazardously extracted from the compact fluorescent lamp, which also contains mercury. We obtain lifetime measurements for specific fluorescent elements of the bulb coating, in good agreement with the known values. We also address the physical mechanisms on which fluorescent lamps are based in a simplified way, suitable for undergraduate students; and we discuss in detail the physics of the lamp switch-off by analysing the time-dependent spectrum, measured through a commercial fiber-optic spectrometer. Since the experiment is not hazardous in any way, requires a simple setup with instruments which are commonly found in educational labs, and focuses on the typical features of the exponential decay, it is suitable for being performed in the undergraduate laboratory.
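A lifetime measurement of this kind reduces to fitting an exponential to frame-by-frame brightness after switch-off; a minimal sketch follows, with synthetic data standing in for camera or sensor readings (the 8 ms lifetime and noise level are illustrative, not the paper's values):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, tau, bg):
    """Luminance model after switch-off: I(t) = I0 * exp(-t / tau) + background."""
    return i0 * np.exp(-t / tau) + bg

t = np.linspace(0, 0.05, 200)                # 50 ms of frames, in seconds
data = decay(t, 1.0, 8e-3, 0.02)             # synthetic phosphor, tau = 8 ms
data += np.random.normal(0, 0.01, t.size)    # sensor noise
(i0, tau, bg), _ = curve_fit(decay, t, data, p0=(1.0, 5e-3, 0.0))
print(f"fitted lifetime: {tau * 1e3:.2f} ms")
```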
Compact Video Microscope Imaging System Implemented in Colloid Studies
NASA Technical Reports Server (NTRS)
McDowell, Mark
2002-01-01
Photographs show the fiber-optic light source; the microscope and charge-coupled device (CCD) camera head connected to the camera body; the CCD camera body feeding data to an image acquisition board in a PC; and the Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, has been adapted for colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
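The registration and glare-elimination steps are not detailed in the abstract; one plausible minimal sketch, under our assumptions of translation-only registration via phase correlation and per-pixel minimum fusion to suppress a reflection present in only one exposure, is:

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def fuse_fundus_pair(img_a, img_b):
    """Register img_b to img_a (translation only), then take the per-pixel
    minimum so a bright reflection artifact seen in one frame is removed."""
    offset, _, _ = phase_cross_correlation(img_a, img_b)
    aligned_b = shift(img_b, offset)
    return np.minimum(img_a, aligned_b)
```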
CMOS Camera Array With Onboard Memory
NASA Technical Reports Server (NTRS)
Gat, Nahum
2009-01-01
A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 megapixels), a USB (universal serial bus) 2.0 interface, and onboard memory. Exposure times and other operating parameters are sent from a control PC via the USB port. Data from the camera can be received via the USB port, and the interface allows for simple control and data capture through a laptop computer.
Predicting Sets and Lists: Theory and Practice
2015-01-01
[Indexing snippet; only fragments of the thesis survive extraction: acknowledgments of co-authors ("Contextual Optimization of Lists": Tommy Liu...) and hardware notes for the experimental platform: a Microstrain 3DM-GX3-25 IMU added alongside the Ardupilot unit to aid real-time pose estimation, PlayStation Eye cameras (640x480 @ 30 Hz), an onboard ARM-based Linux computer, and a Bumblebee...]
ProxiScan™: A Novel Camera for Imaging Prostate Cancer
Ralph James
2017-12-09
ProxiScan is a compact gamma camera suited for high-resolution imaging of prostate cancer. Developed by Brookhaven National Laboratory and Hybridyne Imaging Technologies, Inc., ProxiScan won a 2009 R&D 100 Award, sponsored by R&D Magazine to recognize the 100 most technologically significant products introduced during the past year.
Realization of the ergonomics design and automatic control of the fundus cameras
NASA Astrophysics Data System (ADS)
Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye
2012-12-01
The principles of ergonomic design in fundus cameras call for extending user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular head with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects fundus images automatically whether or not the patient's eyes are ametropic. Finally, a moving visual target is developed for expanding the field of the fundus images.
Growth of the eye lens: II. Allometric studies.
Augusteyn, Robert C
2014-01-01
The purpose of this study was to examine the ontogeny and phylogeny of lens growth in a variety of species using allometry. Data on the accumulation of wet and/or dry lens weight as a function of bodyweight were obtained for 40 species and subjected to allometric analysis to examine ontogenic growth and compaction. Allometric analysis was also used to compare the maximum adult lens weights for 147 species with the maximum adult bodyweight and to compare lens volumes calculated from wet and dry weights with eye volumes calculated from axial length. Linear allometric relationships were obtained for the comparison of ontogenic lens and bodyweight accumulation. The body mass exponent (BME) decreased with increasing animal size from around 1.0 in small rodents to 0.4 in large ungulates for both wet and dry weights. Compaction constants for the ontogenic growth ranged from 1.00 in birds and reptiles up to 1.30 in mammals. Allometric comparison of maximum lens wet and dry weights with maximum bodyweights also yielded linear plots with a BME of 0.504 for all warm blooded species except primates which had a BME of 0.25. When lens volumes were compared with eye volumes, all species yielded a scaling constant of 0.75 but the proportionality constants for primates and birds were lower. Ontogenic lens growth is fastest, relative to body growth, in small animals and slowest in large animals. Fiber cell compaction takes place throughout life in most species, but not in birds and reptiles. Maximum adult lens size scales with eye size with the same exponent in all species, but birds and primates have smaller lenses relative to eye size than other species. Optical properties of the lens are generated through the combination of variations in the rate of growth, rate of compaction, shape and size.
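Allometric analysis of this kind amounts to a straight-line fit in log-log space, W_lens = a * W_body^BME, where the slope is the body mass exponent; a minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

body = np.array([20, 250, 3000, 70000, 500000], dtype=float)  # bodyweight, g (illustrative)
lens = np.array([18, 60, 210, 900, 2600], dtype=float)        # lens weight, mg (illustrative)

# log(W_lens) = log(a) + BME * log(W_body): the slope is the body mass exponent.
bme, log_a = np.polyfit(np.log(body), np.log(lens), 1)
print(f"body mass exponent (BME) = {bme:.2f}, a = {np.exp(log_a):.2f}")
```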
Cluster of Martian Mesas on Lower Mount Sharp, Sols 1438 and 1439
2016-10-03
The mesa in the center of this scene from the "Murray Buttes" area on Mars' lower Mount Sharp is longer than a football field. It extends more than 361 feet (110 meters) from the left-most outcrop low on the slope to the right side where rock debris is behind a light-toned, dust-covered dune. The panorama combines sets of images taken by the left-eye camera of the Mast Camera (Mastcam) on NASA's Curiosity Mars rover, for the left half of the scene, and by Mastcam's right-eye camera for the right half of the scene. The component images from the left-eye camera were taken on Aug. 22, 2016, during the 1,438th Martian day, or sol, of the rover's work on Mars. The ones from the right-eye camera, which has a telephoto lens, were taken the following day, on Sol 1439. From the rover's position when the component images were taken, the top of the central mesa is about 310 feet (about 95 meters) away and about 52 feet (about 16 meters) above the rover. The relatively flat foreground is part of a geological layer called the Murray formation, which includes lakebed mud deposits. The buttes and mesas rising above this surface are eroded remnants of ancient sandstone that originated when winds deposited sand after lower Mount Sharp had formed. They are capped by material that is relatively resistant to erosion, just as is the case with many similarly shaped buttes and mesas on Earth. The scene is presented with a color adjustment that approximates white balancing, to resemble how the rocks and sand would appear under daytime lighting conditions on Earth. http://photojournal.jpl.nasa.gov/catalog/PIA20842
Phiri, R; Keeffe, J E; Harper, C A; Taylor, H R
2006-08-01
To show that the non-mydriatic retinal camera (NMRC) using Polaroid film is as effective as the NMRC using digital imaging in detecting referrable retinopathy. A series of patients with diabetes attending the eye out-patients department at the Royal Victorian Eye and Ear Hospital had single-field non-mydriatic fundus photographs taken using first a digital and then a Polaroid camera. Dilated 30 degrees seven-field stereo fundus photographs were then taken of each eye as the gold standard. The photographs were graded in a masked fashion. Retinopathy levels were defined using the simplified Wisconsin Grading system. We used the kappa statistic for inter-reader and intra-reader agreement and the generalized linear model to derive the odds ratio. There were 196 participants giving 325 undilated retinal photographs. Of these participants 111 (57%) were males. The mean age of the patients was 68.8 years. There were 298 eyes with all three sets of photographs from 154 patients. The digital NMRC had a sensitivity of 86.2%[95% confidence interval (CI) 65.8, 95.3], whilst the Polaroid NMRC had a sensitivity of 84.1% (95% CI 65.5, 93.7). The specificities of the two cameras were identical at 71.2% (95% CI 58.8, 81.1). There was no difference in the ability of the Polaroid and digital camera to detect referrable retinopathy (odds ratio 1.06, 95% CI 0.80, 1.40, P = 0.68). This study suggests that non-mydriatic retinal photography using Polaroid film is as effective as digital imaging in the detection of referrable retinopathy in countries such as the USA and Australia or others that use the same criterion for referral.
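The screening statistics reported above follow directly from the 2x2 comparison against the dilated seven-field gold standard; a minimal sketch of the basic computation (the counts are chosen to reproduce the reported rates and are not taken from the paper):

```python
def screening_stats(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table against the gold standard."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = screening_stats(tp=25, fn=4, tn=47, fp=19)  # illustrative counts
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # ~86.2%, ~71.2%
```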
Compact 3D Camera for Shake-the-Box Particle Tracking
NASA Astrophysics Data System (ADS)
Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan
2017-11-01
Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.
Three-dimensional and multienergy gamma-ray simultaneous imaging by using a Si/CdTe Compton camera.
Suzuki, Yoshiyuki; Yamaguchi, Mitsutaka; Odaka, Hirokazu; Shimada, Hirofumi; Yoshida, Yukari; Torikai, Kota; Satoh, Takahiro; Arakawa, Kazuo; Kawachi, Naoki; Watanabe, Shigeki; Takeda, Shin'ichiro; Ishikawa, Shin-nosuke; Aono, Hiroyuki; Watanabe, Shin; Takahashi, Tadayuki; Nakano, Takashi
2013-06-01
To develop a silicon (Si) and cadmium telluride (CdTe) imaging Compton camera for biomedical application on the basis of technologies used for astrophysical observation and to test its capacity to perform three-dimensional (3D) imaging. All animal experiments were performed in accordance with the Animal Care and Experimentation Committee guidelines (Gunma University, Maebashi, Japan). Fluorine 18 fluorodeoxyglucose (FDG), iodine 131 ((131)I) methylnorcholestenol, and gallium 67 ((67)Ga) citrate, separately compacted into microtubes, were inserted subcutaneously into a Wistar rat, and the distribution of the radioisotope compounds was determined with 3D imaging by using the Compton camera after the rat was sacrificed (ex vivo model). In a separate experiment, indium 111((111)In) chloride and (131)I-methylnorcholestenol were injected into a rat intravenously, and copper 64 ((64)Cu) chloride was administered into the stomach orally just before imaging. The isotope distributions were determined with 3D imaging after sacrifice by means of the list-mode expectation-maximization maximum-likelihood method. The Si/CdTe Compton camera demonstrated its 3D multinuclear imaging capability by separating out the distributions of FDG, (131)I-methylnorcholestenol, and (67)Ga-citrate clearly in a test-tube-implanted ex vivo model. In the more physiologic model with tail vein injection prior to sacrifice, the distributions of (131)I-methylnorcholestenol and (64)Cu-chloride were demonstrated with 3D imaging, and the difference in distribution of the two isotopes was successfully imaged although the accumulation on the image of (111)In-chloride was difficult to visualize because of blurring at the low-energy region. The Si/CdTe Compton camera clearly resolved the distribution of multiple isotopes in 3D imaging and simultaneously in the ex vivo model.
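The list-mode reconstruction named above builds on the classic MLEM update; a minimal matrix-form sketch is given below (a list-mode implementation replaces the explicit system matrix with per-event operations; this generic version is not the authors' code):

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=50):
    """Classic MLEM: x <- x * A^T(y / Ax) / A^T 1.
    system_matrix A: (n_detector_bins, n_voxels); counts y: (n_detector_bins,)."""
    a = np.asarray(system_matrix, dtype=float)
    x = np.ones(a.shape[1])                 # flat initial image
    sens = a.sum(axis=0)                    # A^T 1, per-voxel sensitivity
    for _ in range(n_iter):
        expected = a @ x                    # forward projection
        ratio = counts / np.maximum(expected, 1e-12)
        x *= (a.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```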
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Registration of an on-axis see-through head-mounted display and camera system
NASA Astrophysics Data System (ADS)
Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli
2005-02-01
An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera's alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depths. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of the prismatic effect of the display lens on registration is also discussed.
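The reported distance behavior is what a simple parallax model predicts. Under our assumption of a small residual eye-camera offset d, a real object at distance z, and exact calibration at the virtual image distance z_v, the small-angle registration error is approximately

\[ \varepsilon(z) \approx d\left(\frac{1}{z} - \frac{1}{z_v}\right), \]

so as a function of z the error curve has the same shape for any virtual image distance, offset by the constant d/z_v, consistent with the behavior described above.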
Structure and function of a compound eye, more than half a billion years old.
Schoenemann, Brigitte; Pärnaste, Helje; Clarkson, Euan N K
2017-12-19
Until now, the fossil record has not been capable of revealing any details of the mechanisms of complex vision at the beginning of metazoan evolution. Here, we describe functional units, at a cellular level, of a compound eye from the base of the Cambrian, more than half a billion years old. Remains of early Cambrian arthropods showed the external lattices of enormous compound eyes, but not the internal structures or anything about how those compound eyes may have functioned. In a phosphatized trilobite eye from the lower Cambrian of the Baltic, we found lithified remnants of cellular systems, typical of a modern focal apposition eye, similar to those of a bee or dragonfly. This shows that sophisticated eyes already existed at the beginning of the fossil record of higher organisms, while the differences between the ancient system and the internal structures of a modern apposition compound eye open important insights into the evolution of vision.
NASA Astrophysics Data System (ADS)
Luquet, Ph.; Brouard, L.; Chinal, E.
2017-11-01
Astrium has developed a product line of compact and versatile instruments for HR and VHR missions in Earth Observation. These cameras consist of a Silicon Carbide Korsch-type telescope, a focal plane with one or several retina modules (each including a five-line CCD, optical filters and front-end electronics), and the instrument main electronics. Several versions have been developed, with telescope pupil diameters from 200 mm up to 650 mm, covering a large range of GSD (from 2.5 m down to sub-metric) and swath (from 10 km up to 30 km) and compatible with different types of platform. Nine cameras have already been manufactured for five different programs: ALSAT2 (Algeria), SSOT (Chile), SPOT6 & SPOT7 (France), KRS (Kazakhstan) and VNREDSat (Vietnam). Two of them have already been launched and are delivering high quality images.
Arnalich-Montiel, Francisco; Ortiz-Toquero, Sara; Auladell, Clara; Couceiro, Ana
2018-06-01
To assess intraobserver repeatability, intersession reproducibility, and agreement of swept-source Fourier-domain optical coherence tomography (SS-OCT) and the Scheimpflug camera in measuring corneal thickness in virgin and grafted eyes with Fuchs endothelial corneal dystrophy (FECD). Thirty-six control eyes, 35 FECD eyes, 30 FECD with corneal edema eyes, 25 Descemet stripping automated endothelial keratoplasty (DSAEK) eyes, and 29 Descemet membrane endothelial keratoplasty (DMEK) eyes were included. The apical center, pupillary center, and thinnest corneal thickness were determined in 3 consecutive images and repeated 2 weeks later. Repeatability and reproducibility coefficients, intraclass correlation coefficients, and 95% limits of agreement (LOA) between measurements were calculated. Agreement between devices was assessed using Bland-Altman analysis. Corneal thickness measurements were highly reproducible and repeatable with both systems. SS-OCT showed better repeatability in all corneal locations in the normal, FECD, FECD with edema, DSAEK, and DMEK groups (coefficient of variation ≤0.60%, ≤0.36%, ≤0.43%, ≤1.09%, and ≤0.48%, respectively) than the Scheimpflug (coefficient of variation ≤1.15%, ≤0.92%, ≤1.10%, ≤1.25%, and ≤1.14%, respectively). Between-session 95% LOA for SS-OCT was less than 3% for all groups except for the FECD with edema group, being almost double using the Scheimpflug camera. Differences between instruments were statistically significant in all groups and locations (P < 0.01) except in the DSAEK group (P ≤ 0.51); however, SS-OCT underestimated all measurements. SS-OCT provides better reproducible and repeatable measurements of corneal thickness than those obtained with the Scheimpflug camera in patients with FECD or an endothelial transplant. Variations between examinations higher than the 95% LOA observed in our study should raise awareness of changes in the endothelial function.
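The agreement figures cited in this and the following studies come from the standard Bland-Altman computation; a minimal sketch:

```python
import numpy as np

def bland_altman_loa(measure_a, measure_b):
    """95% limits of agreement between paired measurements from two devices:
    mean difference (bias) +/- 1.96 * SD of the differences."""
    diff = np.asarray(measure_a, dtype=float) - np.asarray(measure_b, dtype=float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias - 1.96 * sd, bias + 1.96 * sd
```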
Penna, Rachele R; de Sanctis, Ugo; Catalano, Martina; Brusasco, Luca; Grignolo, Federico M
2017-01-01
To compare the repeatability/reproducibility of measurement by high-resolution Placido disk-based topography with that of a high-resolution rotating Scheimpflug camera and assess the agreement between the two instruments in measuring corneal power in eyes with keratoconus and post-laser in situ keratomileusis (LASIK). One eye each of 36 keratoconic patients and 20 subjects who had undergone LASIK was included in this prospective observational study. Two independent examiners worked in a random order to take three measurements of each eye with both instruments. Four parameters were measured on the anterior cornea: steep keratometry (Ks), flat keratometry (Kf), mean keratometry (Km), and astigmatism (Ks-Kf). Intra-examiner repeatability and inter-examiner reproducibility were evaluated by calculating the within-subject standard deviation (Sw), the coefficient of repeatability (R), the coefficient of variation (CoV), and the intraclass correlation coefficient (ICC). Agreement between instruments was tested with the Bland-Altman method by calculating the 95% limits of agreement (95% LoA). In keratoconic eyes, the intra-examiner and inter-examiner ICC were >0.95. As compared with measurement by high-resolution Placido disk-based topography, the intra-examiner R of the high-resolution rotating Scheimpflug camera was lower for Kf (0.32 vs 0.88), Ks (0.61 vs 0.88), and Km (0.32 vs 0.84) but higher for Ks-Kf (0.70 vs 0.57). Inter-examiner R values were lower for all parameters measured using the high-resolution rotating Scheimpflug camera. The 95% LoA were -1.28 to +0.55 for Kf, -1.36 to +0.99 for Ks, -1.08 to +0.50 for Km, and -1.11 to +1.48 for Ks-Kf. In the post-LASIK eyes, the intra-examiner and inter-examiner ICC were >0.87 for all parameters. The intra-examiner and inter-examiner R were lower for all parameters measured using the high-resolution rotating Scheimpflug camera. The intra-examiner R was 0.17 vs 0.88 for Kf, 0.21 vs 0.88 for Ks, 0.17 vs 0.86 for Km, and 0.28 vs 0.33 for Ks-Kf. The inter-examiner R was 0.09 vs 0.64 for Kf, 0.15 vs 0.56 for Ks, 0.09 vs 0.59 for Km, and 0.18 vs 0.23 for Ks-Kf. The 95% LoA were -0.54 to +0.58 for Kf, -0.51 to +0.53 for Ks and Km, and -0.28 to +0.27 for Ks-Kf. As compared with Placido disk-based topography, the high-resolution rotating Scheimpflug camera provides more repeatable and reproducible measurements of Ks, Kf, and Km in keratoconic and post-LASIK eyes. Agreement between instruments is fair in keratoconus and very good in post-LASIK eyes.
Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration
USDA-ARS?s Scientific Manuscript database
Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
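The manuscript's parameterizations are not reproduced here; for orientation, below is a minimal linear-least-squares sketch of the AX = ZB problem (a Kronecker-product rotation step followed by a linear solve for the translations). It assumes at least three low-noise pose pairs with distinct rotations, and is a generic baseline, not the authors' method.

```python
import numpy as np

def nearest_rotation(m):
    """Project a (possibly scaled) 3x3 matrix onto SO(3)."""
    if np.linalg.det(m) < 0:
        m = -m
    u, _, vt = np.linalg.svd(m)
    return u @ vt

def solve_ax_zb(As, Bs):
    """Robot-world/hand-eye calibration AX = ZB from n pose pairs (4x4 each).
    Rotations satisfy vec(RA @ RX) = (I kron RA) vec(RX) and
    vec(RZ @ RB) = (RB^T kron I) vec(RZ) with column-major vec; the stacked
    homogeneous system is solved by SVD."""
    n = len(As)
    m = np.zeros((9 * n, 18))
    for i, (a, b) in enumerate(zip(As, Bs)):
        m[9*i:9*(i+1), :9] = np.kron(np.eye(3), a[:3, :3])
        m[9*i:9*(i+1), 9:] = -np.kron(b[:3, :3].T, np.eye(3))
    v = np.linalg.svd(m)[2][-1]              # null-space direction, up to scale
    if np.linalg.det(v[:9].reshape(3, 3, order="F")) < 0:
        v = -v                               # fix the overall sign
    rx = nearest_rotation(v[:9].reshape(3, 3, order="F"))
    rz = nearest_rotation(v[9:].reshape(3, 3, order="F"))
    # Translations: RA @ tX - tZ = RZ @ tB - tA for each pose pair.
    c = np.zeros((3 * n, 6))
    d = np.zeros(3 * n)
    for i, (a, b) in enumerate(zip(As, Bs)):
        c[3*i:3*(i+1), :3] = a[:3, :3]
        c[3*i:3*(i+1), 3:] = -np.eye(3)
        d[3*i:3*(i+1)] = rz @ b[:3, 3] - a[:3, 3]
    t = np.linalg.lstsq(c, d, rcond=None)[0]
    return rx, t[:3], rz, t[3:]              # X = (rx, tX), Z = (rz, tZ)
```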
Miniature curved artificial compound eyes
Floreano, Dario; Pericet-Camara, Ramon; Viollet, Stéphane; Ruffier, Franck; Brückner, Andreas; Leitel, Robert; Buss, Wolfgang; Menouni, Mohsine; Expert, Fabien; Juston, Raphaël; Dobrzynski, Michal Karol; L’Eplattenier, Geraud; Recktenwald, Fabian; Mallot, Hanspeter A.; Franceschini, Nicolas
2013-01-01
In most animal species, vision is mediated by compound eyes, which offer lower resolution than vertebrate single-lens eyes, but significantly larger fields of view with negligible distortion and spherical aberration, as well as high temporal resolution in a tiny package. Compound eyes are ideally suited for fast panoramic motion perception. Engineering a miniature artificial compound eye is challenging because it requires accurate alignment of photoreceptive and optical components on a curved surface. Here, we describe a unique design method for biomimetic compound eyes featuring a panoramic, undistorted field of view in a very thin package. The design consists of three planar layers of separately produced arrays, namely, a microlens array, a neuromorphic photodetector array, and a flexible printed circuit board that are stacked, cut, and curved to produce a mechanically flexible imager. Following this method, we have prototyped and characterized an artificial compound eye bearing a hemispherical field of view with embedded and programmable low-power signal processing, high temporal resolution, and local adaptation to illumination. The prototyped artificial compound eye possesses several characteristics similar to the eye of the fruit fly Drosophila and other arthropod species. This design method opens up additional vistas for a broad range of applications in which wide field motion detection is at a premium, such as collision-free navigation of terrestrial and aerospace vehicles, and for the experimental testing of insect vision theories. PMID:23690574
An inexpensive compact automatic camera system for wildlife research
William R. Danielson; Richard M. DeGraaf; Todd K. Fuller
1996-01-01
This paper describes the design, conversion, and deployment of a reliable, compact, automatic multiple-exposure photographic system that was used to photograph nest predation events. This system may be the most versatile yet described in the literature because of its simplicity, portability, and dependability. The system was very reliable because it was designed around...
Compact instrument for fluorescence image-guided surgery
NASA Astrophysics Data System (ADS)
Wang, Xinghua; Bhaumik, Srabani; Li, Qing; Staudinger, V. Paul; Yazdanfar, Siavash
2010-03-01
Fluorescence image-guided surgery (FIGS) is an emerging technique in oncology, neurology, and cardiology. To adapt intraoperative imaging for various surgical applications, increasingly flexible and compact FIGS instruments are necessary. We present a compact, portable FIGS system and demonstrate its use in cardiovascular mapping in a preclinical model of myocardial ischemia. Our system uses fiber optic delivery of laser diode excitation, custom optics with high collection efficiency, and compact consumer-grade cameras as a low-cost and compact alternative to open surgical FIGS systems. Dramatic size and weight reduction increases flexibility and access, and allows for handheld use or unobtrusive positioning over the surgical field.
MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging
Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.
2009-01-01
Introduction: We describe a compact, portable dual-gamma camera system (named “MONICA” for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods: Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed “looking up” through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results: Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1-cm), 2.2-mm; sensitivity, 149 cps/MBq (5.5 cps/μCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r2 = 0.99 for 0–185 MBq (0–5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion: Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g. limited imaging space, portability, and, potentially, cost are important. PMID:20346864
Arthropod eyes: The early Cambrian fossil record and divergent evolution of visual systems.
Strausfeld, Nicholas J; Ma, Xiaoya; Edgecombe, Gregory D; Fortey, Richard A; Land, Michael F; Liu, Yu; Cong, Peiyun; Hou, Xianguang
2016-03-01
Four types of eyes serve the visual neuropils of extant arthropods: compound retinas composed of adjacent facets; a visual surface populated by spaced eyelets; a smooth transparent cuticle providing inwardly directed lens cylinders; and single-lens eyes. The first type is a characteristic of pancrustaceans, the eyes of which comprise lenses arranged as hexagonal or rectilinear arrays, each lens crowning 8-9 photoreceptor neurons. Except for Scutigeromorpha, the second type typifies Myriapoda whose relatively large eyelets surmount numerous photoreceptive rhabdoms stacked together as tiers. Scutigeromorph eyes are facetted, each lens crowning some dozen photoreceptor neurons of a modified apposition-type eye. Extant chelicerate eyes are single-lensed except in xiphosurans, whose lateral eyes comprise a cuticle with a smooth outer surface and an inner one providing regular arrays of lens cylinders. This account discusses whether these disparate eye types speak for or against divergence from one ancestral eye type. Previous considerations of eye evolution, focusing on the eyes of trilobites and on facet proliferation in xiphosurans and myriapods, have proposed that the mode of development of eyes in those taxa is distinct from that of pancrustaceans and is the plesiomorphic condition from which facetted eyes have evolved. But the recent discovery of enormous regularly facetted compound eyes belonging to early Cambrian radiodontans suggests that high-resolution facetted eyes with superior optics may be the ground pattern organization for arthropods, predating the evolution of arthrodization and jointed post-protocerebral appendages. Here we provide evidence that compound eye organization in stem-group euarthropods of the Cambrian can be understood in terms of eye morphologies diverging from this ancestral radiodontan-type ground pattern. We show that in certain Cambrian groups apposition eyes relate to fixed or mobile eyestalks, whereas other groups reveal concomitant evolution of sessile eyes equipped with optics typical of extant xiphosurans. Observations of fossil material, including that of trilobites and eurypterids, support the proposition that the ancestral compound eye was the apposition type. Cambrian arthropods include possible precursors of mandibulate eyes. The latter are the modified compound eyes, now sessile, and their underlying optic lobes exemplified by scutigeromorph chilopods, and the mobile stalked compound eyes and more elaborate optic lobes typifying Pancrustacea. Radical divergence from an ancestral apposition type is demonstrated by the evolution of chelicerate eyes, from doublet sessile-eyed stem-group taxa to special apposition eyes of xiphosurans, the compound eyes of eurypterids, and single-lens eyes of arachnids. Different eye types are discussed with respect to possible modes of life of the extinct species that possessed them, comparing these to extant counterparts and the types of visual centers the eyes might have served.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding are widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of obtaining the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there be no image point variation with temperature change in spite of employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with macro function, exploiting the advantage of a plastic lens whose outer flange can be given a mechanically functional shape. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. Therefore this camera module is manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and technical topics of the image-taking objectives described above.
Towards fish-eye camera based in-home activity assessment.
Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha
2008-01-01
Indoor localization, activity classification, and behavioral modeling are increasingly important for surveillance applications including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for monitoring people in indoor environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and identify future directions to solve these issues.
Orr, Tim R.; Hoblitt, Richard P.
2008-01-01
Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the sophisticated telemetry required by networked camera systems is not practical. This report describes the latest generation (as of 2008) of time-lapse camera system used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.
New Modular Camera No Ordinary Joe
NASA Technical Reports Server (NTRS)
2003-01-01
Although dubbed 'Little Joe' for its small-format characteristics, a new wavefront sensor camera has proved that it is far from coming up short when paired with high-speed, low-noise applications. SciMeasure Analytical Systems, Inc., a provider of cameras and imaging accessories for use in biomedical research and industrial inspection and quality control, is the eye behind Little Joe's shutter, manufacturing and selling the modular, multi-purpose camera worldwide to advance fields such as astronomy, neurobiology, and cardiology.
Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras
Wu, Dewen; Chen, Ruizhi; Chen, Liang
2017-01-01
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured by a smartphone camera pointing at a well-defined object. The smartphone camera mimics the process by which human eyes locate themselves relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution with 300 samples from 10 different people is 73.1 cm. PMID:29144420
Photon collider: a four-channel autoguider solution
NASA Astrophysics Data System (ADS)
Hygelund, John C.; Haynes, Rachel; Burleson, Ben; Fulton, Benjamin J.
2010-07-01
The "Photon Collider" uses a compact array of four off axis autoguider cameras positioned with independent filtering and focus. The photon collider is two way symmetric and robustly mounted with the off axis light crossing the science field which allows the compact single frame construction to have extremely small relative deflections between guide and science CCDs. The photon collider provides four independent guiding signals with a total of 15 square arc minutes of sky coverage. These signals allow for simultaneous altitude, azimuth, field rotation and focus guiding. Guide cameras read out without exposure overhead increasing the tracking cadence. The independent focus allows the photon collider to maintain in focus guide stars when the main science camera is taking defocused exposures as well as track for telescope focus changes. Independent filters allow auto guiding in the science camera wavelength bandpass. The four cameras are controlled with a custom web services interface from a single Linux based industrial PC, and the autoguider mechanism and telemetry is built around a uCLinux based Analog Devices BlackFin embedded microprocessor. Off axis light is corrected with a custom meniscus correcting lens. Guide CCDs are cooled with ethylene glycol with an advanced leak detection system. The photon collider was built for use on Las Cumbres Observatory's 2 meter Faulks telescopes and currently used to guide the alt-az mount.
Radiometric calibration of an ultra-compact microbolometer thermal imaging module
NASA Astrophysics Data System (ADS)
Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.
2017-05-01
As microbolometer focal-plane-array formats steadily decrease in size, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the camera decreases, the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, synchronizing multiple cameras is required. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize its frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera's control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopy, such as endoscopic surgical robotics or minimally invasive surgery.
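The control idea reads naturally as a proportional loop. A minimal sketch with a toy sensor model; all interface names and coefficients here are hypothetical, and the real control core runs in FPGA logic against hardware registers:

```python
# Minimal sketch of the supply-voltage synchronization loop described above.
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

class MockCamera:
    """Toy stand-in for one self-timed sensor: higher supply voltage -> faster
    clock -> shorter line period (the coefficient is invented for illustration)."""
    def __init__(self, period_us, voltage=1.8):
        self.line_period_us = period_us
        self.supply_voltage = voltage

    def read_line_period_us(self):
        return self.line_period_us

    def set_supply_voltage(self, v):
        v = clamp(v, 1.6, 2.1)                          # stay in a safe range
        self.line_period_us -= (v - self.supply_voltage) * 10.0
        self.supply_voltage = v

KP = 0.05   # proportional gain, volts per microsecond of period error

def sync_step(cam, target_period_us):
    """One control iteration: positive error means the sensor is too slow,
    so the supply voltage is raised to shorten the line period."""
    error = cam.read_line_period_us() - target_period_us
    cam.set_supply_voltage(cam.supply_voltage + KP * error)

# Master runs free; slaves track the master's measured line period.
master = MockCamera(32.0)
slaves = [MockCamera(31.2), MockCamera(33.1)]
for _ in range(50):
    for s in slaves:
        sync_step(s, master.read_line_period_us())
print([round(s.line_period_us, 3) for s in slaves])     # both converge on 32.0
```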
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lv, Yang; Wang, Ruixing; Ma, Haotong
Purpose: Measurement based on a Shack-Hartmann wave-front sensor (WFS), obtaining both high- and low-order wave-front aberrations simultaneously and accurately, has been applied to the detection of human eye aberrations in recent years. However, its application is limited by the small field of view (FOV): slight eye movement causes the optical beacon image to exceed the lenslet array, resulting in uncertain detection error. To overcome the difficulty of precise eye location, the capacity to detect eye wave-front aberrations accurately and simultaneously over a FOV much larger than that of a simple single-conjugate Hartmann WFS is demanded. Methods: A plenoptic camera's lenslet array subdivides the aperture light field in the spatial-frequency domain, capturing the 4-D light-field information. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the eye's aberrations. The corresponding theoretical model and simulation system are built up in this article to discuss wave-front measurement performance when utilizing a plenoptic camera as a wave-front sensor. Results: The simulation results indicate that the plenoptic wave-front method can obtain both high- and low-order eye wave-front aberrations with the same accuracy as a conventional system in single-visual-angle detection, and over a FOV much larger than that of a simple single-conjugate Hartmann system. Meanwhile, simulation results show that detection of eye wave-front aberrations at different visual angles can be achieved effectively and simultaneously by the plenoptic method, with both point and extended optical beacons from the eye. Conclusion: The plenoptic wave-front method is feasible for eye wave-front aberration detection. With a larger FOV, the method can effectively reduce the detection error caused by imprecise eye location and simplify the eye wave-front detection system compared with one based on a Shack-Hartmann WFS. A unique advantage of the plenoptic method lies in obtaining wave-fronts at different visual angles simultaneously, which provides an approach to building up a 3-D model of the eye's refractor tomographically. Funded by the Key Laboratory of High Power Laser and Physics, CAS; Research Project of National University of Defense Technology No. JC13-07-01; National Natural Science Foundation of China No. 61205144.
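At the processing level, both the Shack-Hartmann and plenoptic approaches reduce to estimating local slopes from spot displacements. A minimal centroiding sketch under that common model; this is not the authors' simulation code, and all names are illustrative:

```python
import numpy as np

def subaperture_slopes(frame, corners, win_px, ref=None):
    """Estimate local wave-front slopes from spot displacements: for each
    subimage, the centroid shift from the window center is proportional to
    the average wave-front slope over that subaperture."""
    sx, sy = [], []
    ys, xs = np.mgrid[0:win_px, 0:win_px]
    for (r, c) in corners:
        win = frame[r:r + win_px, c:c + win_px].astype(float)
        tot = win.sum() + 1e-12                    # guard against empty windows
        sy.append((ys * win).sum() / tot - (win_px - 1) / 2)
        sx.append((xs * win).sum() / tot - (win_px - 1) / 2)
    sx, sy = np.asarray(sx), np.asarray(sy)
    if ref is not None:                            # subtract reference centroids
        sx, sy = sx - ref[0], sy - ref[1]
    return sx, sy                                  # in pixels; scale by geometry

# Toy usage: a 2x2 grid of 8-pixel windows on a synthetic frame with four "spots".
frame = np.zeros((16, 16))
frame[3, 5] = frame[4, 12] = frame[11, 4] = frame[12, 13] = 1.0
corners = [(0, 0), (0, 8), (8, 0), (8, 8)]
print(subaperture_slopes(frame, corners, 8))
```

The measured slopes would then feed a modal (e.g., Zernike) or zonal least-squares reconstruction to recover the phase map.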
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high-resolution surveillance has a wide application range in the defense and security fields. Early systems used for this purpose were based on a parabolic mirror or fisheye lens, where distortion due to the nature of the optical elements cannot be avoided; moreover, in such systems the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system, where the omni-directional image quality has so far been limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high-resolution visible-spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high-resolution depth-map estimation, and high-dynamic-range imaging, which are beyond standard stitching and panorama generation methods.
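The quoted resolutions and frame rates imply the pixel throughput the FPGAs must sustain. A quick arithmetic check of the stated figures:

```python
# Consistency check on the quoted GigaEye-1 numbers (illustrative arithmetic only).
full_res = 17_700 * 4_650            # recording mode
print(full_res / 1e6)                # 82.305 -> matches the 82.3 MP figure
print(full_res * 9.5 / 1e9)          # ~0.78 Gpixel/s sustained at 9.5 fps

rt_res = 9_000 * 2_400               # real-time mode
print(rt_res / 1e6)                  # 21.6 -> matches the 21.6 MP figure
print(rt_res * 30 / 1e9)             # ~0.65 Gpixel/s at 30 fps
```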
Development of a Compact & Easy-to-Use 3-D Camera for High Speed Turbulent Flow Fields
2013-12-05
... resolved. Also, in the case of a single-camera system, the use of an aperture greatly reduces the amount of collected light. The combination of these ... a study on wall-bounded turbulence [Sheng_2006]. Nevertheless, these techniques are limited to small measurement volumes, while maintaining a high ... It has also been adapted to kHz rates using high-speed cameras for aeroacoustic studies (see Violato et al. [17, 18]). Tomo-PIV, however, has some ...
NASA Astrophysics Data System (ADS)
Sima, A. A.; Baeck, P.; Nuyts, D.; Delalieux, S.; Livens, S.; Blommaert, J.; Delauré, B.; Boonen, M.
2016-06-01
This paper gives an overview of the new COmpact hyperSpectral Imaging (COSI) system recently developed at the Flemish Institute for Technological Research (VITO, Belgium) and suitable for remotely piloted aircraft systems. A hyperspectral dataset captured from a multirotor platform over a strawberry field is presented and explored in order to assess spectral band co-registration quality. Thanks to the application of line-based interference filters deposited directly on the detector wafer, the COSI camera is compact and lightweight (total mass of 500 g), and captures 72 narrow (FWHM: 5 nm to 10 nm) bands in the spectral range of 600-900 nm. Covering the red-edge region (680 nm to 730 nm) allows plant chlorophyll content, biomass, and hydric-status indicators to be derived, making the camera suitable for agricultural purposes. In addition to the orthorectified hypercube, a digital terrain model can be derived, enabling various analyses requiring object height, e.g., plant height in vegetation growth monitoring. Geometric data quality assessment proves that the COSI camera and the dedicated data processing chain are capable of delivering very-high-resolution data (centimetre level) from which spectral information can be correctly derived. The obtained results are comparable to or better than results reported in similar studies for an alternative system based on the Fabry-Pérot interferometer.
Catadioptric planar compound eye with large field of view.
Deng, Huaxia; Gao, Xicheng; Ma, Mengchao; Li, Yunyang; Li, Hang; Zhang, Jin; Zhong, Xiang
2018-05-14
The planar compound eye has the advantages of a simple structure and no requirement for complex relay optical elements, but its field of view (FOV) is very difficult to expand. Overcoming the limitation on FOV, especially with simple structures, is a great challenge for the development of planar compound eyes. Different from existing designs that consider only refraction, this article proposes a catadioptric planar compound eye based on both reflection and refraction to expand the FOV. In the proposed design, incident light from a large angle is reflected into the lenslet array by two rotationally symmetric mirrors whose surface equations are optimized with mathematical and optical software. The FOV of the proposed catadioptric planar compound eye can theoretically reach 96.6°, which is much wider than the existing record of 70°. Moreover, theoretically, an imaging system with no distortion can be obtained in this design. Simulation results show a linearity of better than 99% for most incident angles. Verification experiments show that the FOV of the proposed device can reach 90.7°, while the FOV of the corresponding planar compound eye without mirrors is 41.6°. The proposed catadioptric planar compound eye has great potential in monitoring, detection, and virtual reality, since the FOV has been widened significantly.
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2011-01-01
A selection of hands-on experiments from different fields of physics, which happen too fast for the eye or ordinary video cameras to properly observe and analyse, is presented. The phenomena are recorded and analysed using modern high-speed cameras. Two types of cameras were used: the first were rather inexpensive consumer products such as Casio…
Nagai, Noriaki; Ito, Yoshimasa; Okamoto, Norio; Shimomura, Yoshikazu
2013-01-01
We investigated the protective effects of sericin on corneal damage due to benzalkonium chloride (BAC) used as a preservative in commercially available timolol maleate eye drops using rat debrided corneal epithelium and a human cornea epithelial cell line (HCE-T). Corneal wounds were monitored using a fundus camera TRC-50X equipped with a digital camera; eye drops were instilled into the rat eyes five times a day after corneal epithelial abrasion. The viability of HCE-T cells was calculated by TetraColor One; and Escherichia coli (ATCC 8739) were used to measure antimicrobial activity. The reducing effects on transcorneal penetration and intraocular pressure (IOP) of the eye drops were determined using rabbits. The corneal wound healing rate and rate constants (kH) as well as cell viability were higher following treatment with 0.005% BAC solution containing 0.1% sericin than in the case of treatment with BAC solution alone; the antimicrobial activity was approximately the same for BAC solutions with and without sericin. In addition, the kH for rat eyes instilled with commercially available timolol maleate eye drops containing 0.1% sericin was significantly higher than that of eyes instilled with timolol maleate eye drops without sericin, and the addition of sericin did not affect the corneal penetration or IOP reducing effect of commercially available timolol maleate eye drops. A preservative system comprising BAC and sericin may provide effective therapy for glaucoma patients requiring long-term anti-glaucoma agents.
Adaptive Optics for the Human Eye
NASA Astrophysics Data System (ADS)
Williams, D. R.
2000-05-01
Adaptive optics can extend not only the resolution of ground-based telescopes, but also that of the human eye. Both static and dynamic aberrations in the cornea and lens of the normal eye limit its optical quality. Though it is possible to correct defocus and astigmatism with spectacle lenses, higher-order aberrations remain. These aberrations blur vision and prevent us from seeing at the fundamental limits set by the retina and brain. They also limit the resolution of cameras used to image the living retina, cameras that are critical for the diagnosis and treatment of retinal disease. I will describe an adaptive optics system that measures the wave aberration of the eye in real time and compensates for it with a deformable mirror, endowing the human eye with unprecedented optical quality. This instrument provides fresh insight into the ultimate limits on human visual acuity, reveals for the first time images of the retinal cone mosaic responsible for color vision, and points the way to contact lenses and laser surgical methods that could enhance vision beyond what is possible today. Supported by the NSF Science and Technology Center for Adaptive Optics, the National Eye Institute, and Bausch and Lomb, Inc.
Dickstein-Fischer, Laurie; Fischer, Gregory S
2014-01-01
It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early-intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive, cartoon-like embodiment. The robot is affordable, durable, and portable, so that it can be used in various settings including schools, clinics, and the home, enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo-vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, in which the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.
NASA Astrophysics Data System (ADS)
Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.
2003-07-01
We present a miniaturized version of a fundus camera designed for use in screening for retinopathy of prematurity (ROP). In this and other applications, a small, lightweight digital camera system can be extremely useful. We present a small wide-angle digital camera system whose handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard boardcase. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project on screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability as well as digital imaging.
Amano, Shiro; Honda, Norihiko; Amano, Yuki; Yamagami, Satoru; Miyai, Takashi; Samejima, Tomokazu; Ogata, Miyuki; Miyata, Kazunori
2006-06-01
To compare central corneal thickness measurements and their reproducibility when taken by a rotating Scheimpflug camera, ultrasonic pachymetry, and scanning-slit corneal topography/pachymetry. Experimental study. Seventy-four eyes of 64 subjects without ocular abnormalities other than cataract. Corneal thickness measurements were compared among the 3 methods in 54 eyes of 54 subjects. Two sets of measurements were repeated by a single examiner for each pachymetry in another 10 eyes of 5 subjects, and the intraexaminer repeatability was assessed as the absolute difference of the first and second measurements. Two experienced examiners took one measurement for each pachymetry in another 10 eyes of 5 subjects, and the interexaminer reproducibility was assessed as the absolute difference of the 2 measurements of the first and second examiners. Central corneal thickness measurements by the 3 methods, absolute difference of the first and second measurements by a single examiner, absolute difference of the 2 measurements by 2 examiners, and relative amount of variation. The average measurements of central corneal thickness by a rotating Scheimpflug camera, scanning-slit topography, and ultrasonic pachymetry were 538 ± 31.3 μm, 541 ± 40.7 μm, and 545 ± 31.3 μm, respectively. There were no statistically significant differences in the measurement results among the 3 methods (P = 0.569, repeated-measures analysis of variance). There was a significant linear correlation between the rotating Scheimpflug camera and ultrasonic pachymetry (r = 0.908, P < 0.0001), the rotating Scheimpflug camera and scanning-slit topography (r = 0.930, P < 0.0001), and ultrasonic pachymetry and scanning-slit topography (r = 0.887, P < 0.0001). Ultrasonic pachymetry had the smallest intraexaminer variability, and scanning-slit topography had the largest intraexaminer variability among the 3 methods. There were similar variations in interexaminer reproducibility among the 3 methods. Mean corneal thicknesses were comparable among the rotating Scheimpflug camera, ultrasonic pachymetry, and scanning-slit topography with the acoustic-equivalent correction factor. The measurements of the 3 instruments had significant linear correlations with one another, and all methods had highly satisfactory measurement repeatability.
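The repeatability and correlation statistics used above are simple to reproduce. A minimal sketch with invented measurement data (the study's raw values are not given here):

```python
import numpy as np
from scipy import stats

# first and second measurements by one examiner (hypothetical data, microns)
m1 = np.array([538, 545, 541, 552, 530, 548, 536, 544, 539, 550], float)
m2 = np.array([540, 543, 544, 550, 531, 547, 538, 545, 537, 552], float)

repeatability = np.mean(np.abs(m1 - m2))   # intraexaminer variability, as defined above
r, p = stats.pearsonr(m1, m2)              # linear correlation between the two sessions
print(f"mean |diff| = {repeatability:.1f} um, r = {r:.3f}, p = {p:.2g}")
```

The same absolute-difference statistic applied to two examiners' single measurements gives the interexaminer reproducibility.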
A TV Camera System Which Extracts Feature Points For Non-Contact Eye Movement Detection
NASA Astrophysics Data System (ADS)
Tomono, Akira; Iida, Muneo; Kobayashi, Yukio
1990-04-01
This paper proposes a highly efficient camera system which extracts, irrespective of background, feature points such as the pupil, the corneal reflection image, and dot-marks pasted on a human face in order to detect human eye movement by image processing. Two eye movement detection methods are suggested: one utilizing face orientation as well as pupil position, the other utilizing the pupil and corneal reflection images. A method of extracting these feature points using LEDs as illumination devices and a new TV camera system designed to record eye movement are proposed. Two kinds of infra-red LEDs are used. These LEDs are set up a short distance apart and emit polarized light of different wavelengths. One light source beams from near the optical axis of the lens and the other is some distance from the optical axis. The LEDs are operated in synchronization with the camera. The camera includes 3 CCD image pick-up sensors and a prism system with 2 boundary layers. Incident rays are separated into 2 wavelengths by the first boundary layer of the prism. One set of rays forms an image on CCD-3. The other set is split by the half-mirror layer of the prism and forms one image including the regularly reflected component, by placing a polarizing filter in front of CCD-1, and another image not including that component, by placing no polarizing filter in front of CCD-2. Thus, three images with different reflection characteristics are obtained by the three CCDs. Experiments show that two kinds of subtraction operations between the three CCD output images accentuate the three kinds of feature points: the pupil image, the corneal reflection image, and the dot-marks. Since the S/N ratio of the subtracted image is extremely high, the thresholding process is simple and allows the intensity of the infra-red illumination to be reduced. A high-speed image processing apparatus using this camera system is described. Real-time processing of the subtraction, thresholding, and gravity-position calculation of the feature points is possible.
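The subtraction-thresholding-centroid pipeline lends itself to a compact illustration. A minimal sketch of the differencing idea with hypothetical image inputs; the original system implements this in dedicated high-speed hardware:

```python
import numpy as np

def extract_feature(img_a, img_b, threshold):
    """Accentuate a feature by differencing two illumination/polarization
    conditions: structure common to both images cancels, the feature that
    appears in only one condition survives. Returns the feature's centroid
    ("gravity position") or None if nothing exceeds the threshold."""
    diff = img_a.astype(float) - img_b.astype(float)
    mask = diff > threshold                   # simple global threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Toy usage: a bright 3x3 "pupil" present only in the first image.
a = np.zeros((64, 64)); a[30:33, 40:43] = 200.0
b = np.zeros((64, 64))
print(extract_feature(a, b, threshold=100))   # -> (41.0, 31.0)
```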
Pigmented anatomy in Carboniferous cyclostomes and the evolution of the vertebrate eye.
Gabbott, Sarah E; Donoghue, Philip C J; Sansom, Robert S; Vinther, Jakob; Dolocan, Andrei; Purnell, Mark A
2016-08-17
The success of vertebrates is linked to the evolution of a camera-style eye and sophisticated visual system. In the absence of useful data from fossils, scenarios for evolutionary assembly of the vertebrate eye have been based necessarily on evidence from development, molecular genetics and comparative anatomy in living vertebrates. Unfortunately, steps in the transition from a light-sensitive 'eye spot' in invertebrate chordates to an image-forming camera-style eye in jawed vertebrates are constrained only by hagfish and lampreys (cyclostomes), which are interpreted to reflect either an intermediate or degenerate condition. Here, we report-based on evidence of size, shape, preservation mode and localized occurrence-the presence of melanosomes (pigment-bearing organelles) in fossil cyclostome eyes. Time of flight secondary ion mass spectrometry analyses reveal secondary ions with a relative intensity characteristic of melanin as revealed through principal components analyses. Our data support the hypotheses that extant hagfish eyes are degenerate, not rudimentary, that cyclostomes are monophyletic, and that the ancestral vertebrate had a functional visual system. We also demonstrate integument pigmentation in fossil lampreys, opening up the exciting possibility of investigating colour patterning in Palaeozoic vertebrates. The examples we report add to the record of melanosome preservation in Carboniferous fossils and attest to surprising durability of melanosomes and biomolecular melanin. © 2016 The Authors.
Motion Estimation Using the Single-row Superposition-type Planar Compound-like Eye
Cheng, Chi-Cheng; Lin, Gwo-Long
2007-01-01
How can the compound eye of insects capture prey so accurately and quickly? This interesting issue is explored from the perspective of computer vision instead of the viewpoint of biology. The focus is on performance evaluation of noise immunity for motion recovery using the single-row superposition-type planar compound-like eye (SPCE). The SPCE has a special symmetrical framework with a tremendous number of ommatidia, inspired by the compound eye of insects. The noise simulates possible ambiguity of image patterns caused by either environmental uncertainty or the low resolution of CCD devices. Results of extensive simulations indicate that this special visual configuration provides excellent motion estimation performance regardless of the magnitude of the noise. Even when the noise interference is serious, the SPCE is able to dramatically reduce errors in recovering the ego-translation without any type of filter. In other words, the symmetrical, regular, and multiple vision sensing devices of the compound-like eye have a statistical averaging advantage that suppresses noise. This discovery lays an engineering foundation for the secret of the compound eye of insects.
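The claimed statistical-averaging advantage can be illustrated numerically. A minimal Monte-Carlo sketch (not the authors' estimator): each ommatidium contributes one noisy estimate of the same ego-translation, and averaging across the array shrinks the error roughly as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
true_motion = np.array([1.0, 0.5])          # ego-translation (arbitrary units)

def mean_error(n_ommatidia, noise_sigma=0.2, trials=2000):
    """Average over n_ommatidia independent noisy estimates per trial and
    report the mean Euclidean error of the averaged estimate."""
    noise = rng.normal(0, noise_sigma, (trials, n_ommatidia, 2)).mean(axis=1)
    return np.linalg.norm(noise, axis=1).mean()

for n in (1, 4, 16, 64):
    print(n, round(mean_error(n), 4))       # error falls roughly as 1/sqrt(n)
```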
View of the Cupola RWS taken with Fish-Eye Lens
2010-05-08
ISS023-E-039983 (8 May 2010) --- A fish-eye lens attached to an electronic still camera was used by an Expedition 23 crew member to capture this image of the robotic workstation in the Cupola of the International Space Station.
Growth of the eye lens: II. Allometric studies
2014-01-01
Purpose The purpose of this study was to examine the ontogeny and phylogeny of lens growth in a variety of species using allometry. Methods Data on the accumulation of wet and/or dry lens weight as a function of bodyweight were obtained for 40 species and subjected to allometric analysis to examine ontogenic growth and compaction. Allometric analysis was also used to compare the maximum adult lens weights for 147 species with the maximum adult bodyweight, and to compare lens volumes calculated from wet and dry weights with eye volumes calculated from axial length. Results Linear allometric relationships were obtained for the comparison of ontogenic lens and bodyweight accumulation. The body mass exponent (BME) decreased with increasing animal size from around 1.0 in small rodents to 0.4 in large ungulates for both wet and dry weights. Compaction constants for the ontogenic growth ranged from 1.00 in birds and reptiles up to 1.30 in mammals. Allometric comparison of maximum lens wet and dry weights with maximum bodyweights also yielded linear plots, with a BME of 0.504 for all warm-blooded species except primates, which had a BME of 0.25. When lens volumes were compared with eye volumes, all species yielded a scaling constant of 0.75, but the proportionality constants for primates and birds were lower. Conclusions Ontogenic lens growth is fastest, relative to body growth, in small animals and slowest in large animals. Fiber cell compaction takes place throughout life in most species, but not in birds and reptiles. Maximum adult lens size scales with eye size with the same exponent in all species, but birds and primates have smaller lenses relative to eye size than other species. Optical properties of the lens are generated through the combination of variations in the rate of growth, rate of compaction, shape, and size. PMID:24715759
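Allometric exponents such as the BME are conventionally obtained from a log-log regression of lens weight on bodyweight, since a power law W_lens = a * W_body^b becomes linear in log space. A minimal sketch with synthetic data (the paper's datasets are not reproduced here):

```python
import numpy as np

def fit_allometry(body_w, lens_w):
    """Fit lens_w = a * body_w**b by ordinary least squares in log-log space;
    the slope b is the body-mass exponent (BME) discussed above."""
    b, log_a = np.polyfit(np.log(body_w), np.log(lens_w), 1)
    return np.exp(log_a), b

# Synthetic example constructed with b = 0.5, near the value reported for
# most warm-blooded species (numbers are invented for illustration).
body = np.array([0.02, 0.3, 4.0, 70.0, 500.0])   # kg
lens = 0.08 * body ** 0.5                        # g
a, bme = fit_allometry(body, lens)
print(f"a = {a:.3f}, BME = {bme:.2f}")           # recovers BME = 0.50
```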
A new Agraecina spider species from the Balkan Peninsula (FYR Macedonia) (Araneae: Liocranidae).
Deltshev, Christo; Wang, Chunxia
2016-05-30
Specimens were collected using pitfall traps. Coloration is described from alcohol-preserved specimens. Specimens were examined and measured using a Wild M5A stereomicroscope. Further details were studied and measured under an Olympus BX41 compound microscope. All drawings were made using a drawing apparatus attached to a Leica stereomicroscope. Male palps and female genitalia were examined and illustrated after they were dissected from the spiders' bodies. Photos were taken with an Olympus C7070 wide zoom digital camera mounted on an Olympus SZX12 stereomicroscope. The images were montaged using Helicon Focus image stacking software. Measurements of the legs are taken from the dorsal side. Total length of the body includes the chelicerae. All measurements were taken in mm. Abbreviations used in text include: AME, anterior median eyes; ALE, anterior lateral eyes; EM, embolus; MA, median apophysis; CD, copulatory duct; ST, spermatheca; fe, femur; pa, patella; ti, tibia; mt, metatarsus; p, prolateral; d, dorsal; r, retrolateral; v, ventral. Type specimens are deposited in the National Museum of Natural History (NMNHS), Sofia, Bulgaria.
Biologically inspired artificial compound eyes.
Jeong, Ki-Hun; Kim, Jaeyoun; Lee, Luke P
2006-04-28
This work presents the fabrication of biologically inspired artificial compound eyes. The artificial ommatidium, like that of an insect's compound eyes, consists of a refractive polymer microlens, a light-guiding polymer cone, and a self-aligned waveguide to collect light with a small angular acceptance. The ommatidia are omnidirectionally arranged along a hemispherical polymer dome such that they provide a wide field of view similar to that of a natural compound eye. The spherical configuration of the microlenses is accomplished by reconfigurable microtemplating, that is, polymer replication using the deformed elastomer membrane with microlens patterns. The formation of polymer waveguides self-aligned with microlenses is also realized by a self-writing process in a photosensitive polymer resin. The angular acceptance is directly measured by three-dimensional optical sectioning with a confocal microscope, and the detailed optical characteristics are studied in comparison with a natural compound eye.
Compact camera technologies for real-time false-color imaging in the SWIR band
NASA Astrophysics Data System (ADS)
Dougherty, John; Jennings, Todd; Snikkers, Marco
2013-11-01
Previously, real-time false-colored multispectral imaging was not available in a true-snapshot single compact imager. Recent technology improvements now allow this technique to be used in practical applications. This paper covers those advancements as well as a case study of its use in UAVs, where the technology is enabling new remote sensing methodologies.
NASA Astrophysics Data System (ADS)
Luquet, Ph.; Chikouche, A.; Benbouzid, A. B.; Arnoux, J. J.; Chinal, E.; Massol, C.; Rouchit, P.; De Zotti, S.
2017-11-01
EADS Astrium is currently developing a new product line of compact and versatile instruments for high-resolution Earth Observation missions. The first version has been developed in the frame of the ALSAT-2 contract awarded to EADS Astrium by the Algerian Space Agency (ASAL). The Silicon Carbide Korsch-type telescope coupled with a multi-line detector array offers a 2.5 m GSD in the PAN band at nadir from a 680 km altitude (10 m GSD in the four multispectral bands) with a 17.5 km swath width. This compact camera - 340 (W) x 460 (L) x 510 (H) mm³, 13 kg - is carried on a Myriade-type small platform. The electronics unit accommodates the video, housekeeping, and thermal-control functions as well as a 64 Gbit mass memory. Two satellites are being developed; the first one is planned for launch in mid-2009.
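The quoted figures are mutually consistent under a simple pinhole model. A back-of-envelope sketch; the 10 μm pixel pitch is an assumption for illustration only, as the actual pitch and focal length are not stated here:

```python
# Sanity check on the ALSAT-2 figures quoted above.
GSD_PAN = 2.5            # m at nadir
SWATH = 17_500           # m
ALT = 680e3              # m

print(SWATH / GSD_PAN)   # ~7000 PAN detector columns implied across the swath

pitch = 10e-6            # m (assumed pixel pitch, illustrative only)
f = pitch * ALT / GSD_PAN
print(f)                 # implied focal length ~2.72 m for that pitch
```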
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
Engineering design criteria for an image intensifier/image converter camera
NASA Technical Reports Server (NTRS)
Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.
1976-01-01
The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various space shuttle experiment requirements are described. An image intensifier tube was utilized in combination with two brassboards as a power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.
Compact Kirkpatrick–Baez microscope mirrors for imaging laser-plasma x-ray emission
Marshall, F. J.
2012-07-18
Compact Kirkpatrick–Baez microscope mirror components for use in imaging laser-plasma x-ray emission have been manufactured, coated, and tested. A single mirror pair has dimensions of 14 × 7 × 9 mm and a best resolution of ~5 μm. The mirrors are coated with Ir, providing a useful energy range of 2-8 keV when operated at a grazing angle of 0.7°. The mirrors can be circularly arranged to provide 16 images of the target emission, a configuration best suited for use in combination with a custom framing camera. As a result, an alternative arrangement of the mirrors would allow alignment of the images with a four-strip framing camera.
Assessment of Eye Fatigue Caused by 3D Displays Based on Multimodal Measurements
Bang, Jae Won; Heo, Hwan; Choi, Jong-Suk; Park, Kang Ryoung
2014-01-01
With the development of 3D displays, users' eye fatigue has become an important issue when viewing these displays. Previous studies have been conducted on eye fatigue related to 3D display use; however, most of these employed a limited number of modalities for measurement, such as electroencephalograms (EEGs), biomedical signals, and eye responses. In this paper, we propose a new assessment of eye fatigue related to 3D display use based on multimodal measurements. Our research is novel in the following four ways: first, to enhance the accuracy of the assessment of eye fatigue, we measure EEG signals, eye blinking rate (BR), facial temperature (FT), and a subjective evaluation (SE) score before and after a user watches a 3D display; second, in order to accurately measure BR in a manner that is convenient for the user, we implement a remote gaze-tracking system using a high-speed (mega-pixel) camera that measures the eye blinks of both eyes; third, changes in the FT are measured using a remote thermal camera, which can enhance the measurement of eye fatigue; and fourth, we perform various statistical analyses to evaluate the correlations between the EEG signal, eye BR, FT, and SE score based on the T-test, correlation matrix, and effect size. Results show that the correlation of the SE with the other data (FT, BR, and EEG) is the highest, while those of the FT, BR, and EEG with the other data are the second, third, and fourth highest, respectively. PMID:25192315
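The statistical tools named above (paired T-test, effect size) are standard before/after comparisons. A minimal sketch with invented blink-rate data, not the study's measurements:

```python
import numpy as np
from scipy import stats

# before/after-viewing eye blink rates (blinks/min), invented for illustration
before = np.array([14, 17, 12, 19, 15, 16, 13, 18, 14, 17], float)
after  = np.array([19, 21, 15, 24, 18, 22, 16, 23, 17, 20], float)

t, p = stats.ttest_rel(before, after)                       # paired T-test
diff = after - before
d = diff.mean() / diff.std(ddof=1)                          # Cohen's d (effect size)
print(f"t = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```

A correlation matrix across the four modalities (EEG, BR, FT, SE) would follow the same pattern with np.corrcoef on the stacked measurements.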
Visual navigation in starfish: first evidence for the use of vision and eyes in starfish
Garm, Anders; Nilsson, Dan-Eric
2014-01-01
Most known starfish species possess a compound eye at the tip of each arm, which, except for the lack of true optics, resembles an arthropod compound eye. Although these compound eyes have been known for about two centuries, no visually guided behaviour has ever been directly associated with their presence. There are indications that they are involved in negative phototaxis but this may also be governed by extraocular photoreceptors. Here, we show that the eyes of the coral-reef-associated starfish Linckia laevigata are slow and colour blind. The eyes are capable of true image formation although with low spatial resolution. Further, our behavioural experiments reveal that only specimens with intact eyes can navigate back to their reef habitat when displaced, demonstrating that this is a visually guided behaviour. This is, to our knowledge, the first report of a function of starfish compound eyes. We also show that the spectral sensitivity optimizes the contrast between the reef and the open ocean. Our results provide an example of an eye supporting only low-resolution vision, which is believed to be an essential stage in eye evolution, preceding the high-resolution vision required for detecting prey, predators and conspecifics. PMID:24403344
Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe
2012-01-01
Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor-camera system; controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and a sensor-system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observations of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812
Stereo View of Martian Rock Target 'Funzie'
2018-02-08
The surface of the Martian rock target in this stereo image includes small hollows with a "swallowtail" shape characteristic of some gypsum crystals, most evident in the lower left quadrant. These hollows may have resulted from the original crystallizing mineral subsequently dissolving away. The view appears three-dimensional when seen through blue-red glasses with the red lens on the left. The scene spans about 2.5 inches (6.5 centimeters). This rock target, called "Funzie," is near the southern, uphill edge of "Vera Rubin Ridge" on lower Mount Sharp. The stereo view combines two images taken from slightly different angles by the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover, with the camera about 4 inches (10 centimeters) above the target. Fig. 1 and Fig. 2 are the separate "right-eye" and "left-eye" images, taken on Jan. 11, 2018, during the 1,932nd Martian day, or sol, of the rover's work on Mars. Right-eye and left-eye images are available at https://photojournal.jpl.nasa.gov/catalog/PIA22212
Word Lists to Limit Vocabulary in Technical Information.
1985-02-01
[Fragment of the report's alphabetical controlled-vocabulary word list, pairing terms with part-of-speech tags (e.g., COMPACTING verb; COMPOSITE adjective; COMPOUND adj/verb; COMPRESS noun/verb); the full table is not recoverable here.]
Replication and characterization of the compound eye of a fruit fly for imaging purpose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hefu; University of Chinese Academy of Sciences, Beijing 10039; Gong, Xianwei
In this work, we report the replication and characterization of the compound eye of a fruit fly for imaging purposes. In the replication, a soft-lithography method was employed to replicate the compound eye of a fruit fly into a UV-curable polymer. The method was demonstrated to be effective, and the compound eye was replicated into the polymer (NOA78), where each ommatidium has a diameter of about 30 μm and a sag height of about 7 μm. To characterize its optical properties, the point spread function of the compound eye was tested, and an NA of 0.386 was obtained for the replicated polymeric ommatidium. Compared with the NA of a real fruit fly ommatidium, which was measured to be about 0.212, the replicated polymeric ommatidium has a much larger NA because the refractive index of NOA78 is much higher than that of the material forming the real fruit fly ommatidium. Furthermore, the replicated compound eye was used to image a photomask patterned with grating structures to test its imaging properties. It is shown that a grating with a line width of 20 μm can be clearly imaged. The image of the grating formed by the replicated compound eye was demagnified by about 10 times, and therefore a line width of about 2.2 μm in the image plane was obtained, which is close to the diffraction-limited resolution calculated from the measured NA. In summary, the replication method demonstrated is effective, and the replicated compound eye has great potential in optical imaging.
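The reported NA is roughly what a spherical-cap microlens of these dimensions would give. A worked sketch, assuming a refractive index of about 1.56 for NOA78 (an assumption; the exact value is not stated here) and a thin plano-convex lens model:

```python
import math

d, sag = 30e-6, 7e-6                 # ommatidium diameter and sag height (from above)
n = 1.56                             # assumed refractive index of NOA78 (approximate)

r = d / 2
R = (r**2 + sag**2) / (2 * sag)      # radius of curvature of the spherical cap
f = R / (n - 1)                      # thin plano-convex focal length
NA = math.sin(math.atan(r / f))      # aperture half-angle in air
print(f"R = {R*1e6:.1f} um, f = {f*1e6:.1f} um, NA = {NA:.2f}")   # NA ~ 0.39
```

The estimate lands close to the measured 0.386, and repeating it with a lower index (~1.35) drops the NA toward the 0.212 measured for the natural ommatidium.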
Identifying People with Soft-Biometrics at Fleet Week
2013-03-01
… onboard sensors. This included a color camera located in the right eye: Octavia stored 640x480 RGB images at ~4 Hz from a Point Grey Firefly camera. … Face Detection: The Fleet Week experiments demonstrated the potential of soft biometrics for recognition, but all of the existing algorithms currently …
Ultraviolet Viewing with a Television Camera.
ERIC Educational Resources Information Center
Eisner, Thomas; And Others
1988-01-01
Reports on a portable video color camera that is fully suited for seeing ultraviolet images and offers some expanded viewing possibilities. Discusses the basic technique, specialized viewing, and the instructional value of this system of viewing reflectance patterns of flowers and insects that are invisible to the unaided eye. (CW)
Fast and robust curve skeletonization for real-world elongated objects
USDA-ARS?s Scientific Manuscript database
These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...
Pirie, C G; Alario, A
2014-03-01
The objective of this study was to assess and compare indocyanine green (IG) and sodium fluorescein (SF) angiographic findings in the normal canine anterior segment using a digital single lens reflex (dSLR) camera adaptor. Images were obtained from 10 brown-eyed Beagles, free of ocular and systemic disease. All animals received butorphanol (0.2 mg/kg IM), maropitant citrate (1.0 mg/kg SC) and diphenhydramine (2.0 mg/kg SC) 20 min prior to propofol (4 mg/kg IV bolus, 0.2 mg/kg/min continuous rate infusion). Standard color imaging was performed prior to the administration of 0.25% IG (1 mg/kg IV). Imaging was performed using a full spectrum dSLR camera, dSLR camera adaptor, camera lens (Canon 60 mm f/2.8 Macro) and an accessory flash. Images were obtained at a rate of 1/s immediately following IG bolus for 30 s, then at 1, 2, 3, 4 and 5 min. Ten minutes later, 10% SF (20 mg/kg IV) was administered. Imaging was repeated using the same adaptor system and imaging sequence protocol. Arterial, capillary and venous phases were identified during anterior segment IG angiography (ASIGA) and their time sequences were recorded. ASIGA offered improved visualization of the iris vasculature in heavily pigmented eyes compared to anterior segment SF angiography (ASSFA), since visualization of the vascular pattern during ASSFA was not possible due to pigment masking. Leakage of SF was noted in a total of six eyes. The use of IG and SF was not associated with any observed adverse events. The adaptor described here provides a cost-effective alternative to existing imaging systems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Compact portable diffraction moire interferometer
Deason, Vance A.; Ward, Michael B.
1989-01-01
A compact and portable moire interferometer used to determine surface deformations of an object. The improved interferometer comprises a laser beam, optical and fiber-optic devices coupling the beam to one or more evanescent wave splitters, and collimating lenses directing the split beam at one or more specimen gratings. Observation means, including film and video cameras, may be used to view and record the resultant fringe patterns.
Compact CdZnTe-based gamma camera for prostate cancer imaging
NASA Astrophysics Data System (ADS)
Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.
2011-06-01
In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their application to diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it can potentially detect prostate cancers at their early stages. The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.
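The claim that a few millimeters of CZT suffices can be sanity-checked with the Beer-Lambert law. A minimal sketch; the attenuation coefficient used is an illustrative round number, not a reference value:

```python
import math

def interaction_fraction(mu_cm, thickness_cm):
    """Fraction of normally incident photons interacting in the detector,
    from the Beer-Lambert law: 1 - exp(-mu * t)."""
    return 1.0 - math.exp(-mu_cm * thickness_cm)

# Illustrative value only: the linear attenuation coefficient of CZT near
# 140 keV (Tc-99m) is commonly quoted in the vicinity of a few per cm.
mu_140keV = 3.0
for t_mm in (2, 5, 10):
    print(t_mm, "mm ->", round(interaction_fraction(mu_140keV, t_mm / 10), 2))
```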
Nance, Thomas A.; Siddall, Alvin A.; Cheng, William Y.; Counts, Kevin T.
2005-05-10
Disclosed is an elongated, tubular, compact high-pressure sprayer apparatus for insertion into an access port of vessels having contaminated interior areas that require cleaning by high-pressure water spray. The invention includes a spray nozzle and a camera adjacent thereto, with means for rotating, raising and lowering the nozzle so that areas identified through the camera may be cleaned with a minimum production of waste water to be removed.
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.
2010-04-01
EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.
Recent developments on dry eye disease treatment compounds.
Colligris, Basilio; Alkozi, Hanan Awad; Pintor, Jesus
2014-01-01
Dry eye syndrome is a common multifactorial disease of the tears and ocular surface, characterized by changes in the ocular surface epithelia related to reduced tear quantity and ocular surface sensitivity, leading to an inflammatory reaction. Managing the eye inflammation has proved helpful to patients with dry eye disease, and current treatment is based on the use of topically applied artificial tear products/lubricants, tear retention management, stimulation of tear secretion and anti-inflammatory drugs. In this article we review the corresponding literature and patents, assembling the new treatment approaches of novel and future pharmaceutical compounds destined for dry eye disease treatment. The most frequent categories of compounds presented are secretagogues and anti-inflammatory drugs. These compounds are the research outcome of novel therapeutic strategies designed to reduce key inflammatory pathways and restore healthy tear film.
Development of mechanical structure for the compact space IR camera MIRIS
NASA Astrophysics Data System (ADS)
Moon, Bongkon; Jeong, Woong-Seob; Cha, Sang-Mok; Park, Youngsik; Ree, Chang-Hee; Lee, Dae-Hee; Park, Sung-Joon; Nam, Uk-Won; Park, Jang-Hyun; Ka, Nung Hyun; Lee, Mi Hyun; Lee, Duk-Hang; Pyo, Jeonghyun; Rhee, Seung-Woo; Park, Jong-Oh; Lee, Hyung-Mok; Matsumoto, Toshio; Yang, Sun Choel; Han, Wonyong
2010-07-01
MIRIS is a compact near-infrared camera with a wide field of view of 3.67° × 3.67° on the Korea Science and Technology Satellite 3 (STSAT-3). MIRIS will be launched warm and will cool the telescope optics below 200 K by pointing to deep space in a Sun-synchronous orbit. In order to realize this passive cooling, the mechanical structure was designed in consideration of on-orbit thermal analysis results. Structural analysis was also conducted to ensure safety and stability in launch environments. To achieve the structural and thermal requirements, we fabricated thermal shielding parts such as Glass Fiber Reinforced Plastic (GFRP) pipe supports, a Winston cone baffle, aluminum shield plates, a sunshade, a radiator and 30 layers of Multi Layer Insulation (MLI). These structures effectively block the heat load from the spacecraft and the Earth, and maintain the temperature of the telescope optics within the operating range. A micro cooler was installed in a cold box containing a PICNIC detector and a filter-wheel, cooling the detector down to an operating temperature range. We tested the passive cooling in a simulated space environment and confirmed that the required telescope temperature can be achieved. The driving mechanism of the filter-wheel and the cold-box structure were also developed for the compact space IR camera. Finally, we present the assembly procedures and test results for the mechanical parts of MIRIS.
Development of a low cost high precision three-layer 3D artificial compound eye.
Zhang, Hao; Li, Lei; McCray, David L; Scheiding, Sebastian; Naples, Neil J; Gebhardt, Andreas; Risse, Stefan; Eberhardt, Ramona; Tünnermann, Andreas; Yi, Allen Y
2013-09-23
Artificial compound eyes are typically designed on planar substrates due to the limits of current imaging devices and available manufacturing processes. In this study, a high precision, low cost, three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was constructed to mimic an apposition compound eye on a curved substrate. The freeform microlens array was manufactured on a curved substrate to alter incident light beams and steer their respective images onto a flat image plane. The optical design was performed using ZEMAX. The optical simulation shows that the artificial compound eye can form multiple images with aberrations below 11 μm, adequate for many imaging applications. Both the freeform lens array and the field lens array were manufactured using a microinjection molding process to reduce cost. Aluminum mold inserts were diamond-machined by the slow tool servo method. The performance of the compound eye was tested using a home-built optical setup. The images captured demonstrate that the proposed structures can successfully steer images from a curved surface onto a planar photoreceptor. Experimental results show that the compound eye in this research has a field of view of 87°. In addition, images formed by multiple channels were found to be evenly distributed on the flat photoreceptor. Additionally, overlapping views of the adjacent channels allow higher-resolution images to be re-constructed from multiple 3D images taken simultaneously.
The Development of Fine-Grained Sensitivity to Eye Contact after 6 Years of Age
ERIC Educational Resources Information Center
Vida, Mark D.; Maurer, Daphne
2012-01-01
Adults use eye contact as a cue to the mental and emotional states of others. Here, we examined developmental changes in the ability to discriminate between eye contact and averted gaze. Children (6-, 8-, 10-, and 14-year-olds) and adults (n=18/age) viewed photographs of a model fixating the center of a camera lens and a series of positions to the…
Li, Meiyan; Zhao, Jing; Miao, Huamao; Shen, Yang; Sun, Ling; Tian, Mi; Wadium, Elizabeth; Zhou, Xingtao
2014-05-20
To measure decentration following femtosecond laser small incision lenticule extraction (SMILE) for the correction of myopia and myopic astigmatism in the early learning curve, and to investigate its impact on visual quality. A total of 55 consecutive patients (100 eyes) who underwent the SMILE procedure were included. Decentration was measured using a Scheimpflug camera 6 months after surgery. Uncorrected and corrected distance visual acuity (UDVA, CDVA), manifest refraction, and wavefront errors were also measured. Associations between decentration and the preoperative spherical equivalent were analyzed, as well as the associations between decentration and wavefront aberrations. Regarding efficacy and safety, 40 eyes (40%) had an unchanged CDVA; 32 eyes (32%) gained one line; and 11 eyes (11%) gained two lines. Fifteen eyes (15%) lost one line of CDVA, and two eyes (2%) lost two lines. Ninety-nine of the treated eyes (99%) had a postoperative UDVA better than 1.0, and 100 eyes (100%) had a UDVA better than 0.8. The mean decentered displacement was 0.17 ± 0.09 mm. The decentered displacement of all treated eyes (100%) was within 0.50 mm; 70 eyes (70%) were within 0.20 mm; and 90 eyes (90%) were within 0.30 mm. The vertical coma showed the greatest increase in magnitude. The magnitude of horizontal decentration was found to be associated with an induced horizontal coma. This study suggests that, although mild decentration occurred in the early learning curve, good visual outcomes were achieved after the SMILE surgery. Special efforts to minimize induced vertical coma are necessary. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.
2004-04-01
Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture is used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low-contrast conditions.
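The paper's fast algorithm exploits camera-specific features that the abstract does not describe; as a generic stand-in, the sketch below locates the pupil as the centroid of the darkest pixels, a common first stage in dark-pupil eyetracking. The threshold and the synthetic frame are illustrative assumptions:

```python
import numpy as np

def pupil_center(frame: np.ndarray, dark_thresh: int = 40):
    """Estimate the pupil center as the centroid of the darkest pixels.

    A generic dark-pupil heuristic, not the paper's algorithm, which
    leverages CCD-specific features that are not described in the abstract.
    """
    ys, xs = np.nonzero(frame < dark_thresh)  # pupil = darkest blob in IR images
    if xs.size == 0:
        return None                           # no candidate pixels found
    return float(xs.mean()), float(ys.mean())

# Synthetic 8-bit frame with a dark disc centered at (x, y) = (120, 80):
frame = np.full((240, 320), 200, dtype=np.uint8)
yy, xx = np.ogrid[:240, :320]
frame[(xx - 120) ** 2 + (yy - 80) ** 2 < 15 ** 2] = 10
print(pupil_center(frame))                    # ~ (120.0, 80.0)
```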
Design of a MEMS-based retina scanning system for biometric authentication
NASA Astrophysics Data System (ADS)
Woittennek, Franziska; Knobbe, Jens; Pügner, Tino; Schelinski, Uwe; Grüger, Heinrich
2014-05-01
There is an increasing need for reliable authentication for a number of applications such as e-commerce. Common authentication methods based on ownership (ID card) or knowledge factors (password, PIN) are often prone to manipulation and may therefore not be safe enough. Various inherence-factor-based methods like fingerprint, retinal pattern or voice identification are considered more secure. Retina scanning in particular offers both a low false rejection rate (FRR) and a low false acceptance rate (FAR) of about one in a million. Images of the retina with its characteristic pattern of blood vessels can be made with either a fundus camera or laser scanning methods. The present work describes the optical design of a new compact retina laser scanner based on MEMS (Micro-Electro-Mechanical System) technology. The use of a dual-axis micro scanning mirror for laser beam deflection enables a more compact and robust design compared to classical systems. The scanner exhibits a full field of view of 10°, which corresponds to an area of 4 mm² on the retinal surface surrounding the optic disc. The system works in the near infrared and is designed for use under ambient light conditions, which implies a pupil diameter of 1.5 mm. Furthermore, it features a long eye relief of 30 mm so that it can be conveniently used by persons wearing glasses. The optical design requirements and the optical performance are discussed in terms of spot diagrams and ray fan plots.
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. The system is a highly effective human-machine interface that detects head movement from the changing positions and numbers of light sources on the head. When the user utilizes the head-mounted display to browse a computer screen, the system captures images of the user's eyes with CCD cameras, which also measure the angle and position of the light sources. In the eye-tracking system, a computer program locates the center point of each pupil in the images and records information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head using a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so that the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.
Hardware Architecture and Cutting-Edge Assembly Process of a Tiny Curved Compound Eye
Viollet, Stéphane; Godiot, Stéphanie; Leitel, Robert; Buss, Wolfgang; Breugnon, Patrick; Menouni, Mohsine; Juston, Raphaël; Expert, Fabien; Colonnier, Fabien; L'Eplattenier, Géraud; Brückner, Andreas; Kraze, Felix; Mallot, Hanspeter; Franceschini, Nicolas; Pericet-Camara, Ramon; Ruffier, Franck; Floreano, Dario
2014-01-01
The demand for bendable sensors increases constantly in the challenging field of soft and micro-scale robotics. We present here, in more detail, the flexible, functional, insect-inspired curved artificial compound eye (CurvACE) that was previously introduced in the Proceedings of the National Academy of Sciences (PNAS, 2013). This cylindrically-bent sensor with a large panoramic field of view of 180° × 60°, composed of 630 artificial ommatidia, weighs only 1.75 g and is extremely compact and power-lean (0.9 W), while achieving unique visual motion sensing performance (1950 frames per second) over a five-decade range of illuminance. In particular, this paper details the innovative Very Large Scale Integration (VLSI) sensing layout, the accurate assembly fabrication process, the new fast read-out interface, and the auto-adaptive dynamic response of the CurvACE sensor. Starting from photodetectors and microoptics on wafer substrates and a flexible printed circuit board, the complete assembly of CurvACE was performed in a planar configuration, ensuring high alignment accuracy and compatibility with state-of-the-art assembly processes. The characteristics of the photodetector of one artificial ommatidium have been assessed in terms of its dynamic response to light steps. We also characterized the local auto-adaptability of CurvACE photodetectors in response to large illuminance changes: this feature will certainly be of great interest for future applications in real indoor and outdoor environments. PMID:25407908
Compact streak camera for the shock study of solids by using the high-pressure gas gun
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Mori, Yasuhito
1993-01-01
For the precise observation of high-speed impact phenomena, a compact high-speed streak camera recording system has been developed. The system consists of a high-pressure gas gun, a streak camera, and a long-pulse dye laser. The gas gun installed in our laboratory has a 40-mm-diameter muzzle and a 2-m-long launch tube. Projectile velocity is measured by the laser beam cut method. The gun is capable of accelerating a 27 g projectile up to 500 m/s if helium gas is used as the driver. The system has been designed on the principle that precise optical measurement methods developed in other areas of research can be applied to gun studies. The streak camera is 300 mm in diameter, with a rectangular rotating mirror driven by an air turbine spindle. The attainable streak velocity is 3 mm/μs. The camera is deliberately small for portability and economy; its streak velocity is therefore lower than that of faster cameras, but it allows low-sensitivity, high-resolution film to be used as the recording medium. We have also constructed a pulsed dye laser of 25-30 μs duration, which can be used as the light source for observation. The laser offers several advantages, such as good directivity and nearly single-frequency output. The feasibility of the system has been demonstrated by performing several experiments.
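The temporal resolution of such a streak record is roughly the effective slit-image width on the film divided by the writing speed. A quick worked computation; the 30 μm film-resolution figure is an assumption for illustration (the abstract gives only the 3 mm/μs streak velocity):

```python
# Temporal resolution ~ spatial resolution on film / streak (writing) velocity.
streak_velocity_mm_per_us = 3.0   # from the abstract
film_resolution_um = 30.0         # assumed effective slit-image width

dt_us = (film_resolution_um * 1e-3) / streak_velocity_mm_per_us
print(f"temporal resolution ~ {dt_us * 1e3:.0f} ns")  # ~10 ns
```

This is why high-resolution film pays off here: halving the effective slit-image width halves the temporal resolution limit at a fixed streak velocity.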
ERIC Educational Resources Information Center
Mulholland, Jessica
2012-01-01
In New York's Port Washington Union Free School District, security and privacy for students, faculty, and staff coexist--thanks to security cameras with eyelids. In 2010, video cameras donated by New York-based SituCon Systems were installed in the main lobby at two of the district's seven schools. "We really haven't had the kind of incidents…
SOUTH WING, MTR-661. INTERIOR DETAIL INSIDE LAB ROOM 131. CAMERA ...
SOUTH WING, MTR-661. INTERIOR DETAIL INSIDE LAB ROOM 131. CAMERA FACING NORTHEAST. NOTE CONCRETE BLOCK WALLS. SAFETY SHOWER AND EYE WASHER AT REAR WALL. INL NEGATIVE NO. HD46-7-2. Mike Crane, Photographer, 2/2005. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID
An automatic lightning detection and photographic system
NASA Technical Reports Server (NTRS)
Wojtasinski, R. J.; Holley, L. D.; Gray, J. L.; Hoover, R. B.
1973-01-01
A conventional 35-mm camera is activated by an electronic signal every time lightning strikes in the general vicinity. An electronic circuit detects lightning by means of an antenna which picks up atmospheric radio disturbances. The camera is equipped with a fish-eye lens, automatic shutter advance, and a small 24-hour clock to indicate the time when exposures are made.
Through the Creator's Eyes: Using the Subjective Camera to Study Craft Creativity
ERIC Educational Resources Information Center
Glaveanu, Vlad Petre; Lahlou, Saadi
2012-01-01
This article addresses a methodological gap in the study of creativity: the difficulty of capturing the microgenesis of creative action in ways that would reflect both its psychological and behavioral dynamics. It explores the use of subjective camera (subcam) by research participants as part of an adapted Subjective Evidence-Based Ethnography…
Researches on hazard avoidance cameras calibration of Lunar Rover
NASA Astrophysics Data System (ADS)
Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong
2017-11-01
China's Lunar Lander and Rover will be launched in 2013 to accomplish the mission goals of soft lunar landing and patrol exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120° × 120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion, producing strongly warped images for which conventional camera calibration algorithms no longer work well. A photogrammetric calibration method for the geometric model of this type of fish-eye optics is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1][2]. For high-precision applications, the accurate calibration model is formulated with radially symmetric distortion and decentering distortion, as well as parameters to model affinity and shear, based on the fish-eye deformation model [3][4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
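For concreteness, here is a minimal sketch of an equidistant (f-theta) fish-eye projection with polynomial radial terms and decentering terms. This is one common fisheye formulation assumed for illustration; the paper's collinearity-equation model and its exact parameter set may differ:

```python
import numpy as np

def ftheta_project(point_cam, f, cx, cy, k=(0.0, 0.0), p=(0.0, 0.0)):
    """Project a 3D point (camera frame) with an equidistant (f-theta) model.

    Ideal mapping r = f * theta, extended with polynomial radial distortion
    (k) and decentering distortion (p). Illustrative formulation only.
    """
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * (theta + k[0] * theta**3 + k[1] * theta**5)
    u, v = r * np.cos(phi), r * np.sin(phi)
    # decentering (tangential) terms applied in the image plane
    u += 2 * p[0] * u * v + p[1] * (r**2 + 2 * u**2)
    v += p[0] * (r**2 + 2 * v**2) + 2 * p[1] * u * v
    return cx + u, cy + v

# A point 0.5 m right, 0.2 m up, 1 m ahead; f in pixels, principal point (512, 512):
print(ftheta_project((0.5, 0.2, 1.0), f=300.0, cx=512.0, cy=512.0))
```

Calibration then amounts to estimating f, (cx, cy), k and p (plus exterior orientation) by minimizing reprojection error over known target points.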
Development of functional ectopic compound eyes in scarabaeid beetles by knockdown of orthodenticle.
Zattara, Eduardo E; Macagno, Anna L M; Busey, Hannah A; Moczek, Armin P
2017-11-07
Complex traits like limbs, brains, or eyes form through coordinated integration of diverse cell fates across developmental space and time, yet understanding how complexity and integration emerge from uniform, undifferentiated precursor tissues remains limited. Here, we use ectopic eye formation as a paradigm to investigate the emergence and integration of novel complex structures following massive ontogenetic perturbation. We show that down-regulation via RNAi of a single head patterning gene, orthodenticle, induces ectopic structures externally resembling compound eyes at the middorsal adult head of both basal and derived scarabaeid beetle species (Onthophagini and Oniticellini). Scanning electron microscopy documents the ommatidial organization of these induced structures, while immunohistochemistry reveals the presence of rudimentary ommatidial lenses, crystalline cones, and associated neural-like tissue within them. Further, RNA-sequencing experiments show that after orthodenticle down-regulation, the transcriptional signature of the middorsal head, the location of ectopic eye induction, converges onto that of regular compound eyes, including up-regulation of several retina-specific genes. Finally, a light-aversion behavioral assay to assess functionality reveals that ectopic compound eyes can rescue the ability to respond to visual stimuli when wild-type eyes are surgically removed. Combined, our results show that knockdown of a single gene is sufficient for the middorsal head to acquire the competence to ectopically generate a functional compound eye-like structure. These findings highlight the buffering capacity of developmental systems, allowing massive genetic perturbations to be channeled toward orderly and functional developmental outcomes, and render ectopic eye formation a widely accessible paradigm to study the evolution of complex systems. Published under the PNAS license.
Compact portable diffraction moire interferometer
Deason, V.A.; Ward, M.B.
1988-05-23
A compact and portable moire interferometer used to determine surface deformations of an object. The improved interferometer is comprised of a laser beam, optical and fiber optics devices coupling the beam to one or more evanescent wave splitters, and collimating lenses directing the split beam at one or more specimen gratings. Observation means including film and video cameras may be used to view and record the resultant fringe patterns. 7 figs.
Advanced freeform optics enabling ultra-compact VR headsets
NASA Astrophysics Data System (ADS)
Benitez, Pablo; Miñano, Juan C.; Zamora, Pablo; Grabovičkić, Dejan; Buljan, Marina; Narasimhan, Bharathwaj; Gorospe, Jorge; López, Jesús; Nikolić, Milena; Sánchez, Eduardo; Lastres, Carmen; Mohedano, Ruben
2017-06-01
We present novel advanced optical designs with a dramatically smaller display-to-eye distance, excellent image quality and a large field of view (FOV). This enables headsets to be much more compact, typically occupying about a fourth of the volume of a conventional headset with the same FOV. The design strategy of these optics is based on a multichannel approach, which reduces the distance from the eye to the display and the display size itself. Unlike conventional microlens arrays, which are also multichannel devices, our designs use freeform optical surfaces to produce excellent imaging quality in the entire field of view, even when operating at very oblique incidences. We present two families of compact solutions that use different types of lenslets: (1) refractive designs, whose lenslets are composed typically of two refractive surfaces each; and (2) light-folding designs that use prism-like three-surface lenslets, in which rays undergo refraction, reflection, total internal reflection and refraction again. The number of lenslets is not fixed, so different configurations may arise, adaptable to flat or curved displays with different aspect ratios. In the refractive designs, the distance between the optics and the display decreases with the number of lenslets, allowing a light field to be displayed when the lenslets become significantly smaller than the eye pupil. On the other hand, the correlation between the number of lenslets and the optics-to-display distance is broken in light-folding designs, since their geometry permits a very short display-to-eye distance even with a small number of lenslets.
A multi-camera system for real-time pose estimation
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin
2007-04-01
This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
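As a rough illustration of the spherical-head geometry described above, the sketch below recovers yaw from the horizontal shift of the eyes-mouth triangle's centroid. It is a simplified reading of the model (the paper works with the triangle's projected angles), and all coordinates are illustrative:

```python
import numpy as np

def estimate_yaw(left_eye, right_eye, mouth, face_cx, face_half_width):
    """Rough yaw estimate under a spherical-head model.

    As the head rotates about the vertical axis, the eyes-mouth triangle
    shifts horizontally across the projected head sphere; normalizing that
    shift by the projected head radius gives sin(yaw). This is a simplified
    reading of the geometric model, not the paper's exact angle equations.
    """
    tri_cx = (left_eye[0] + right_eye[0] + mouth[0]) / 3.0
    s = np.clip((tri_cx - face_cx) / face_half_width, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Frontal face: triangle centroid on the head axis -> yaw ~ 0 degrees.
print(estimate_yaw((90, 80), (130, 80), (110, 130), face_cx=110.0,
                   face_half_width=60.0))
# Triangle shifted right by 30 px on a 60 px radius -> yaw ~ 30 degrees.
print(estimate_yaw((120, 80), (160, 80), (140, 130), face_cx=110.0,
                   face_half_width=60.0))
```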
Surgical Videos with Synchronised Vertical 2-Split Screens Recording the Surgeons' Hand Movement.
Kaneko, Hiroki; Ra, Eimei; Kawano, Kenichi; Yasukawa, Tsutomu; Takayama, Kei; Iwase, Takeshi; Terasaki, Hiroko
2015-01-01
To improve the state-of-the-art teaching system by creating surgical videos with synchronised vertical 2-split screens. An ultra-compact, wide-angle point-of-view camcorder (HX-A1, Panasonic) was mounted on the surgical microscope focusing mostly on the surgeons' hand movements. In combination with the regular surgical videos obtained from the CCD camera in the surgical microscope, synchronised vertical 2-split-screen surgical videos were generated with the video-editing software. Using synchronised vertical 2-split-screen videos, residents of the ophthalmology department could watch and learn how assistant surgeons controlled the eyeball, while the main surgeons performed scleral buckling surgery. In vitrectomy, the synchronised vertical 2-split-screen videos showed the surgeons' hands holding the instruments and moving roughly and boldly, in contrast to the very delicate movements of the vitrectomy instruments inside the eye. Synchronised vertical 2-split-screen surgical videos are beneficial for the education of young surgical trainees when learning surgical skills including the surgeons' hand movements. © 2015 S. Karger AG, Basel.
Snapshot imaging polarimeters using spatial modulation
NASA Astrophysics Data System (ADS)
Luo, Haitao
The recent demonstration of a novel snapshot imaging polarimeter using the fringe modulation technique shows promise for building a compact, moving-parts-free device. Although demonstrated in principle, this technique has not been adequately studied. In the effort of advancing it, we build a complete theoretical framework that addresses the key issues regarding the polarization aberrations caused by the functional elements. With this model, we have the knowledge necessary for designing, analyzing and optimizing such systems. We also propose a broader technique that uses arbitrary modulation instead of sinusoidal fringes, which offers more engineering freedom and can be a route to achromatizing the system. On the hardware side, several important advances were made. We extend the polarimeter technique from the visible to the mid-wavelength infrared by using yttrium vanadate crystals. We also incorporate a Savart plate polarimeter into a fundus camera to measure the human eye's retinal retardance, useful information for glaucoma diagnosis. Thirdly, the world's smallest imaging polarimeter is proposed and demonstrated, which may open many applications in security, remote sensing and bioscience.
Melzer, Roland R
2009-12-01
Stemmata or "larval" eyes are of crucial importance for the understanding of the evolution and ontogeny of the hexapods' main visual organs, the compound eyes. Using classical neuroanatomical techniques, I showed that the persisting stemmata of Chaoborus imagos are connected to persisting stemma neuropils neighboring the first- and second-order neuropils of the compound eyes; therefore the imago also possesses a stemma lamina and medulla closely associated with the architecture and the developmental pattern of those of the compound eyes. The findings are compared with other arthropods, e.g. accessory lateral eyes in Amandibulata and Myriapoda, suggesting some ancestral rather than derived character states. (c) 2009 Wiley-Liss, Inc.
Rapid microscopy measurement of very large spectral images.
Lindner, Moshe; Shotan, Zav; Garini, Yuval
2016-05-02
The spectral content of a sample provides important information that cannot be detected by the human eye or by using an ordinary RGB camera. The spectrum is typically a fingerprint of the chemical compound, its environmental conditions, phase and geometry. Thus, measuring the spectrum at each point of a sample is important for a large range of applications, from art preservation through forensics to pathological analysis of a tissue section. To date, however, there is no system that can measure the spectral image of a large sample in a reasonable time. Here we present a novel method for scanning very large spectral images of microscopy samples, even if they cannot be viewed in a single field of view of the camera. The system is based on capturing information while the sample is being scanned continuously 'on the fly'. Spectral separation implements Fourier spectroscopy by using an interferometer mounted along the optical axis. A high spectral resolution of ~5 nm at 500 nm could be achieved with diffraction-limited spatial resolution. The acquisition time is 6-8 minutes for a 10 mm × 10 mm sample measured under a bright-field microscope at 20× magnification.
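In Fourier spectroscopy the spectrum is recovered by transforming the interferogram, i.e. the detected intensity as a function of optical path difference (OPD). A minimal single-pixel sketch with illustrative numbers (the instrument's actual sampling and ~5 nm resolution differ):

```python
import numpy as np

n = 4096
opd = np.linspace(0, 200e-6, n)             # OPD axis in metres (illustrative)
wavelength = 500e-9                         # a single 500 nm spectral line
interferogram = 1 + np.cos(2 * np.pi * opd / wavelength)

spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumber = np.fft.rfftfreq(n, d=opd[1] - opd[0])   # cycles per metre
peak = wavenumber[np.argmax(spectrum)]
print(f"recovered wavelength ~ {1e9 / peak:.0f} nm")  # ~500 nm
```

The spectral resolution is set by the maximum OPD scanned, which is how an interferometer on the optical axis can reach fine resolution without any dispersive element.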
Design of a compact low-power human-computer interaction equipment for hand motion
NASA Astrophysics Data System (ADS)
Wu, Xianwei; Jin, Wenguang
2017-01-01
Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device for gesture recognition. The system combines multi-modal sensing signals, a vision signal and a motion signal, and is equipped with a depth camera and a motion sensor. At 40 mm × 30 mm after tight integration, the structure is compact and portable. The system is built on a layered modular framework, which supports real-time acquisition (60 fps), processing and transmission, combining synchronous fusion of asynchronous concurrent collection with wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral states dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes algorithms with the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. The results show that overall energy consumption can be as low as 0.5 W.
STS-36 Mission Specialist Thuot operates 16mm camera on OV-104's middeck
1990-03-03
STS-36 Mission Specialist (MS) Pierre J. Thuot operates a 16mm ARRIFLEX motion picture camera mounted on the open airlock hatch via a bracket. Thuot uses the camera to record the activity of his fellow STS-36 crewmembers on the middeck of Atlantis, Orbiter Vehicle (OV) 104. Positioned between the airlock hatch and the starboard wall-mounted sleep restraints, Thuot, wearing a FAIRFAX t-shirt, squints into the camera's eyepiece. Thuot and four other astronauts spent four days, 10 hours and 19 minutes aboard OV-104 for the Department of Defense (DOD)-dedicated mission.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back by the lack of a convenient way to reduce the stereo camera lenses' interaxial separation to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows the interaxial separation to be varied down to small values, using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame), with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-26
... (202) 205-3065. Copies of non-confidential documents filed in connection with this investigation are... a complaint filed on behalf of HumanEyes Technologies, Ltd. of Jerusalem, Israel on March 28, 2012..., complainant HumanEyes Technologies filed an unopposed motion to terminate the investigation pursuant to...
Solving the robot-world, hand-eye(s) calibration problem with iterative methods
USDA-ARS?s Scientific Manuscript database
Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
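The abstract is cut off mid-formulation, but the AX = ZB model it names is standard and admits a compact numerical illustration. The sketch below parameterizes the two unknown transforms by rotation vectors and translations and recovers them by nonlinear least squares over synthetic pose pairs; the pose conventions for A and B are assumptions, and this generic solver stands in for, rather than reproduces, the manuscript's iterative methods:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def to_T(params):
    """6-vector (rotation vector, translation) -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(params[:3]).as_matrix()
    T[:3, 3] = params[3:]
    return T

def solve_ax_zb(A_list, B_list):
    """Solve AX = ZB for X and Z by nonlinear least squares.

    A_i is taken as the robot base -> end-effector pose and B_i as the
    camera -> target pose (assumed conventions); X and Z are the unknown
    hand-eye and robot-world transforms.
    """
    def residual(params):
        X, Z = to_T(params[:6]), to_T(params[6:])
        return np.concatenate([(A @ X - Z @ B).ravel()
                               for A, B in zip(A_list, B_list)])
    sol = least_squares(residual, x0=np.zeros(12))
    return to_T(sol.x[:6]), to_T(sol.x[6:])

# Synthetic check: generate A_i = Z B_i X^-1 from known X and Z.
rng = np.random.default_rng(0)
X_true, Z_true = to_T(0.3 * rng.normal(size=6)), to_T(0.3 * rng.normal(size=6))
B_list = [to_T(rng.normal(size=6)) for _ in range(10)]
A_list = [Z_true @ B @ np.linalg.inv(X_true) for B in B_list]
X_est, Z_est = solve_ax_zb(A_list, B_list)
print(np.abs(A_list[0] @ X_est - Z_est @ B_list[0]).max())  # ~0 if converged
```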
Spirit Beside 'Home Plate,' Sol 1809 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed: left-eye and right-eye views of the color stereo pair for PIA11803.] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Opportunity's Surroundings on Sol 1818 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed: left-eye and right-eye views of the color stereo pair for PIA11846.] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Opportunity's Surroundings on Sol 1798 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[Figures removed: left-eye and right-eye views of the color stereo pair for PIA11850.] NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
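All three rover records above describe the same red-blue anaglyph composition: with the red lens on the left, the left-eye image supplies the red channel and the right-eye image the remaining channels. A minimal sketch (shapes and values are illustrative):

```python
import numpy as np

def anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Red-blue (red-cyan) anaglyph: red from the left-eye view, green and
    blue from the right-eye view, matching a red lens over the left eye."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 255   # pure red
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 2] = 255  # pure blue
print(anaglyph(left, right)[0, 0])  # [255   0 255]
```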
NASA Astrophysics Data System (ADS)
Steinmetz, Klaus
1995-05-01
Within the automotive industry, especially in the development and improvement of safety systems, there are many highly accelerated motions that cannot be followed, and consequently cannot be analyzed, by the human eye. For the vehicle safety tests at AUDI, which are performed as 'Crash Tests', 'Sled Tests' and 'Static Component Tests', 'Stalex', 'Hycam' and 'Locam' cameras are in use. Nowadays, automobile production is inconceivable without the use of high-speed cameras.
STS-31 crew activity on the middeck of the Earth-orbiting Discovery, OV-103
1990-04-29
STS031-05-002 (24-29 April 1990) --- A 35mm camera with a "fish eye" lens captured this high-angle image on Discovery's middeck. Astronaut Kathryn D. Sullivan works with the IMAX camera in the foreground, while Astronaut Steven A. Hawley consults a checklist in the corner. An Arriflex motion picture camera records a student ion arc experiment in apparatus mounted on a stowage locker. The experiment was the project of Gregory S. Peterson, currently a student at Utah State University.
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.
Jung, Hyukjin; Jeong, Ki-Hun
2009-08-17
A microfabricated compound eye, comparable to a natural compound eye, shows a spherical arrangement of integrated optical units called artificial ommatidia, each consisting of a self-aligned microlens and waveguide. Increasing the waveguide length is imperative for obtaining high-resolution images through an artificial compound eye, for wide field-of-view imaging as well as fast motion detection. This work presents an effective method for increasing the waveguide length of an artificial ommatidium using a laser-induced self-writing process in a photosensitive polymer resin. The numerical and experimental results show the uniform formation of waveguides and an increase of waveguide length beyond 850 μm. (c) 2009 Optical Society of America
Henze, Miriam J; Dannenhauer, Kara; Kohler, Martin; Labhart, Thomas; Gesemann, Matthias
2012-08-30
Opsins are key proteins in animal photoreception. Together with a light-sensitive group, the chromophore, they form visual pigments which initiate the visual transduction cascade when photoactivated. The spectral absorption properties of visual pigments are mainly determined by their opsins, and thus opsins are crucial for understanding the adaptations of animal eyes. Studies on the phylogeny and expression pattern of opsins have received considerable attention, but our knowledge about insect visual opsins is still limited. Up to now, researchers have focused on holometabolous insects, while general conclusions require sampling from a broader range of taxa. We have therefore investigated visual opsins in the ocelli and compound eyes of the two-spotted cricket Gryllus bimaculatus, a hemimetabolous insect. Phylogenetic analyses place all identified cricket sequences within the three main visual opsin clades of insects. We assign three of these opsins to visual pigments found in the compound eyes with peak absorbances in the green (515 nm), blue (445 nm) and UV (332 nm) spectral range. Their expression pattern divides the retina into distinct regions: (1) the polarization-sensitive dorsal rim area with blue- and UV-opsin, (2) a newly-discovered ventral band of ommatidia with blue- and green-opsin and (3) the remainder of the compound eye with UV- and green-opsin. In addition, we provide evidence for two ocellar photopigments with peak absorbances in the green (511 nm) and UV (350 nm) spectral range, and with opsins that differ from those expressed in the compound eyes. Our data show that cricket eyes are spectrally more specialized than has previously been assumed, suggesting that similar adaptations in other insect species might have been overlooked. The arrangement of spectral receptor types within some ommatidia of the cricket compound eyes differs from the generally accepted pattern found in holometabolous insect taxa and awaits a functional explanation. From the opsin phylogeny, we conclude that gene duplications, which permitted differential opsin expression in insect ocelli and compound eyes, occurred independently in several insect lineages and are recent compared to the origin of the eyes themselves.
Using spectral information in forensic imaging.
Miskelly, Gordon M; Wagner, John H
2005-12-20
Improved detection of forensic evidence by combining narrow band photographic images taken at a range of wavelengths is dependent on the substance of interest having a significantly different spectrum from the underlying substrate. While some natural substances such as blood have distinctive spectral features which are readily distinguished from common colorants, this is not true for visualization agents commonly used in forensic science. We now show that it is possible to select reagents with narrow spectral features that lead to increased visibility using digital cameras and computer image enhancement programs even if their coloration is much less intense to the unaided eye than traditional reagents. The concept is illustrated by visualising latent fingermarks on paper with the zinc complex of Ruhemann's Purple, cyanoacrylate-fumed fingerprints with Eu(tta)₃(phen), and soil prints with 2,6-bis(benzimidazol-2-yl)-4-[4'-(dimethylamino)phenyl]pyridine [BBIDMAPP]. In each case background correction is performed at one or two wavelengths bracketing the narrow absorption or emission band of these compounds. However, compounds with sharp spectral features would also lead to improved detection using more advanced algorithms such as principal component analysis.
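The one- or two-wavelength background correction described above amounts to estimating the background under the narrow band by interpolation from the bracketing images and subtracting it. A minimal sketch of the two-wavelength case (wavelengths and pixel values are illustrative):

```python
import numpy as np

def band_minus_background(i_peak, i_lo, i_hi, w_peak, w_lo, w_hi):
    """Two-wavelength background correction: linearly interpolate the
    background under a narrow absorption/emission band from images taken at
    bracketing wavelengths, then subtract. A generic sketch of the concept."""
    frac = (w_peak - w_lo) / (w_hi - w_lo)
    background = (1.0 - frac) * i_lo + frac * i_hi
    return i_peak - background

i_peak = np.array([[120.0, 200.0]])  # image at the band wavelength
i_lo = np.array([[100.0, 100.0]])    # image just below the band
i_hi = np.array([[140.0, 140.0]])    # image just above the band
print(band_minus_background(i_peak, i_lo, i_hi, 550.0, 530.0, 570.0))
# -> [[ 0. 80.]]: only the pixel carrying the narrow-band signal survives
```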
Meyer-Rochow, Victor Benno
2015-03-01
Similarities and differences between the 2 main kinds of compound eye (apposition and superposition) are briefly explained before several promising topics for research on compound eyes are introduced. Research on the embryology and molecular control of the development of the insect clear-zone eye with superposition optics is one of the suggestions, because almost all of the developmental work on insect eyes in the past has focused on eyes with apposition optics. Age- and habitat-related ultrastructural studies of the retinal organization are another suggestion, and the deer ked Lipoptena cervi, which has an aerial phase during which it is winged, followed by a several-months-long parasitic phase during which it is wingless, is mentioned as a candidate species. Sexual dimorphism, expressing itself in many species as a difference in eye structure and function, provides another promising field for compound eye researchers, as does a focus on compound eye miniaturization in very small insects, especially aquatic species in which clear-zone eyes are diagnostic, or tiny non-aquatic insects belonging to taxa such as the Diptera, in which open rather than closed rhabdoms are the rule. Structures like interommatidial hairs and glands, as well as corneal microridges, are yet another field that could yield interesting results and has received insufficient consideration in the past. Finally, the dearth of information on distance vision and depth perception is mentioned, and a plea is made to examine the photic environment inside the foam shelters of spittle bugs, the chrysalides of pupae, and other structures shielding insects and crustaceans. © 2014 Institute of Zoology, Chinese Academy of Sciences.
Hurricane Bonnie, Northeast of Bermuda, Atlantic Ocean
1992-09-20
STS047-151-618 (19 Sept 1992) --- A large format Earth observation camera captured this scene of Hurricane Bonnie during the late phase of the mission. Bonnie was located about 500 miles from Bermuda near a point centered at 35.4 degrees north latitude and 56.8 degrees west longitude. The Linhof camera was aimed through one of Space Shuttle Endeavour's aft flight deck windows (note slight reflection at right). The crew members noticed the well defined eye in this hurricane, compared to an almost non-existent eye in the case of Hurricane Iniki, which was relatively broken up by the mission's beginning. Six NASA astronauts and a Japanese payload specialist conducted eight days of in-space research.
Using thermographic cameras to investigate eye temperature and clinical severity in depression
NASA Astrophysics Data System (ADS)
Maller, Jerome J.; George, Shefin Sam; Viswanathan, Rekha Puzhavakkathumadom; Fitzgerald, Paul B.; Junor, Paul
2016-02-01
Previous studies suggest that altered corneal temperature may be a feature of schizophrenia, but the association between major depressive disorder (MDD) and corneal temperature has yet to be assessed. The aim of this study is to investigate whether eye temperature is different among MDD patients than among healthy individuals. We used a thermographic camera to measure and compare the temperature profile across the corneas of 16 patients with MDD and 16 age- and sex-matched healthy subjects. We found that the average corneal temperature between the two groups did not differ statistically, although clinical severity correlated positively with right corneal temperature. Corneal temperature may be an indicator of clinical severity in psychiatric disorders, including depression.
PBF Reactor Building (PER620). Camera faces south along west wall. ...
PBF Reactor Building (PER-620). Camera faces south along west wall. Gap between native lava rock and concrete basement walls is being backfilled and compacted. Wire mesh protects workers from falling rock. Note penetrations for piping that will carry secondary coolant water to Cooling Tower. Photographer: Holmes. Date: June 15, 1967. INEEL negative no. 67-3665 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Bright compact bulges at intermediate redshifts
NASA Astrophysics Data System (ADS)
Sachdeva, Sonali; Saha, Kanak
2018-07-01
Studying bright (MB < -20), intermediate-redshift (0.4 < z < 1.0), disc-dominated (nB < 2.5) galaxies from Hubble Space Telescope/Advanced Camera for Surveys and Wide Field Camera 3 in the Chandra Deep Field-South, in the rest-frame B and I bands, we found a new class of bulges that is brighter and more compact than ellipticals. We refer to them as 'bright, compact bulges' (BCBs); they resemble neither classical nor pseudo-bulges and constitute ~12 per cent of the total bulge population at these redshifts. Examining the free-bulge + disc decomposition sample and elliptical galaxy sample from Simard et al., we find that only ~0.2 per cent of the bulges can be classified as BCBs in the local Universe. The bulge-to-total light ratio of disc galaxies with BCBs is (at ~0.4) a factor of ~2 and ~4 larger than for those with classical and pseudo-bulges, respectively. BCBs are ~2.5 and ~6 times more massive than classical and pseudo-bulges. Although disc galaxies with BCBs host the most massive and dominant bulge type, their specific star formation rate is 1.5-2 times higher than that of other disc galaxies. This is contrary to the expectation that a massive compact bulge would lead to lower star formation rates. We speculate that our BCB host disc galaxies are descendants of the massive, compact, and passive elliptical galaxies observed at higher redshifts. Those high-redshift ellipticals lack local counterparts and possibly evolved by acquiring a compact disc around them. The overall properties of BCBs support a picture of galaxy assembly in which younger discs are accreted around massive pre-existing spheroids.
Implementation and performance of shutterless uncooled micro-bolometer cameras
NASA Astrophysics Data System (ADS)
Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.
2015-06-01
A shutterless algorithm has been implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated onboard the camera. The limited resources in the camera require a compact algorithm, so coding efficiency is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) between the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very interesting for thermal infrared applications where small weight and size, and continuous operation, are important.
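The onboard algorithm itself is not public, but the stated ingredients, a calibration set plus a global temperature coefficient, suggest a gain/offset correction whose offset is extrapolated with focal-plane temperature. A generic sketch under that assumption:

```python
import numpy as np

def shutterless_nuc(raw, gain, offset_ref, temp_coeff, t_fpa, t_ref):
    """Sketch of a shutterless non-uniformity correction.

    gain and offset_ref come from a factory calibration set; the offset is
    extrapolated to the current focal-plane temperature t_fpa via a
    temperature coefficient. A single global scalar coefficient (as the text
    suggests) is the special case temp_coeff = c * ones_like(raw). The
    actual onboard formulation is assumed, not reproduced.
    """
    offset = offset_ref + temp_coeff * (t_fpa - t_ref)
    return gain * (raw.astype(np.float32) - offset)

raw = np.array([[1000.0, 1020.0], [980.0, 1005.0]], dtype=np.float32)
gain = np.ones((2, 2), dtype=np.float32)
offset_ref = np.array([[100.0, 120.0], [80.0, 105.0]], dtype=np.float32)
coeff = np.full((2, 2), 2.0, dtype=np.float32)   # counts per kelvin
# All pixels correct to the same value -> zero residual non-uniformity:
print(shutterless_nuc(raw, gain, offset_ref, coeff, t_fpa=305.0, t_ref=300.0))
```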
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes, with a spatial resolution of >20 km. The optics and filters are emphasized.
Study of multi-channel optical system based on the compound eye
NASA Astrophysics Data System (ADS)
Zhao, Yu; Fu, Yuegang; Liu, Zhiying; Dong, Zhengchao
2014-09-01
As an important part of machine vision, compound eye optical systems offer high resolution and a large FOV. By applying compound eye optical systems to target detection and recognition, the contradiction between large FOV and high resolution in traditional single-aperture optical systems can be resolved effectively, while the parallel processing ability of such optical systems is fully exploited. In this paper, the imaging features of compound eye optical systems are analyzed. After discussing the relationship between the FOV of each subsystem and the contact (overlap) ratio of the FOV of the whole system, a method to define the FOV of the subsystem is presented, and a compound eye optical system is designed based on a large FOV synthesized from multiple channels. The compound eye optical system consists of a central optical system and an array subsystem, in which the array subsystem is used to capture the target, while a high-resolution image of the target is obtained by the central optical system. With the advantages of small volume, light weight and rapid response, the optical system can detect objects within 3 km over a 60° FOV without any scanning device, and objects in the central field (2w = 5.1°) can be imaged with high resolution for recognition.
Non-Invasive Detection of Anaemia Using Digital Photographs of the Conjunctiva.
Collings, Shaun; Thompson, Oliver; Hirst, Evan; Goossens, Louise; George, Anup; Weinkove, Robert
2016-01-01
Anaemia is a major health burden worldwide. Although the finding of conjunctival pallor on clinical examination is associated with anaemia, inter-observer variability is high, and definitive diagnosis of anaemia requires a blood sample. We aimed to detect anaemia by quantifying conjunctival pallor using digital photographs taken with a consumer camera and a popular smartphone. Our goal was to develop a non-invasive screening test for anaemia. The conjunctivae of haemato-oncology in- and outpatients were photographed in ambient lighting using a digital camera (Panasonic DMC-LX5) and the rear-facing camera of a smartphone (Apple iPhone 5S), alongside an in-frame calibration card. Following image calibration, the conjunctival erythema index (EI) was calculated and correlated with laboratory-measured haemoglobin concentration. Three clinicians independently evaluated each image for conjunctival pallor. Conjunctival EI was reproducible between images (average coefficient of variation 2.96%). EI of the palpebral conjunctiva correlated more strongly with haemoglobin concentration than that of the forniceal conjunctiva. Using the compact camera, palpebral conjunctival EI had a sensitivity of 93% and 57% and specificity of 78% and 83% for detection of anaemia (haemoglobin < 110 g/L) in training and internal validation sets, respectively. Similar results were found using the iPhone camera, though the EI cut-off value differed. Conjunctival EI analysis compared favourably with clinician assessment, with a higher positive likelihood ratio for prediction of anaemia. The erythema index of the palpebral conjunctiva calculated from images taken with a compact camera or mobile phone correlates with haemoglobin and compares favourably to clinician assessment for prediction of anaemia. If confirmed in further series, this technique may be useful for non-invasive screening for anaemia.
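The abstract does not state the erythema index formula; a common digital-image approximation exploits haemoglobin's stronger absorption of green than red light, so EI rises with redness. A hedged sketch of that per-pixel computation on a colour-calibrated image (the log-ratio form and the 100x scaling are assumptions, not taken from the paper):

```python
import numpy as np

def erythema_index(rgb):
    """Per-pixel erythema index from a colour-calibrated RGB image
    (assumed form: EI ~ 100 * log10(R/G); haemoglobin absorbs green
    more strongly than red, so redder tissue scores higher)."""
    r = rgb[..., 0].astype(float) + 1e-6   # avoid log of zero
    g = rgb[..., 1].astype(float) + 1e-6
    return 100.0 * np.log10(r / g)

# Screening would then compare the mean EI over the segmented palpebral
# conjunctiva against a cut-off chosen on a training set.
```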
Eye movement analysis of reading from computer displays, eReaders and printed books.
Zambarbieri, Daniela; Carniglia, Elena
2012-09-01
To compare eye movements during silent reading of three eBooks and a printed book. The three different eReading tools were a desktop PC, iPad tablet and Kindle eReader. Video-oculographic technology was used for recording eye movements. In the case of reading from the computer display the recordings were made by a video camera placed below the computer screen, whereas for reading from the iPad tablet, eReader and printed book the recording system was worn by the subject and had two cameras: one for recording the movement of the eyes and the other for recording the scene in front of the subject. Data analysis provided quantitative information in terms of number of fixations, their duration, and the direction of the movement, the latter to distinguish between fixations and regressions. Mean fixation duration was different only in reading from the computer display, and was similar for the Tablet, eReader and printed book. The percentage of regressions with respect to the total amount of fixations was comparable for eReading tools and the printed book. The analysis of eye movements during reading an eBook from different eReading tools suggests that subjects' reading behaviour is similar to reading from a printed book. © 2012 The College of Optometrists.
Decline of vertical gaze and convergence with aging.
Oguro, Hiroaki; Okada, Kazunori; Suyama, Nobuo; Yamashita, Kazuya; Yamaguchi, Shuhei; Kobayashi, Shotai
2004-01-01
Disturbance of vertical eye movement and ocular convergence is often observed in elderly people, but little is known about its frequency. The purpose of this study was to investigate age-associated changes in vertical eye movement and convergence in healthy elderly people, using a digital video camera system. We analyzed vertical eye movements and convergence in 113 neurologically normal elderly subjects (mean age 70 years) in comparison with 20 healthy young controls (mean age 32 years). The range of vertical eye movement was analyzed quantitatively and convergence was analyzed qualitatively. In the elderly subjects, the angle of vertical gaze decreased with advancing age and it was significantly smaller than that of the younger subjects. The mean angle of upward gaze was significantly smaller than that of downward gaze for both young and elderly subjects. Upward gaze impairment became apparent in subjects in their 70s, and downward gaze impairment in subjects in their 60s. Disturbance in convergence also increased with advancing age, and was found in 40.7% of the elderly subjects. These findings indicate that the mechanisms of age-related change are different for upward and downward vertical gaze. Digital video camera monitoring was useful for assessing and monitoring eye movements. Copyright 2004 S. Karger AG, Basel
SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems
NASA Astrophysics Data System (ADS)
Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.
2015-02-01
Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity than CCD/CMOS rangefinders, inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial-exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40° × 20° field-of-view. The whole system is very rugged and compact, a good fit for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in real-traffic scenarios. The achieved long range (up to 45 m), high dynamic range (118 dB), high-speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
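In iTOF ranging of this kind, distance follows from the phase shift of the modulated illumination; with the common four-tap demodulation, the phase comes from four correlation samples per pixel. A textbook sketch (the tap ordering and variable names are generic, not taken from this sensor):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(c0, c1, c2, c3, f_mod):
    """Four-tap indirect time-of-flight: correlation samples at 0/90/180/270
    degrees -> phase -> distance. Unambiguous range is c / (2 * f_mod)."""
    phase = np.arctan2(c3 - c1, c0 - c2) % (2.0 * np.pi)
    return C * phase / (4.0 * np.pi * f_mod)

# At 25 MHz modulation the unambiguous range is ~6 m; longer ranges such as
# the 45 m quoted above need lower modulation frequencies or phase unwrapping.
print(C / (2 * 25e6))  # -> ~5.996 m
```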
A single pixel camera video ophthalmoscope
NASA Astrophysics Data System (ADS)
Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.
2017-02-01
There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single-pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device. At the same time, the intensity corresponding to the inner product of each pattern with the retinal reflectance is measured with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px with a real-time video frame rate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel-array-based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, leading to lower illumination of the eye and hence an increase in patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
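To make the measurement model concrete: each photomultiplier reading is the inner product of one projected pattern with the retinal image, and with random binary patterns a simple correlation (ghost-imaging-style) estimate already recovers the scene. This is a generic sketch, not the authors' reconstruction algorithm, which the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                                                # small size for the demo
scene = np.zeros((n, n)); scene[10:22, 8:14] = 1.0    # stand-in "retina"

K = 4 * n * n                                         # number of projected patterns
patterns = rng.integers(0, 2, size=(K, n, n)).astype(float)

# photomultiplier readings: one inner product per pattern
y = patterns.reshape(K, -1) @ scene.ravel()

# correlation estimate: <(y - <y>) * (P - <P>)>, averaged over patterns
P = patterns.reshape(K, -1)
recon = ((y - y.mean()) @ (P - P.mean(axis=0))) / K
recon = recon.reshape(n, n)   # approximates the scene up to scale and offset
```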
Light emission from compound eye with conformal fluorescent coating
NASA Astrophysics Data System (ADS)
Martín-Palma, Raúl J.; Miller, Amy E.; Pulsifer, Drew P.; Lakhtakia, Akhlesh
2015-03-01
Compound eyes of insects are attractive biological systems for engineered biomimicry as artificial sources of light, given their characteristic wide angular field of view. A blowfly eye was coated with a thin conformal fluorescent film, with the aim of achieving wide field-of-view emission. Experimental results showed that the coated eye emitted visible light and that the intensity showed a weaker angular dependence than a fluorescent thin film deposited on a flat surface.
NASA Astrophysics Data System (ADS)
Daxecker, Franz
Some of Scheiner's discoveries and experiments are taken from the books «Oculus» (Innsbruck 1619) and «Rosa Ursina sive Sol» (Rome 1626-1630): determination of the radius of curvature of the cornea, discovery of the nasal exit of the optic nerve, increase in the curvature of the lens during accommodation, anatomy of the eye, light reaction of the pupil, contraction of the pupil during accommodation, Scheiner's test (double images caused by ametropia), the stenopeic effect, crossing of rays in the eye, the aperture, a description of cataract treatment, refractive indices of various parts of the eye, an eye model, the visual pivot angle of the eye, proof of crossing rays on the retina, and a comparison of the camera obscura with the optics of the eye.
Extended spectrum SWIR camera with user-accessible Dewar
NASA Astrophysics Data System (ADS)
Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva
2017-02-01
Episensors has developed a series of extended short-wavelength infrared (eSWIR) cameras based on high-Cd-concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to a 3-micron cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid-nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented, along with images captured with band-pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field-programmable gate array (FPGA) that also performs on-board non-uniformity corrections and bad-pixel replacement, and directly drives any standard HDMI display.
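The link between Cd fraction x, operating temperature and cutoff wavelength in Hg1-xCdxTe is usually estimated with the empirical Hansen bandgap relation; plugging in numbers shows that x near 0.40 at 200 K indeed lands close to the 3-micron cutoff quoted above. A quick check (the composition value is our illustrative inference, not a figure from the paper):

```python
def hansen_bandgap_eV(x, T):
    """Hansen (1982) empirical bandgap of Hg1-xCdxTe in eV;
    x = Cd mole fraction, T in kelvin."""
    return -0.302 + 1.93*x - 0.810*x**2 + 0.832*x**3 + 5.35e-4*(1 - 2*x)*T

def cutoff_wavelength_um(x, T):
    return 1.2398 / hansen_bandgap_eV(x, T)   # lambda(um) = 1.2398 / Eg(eV)

print(round(cutoff_wavelength_um(0.40, 200.0), 2))  # ~3.0 um at 200 K
```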
Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
NASA Technical Reports Server (NTRS)
Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)
2014-01-01
A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
Integrated flexible handheld probe for imaging and evaluation of iridocorneal angle
NASA Astrophysics Data System (ADS)
Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin
2015-01-01
An imaging probe is designed and developed by integrating a miniaturized charge-coupled device (CCD) camera and a light-emitting diode (LED) light source, which enables evaluation of the iridocorneal region inside the eye. The efficiency of the prototype probe instrument is illustrated using not only eye models but also samples such as pig eyes. The proposed methodology and developed scheme are expected to find potential application in iridocorneal angle documentation, glaucoma diagnosis, and follow-up management procedures.
2001-10-25
analyses of electroencephalogram at half-closed eye and fully closed eye. This study aimed at quantitatively estimating the rest rhythm of horses by the... analyses of eyeball movement. A mask fitted with a miniature CCD camera was newly developed. Continuous images of the horse eye for about 24... eyeball area were calculated. As for the results, the fluctuating status of the eyeball area was analyzed quantitatively, and the rest rhythm of horses was
A compact high-speed pnCCD camera for optical and x-ray applications
NASA Astrophysics Data System (ADS)
Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo
2012-07-01
We developed a camera with a 264 × 264 pixel pnCCD of 48 μm pixel size (thickness 450 μm) for X-ray and optical applications. It has a high quantum efficiency and can be operated at up to 400 / 1000 Hz (noise ≈ 2.5 e− ENC / ≈ 4.0 e− ENC, respectively). High-speed astronomical observations can be performed at low light levels. Results of test measurements will be presented. The camera is well suited to ground-based preparation measurements for future X-ray missions. For single X-ray photons, the spatial position can be determined with significant sub-pixel resolution.
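Sub-pixel localization of single X-ray photons works because the photo-generated charge cloud spreads over neighbouring pixels; a centre-of-gravity over the split event recovers the impact point to a fraction of the 48 μm pitch. A generic sketch of that centroiding step (window size and clipping are illustrative choices):

```python
import numpy as np

def event_centroid(frame, seed_yx, half=1):
    """Centre-of-gravity of a single-photon charge cloud around the
    brightest pixel (seed), over a (2*half+1)^2 window, in pixel units."""
    y0, x0 = seed_yx
    win = frame[y0-half:y0+half+1, x0-half:x0+half+1].astype(float)
    win = np.clip(win, 0.0, None)          # ignore negative noise
    ys, xs = np.mgrid[y0-half:y0+half+1, x0-half:x0+half+1]
    total = win.sum()
    return (ys * win).sum() / total, (xs * win).sum() / total

# frame: a dark-corrected exposure; seed_yx: argmax location of one event
```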
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
Gaze Estimation Method Using Analysis of Electrooculogram Signals and Kinect Sensor
Tanno, Koichi
2017-01-01
A gaze estimation system is one of the communication methods for severely disabled people who cannot perform gestures and speech. We previously developed an eye tracking method using compact, lightweight electrooculogram (EOG) measurement, but its accuracy is not very high. In the present study, we conducted experiments to investigate the EOG component most strongly correlated with changes in eye movement. The experiments in this study are of two types: experiments to view objects using eye movements only, and experiments to view objects using both face and eye movements. The experimental results show the feasibility of an eye tracking method using EOG signals and a Kinect sensor. PMID:28912800
Creation of nano eye-drops and effective drug delivery to the interior of the eye
NASA Astrophysics Data System (ADS)
Ikuta, Yoshikazu; Aoyagi, Shigenobu; Tanaka, Yuji; Sato, Kota; Inada, Satoshi; Koseki, Yoshitaka; Onodera, Tsunenobu; Oikawa, Hidetoshi; Kasai, Hitoshi
2017-03-01
Nano eye-drops are a new type of ophthalmic treatment with increased potency and reduced side effects. Compounds in conventional eye-drops barely penetrate into the eye because the cornea, located at the surface of eye, has a strong barrier function for preventing invasion of hydrophilic or large-sized materials from the outside. In this work, we describe the utility of nano eye-drops utilising brinzolamide, a commercially available glaucoma treatment drug, as a target compound. Fabrication of the nanoparticles of brinzolamide prodrug increases the eye penetration rate and results in high drug efficacy, compared with that of commercially available brinzolamide eye-drops formulated as micro-sized structures. In addition, the resulting nano eye-drops were not toxic to the corneal epithelium after repeated administration for 1 week. The nano eye-drops may have applications as a next-generation ophthalmic treatment.
Cooling, degassing and compaction of rhyolitic ash flow tuffs: a computational model
Riehle, J.R.; Miller, T.F.; Bailey, R.A.
1995-01-01
Previous models of degassing, cooling and compaction of rhyolitic ash flow deposits are combined in a single computational model that runs on a personal computer. The model applies to a broader range of initial and boundary conditions than Riehle's earlier model, which did not integrate heat and mass flux with compaction and which for compound units was limited to two deposits. Model temperatures and gas pressures compare well with simple measured examples. The results indicate that degassing of volatiles present at deposition occurs within days to a few weeks. Compaction occurs for weeks to two to three years unless halted by devitrification; near-emplacement temperatures can persist for tens of years in the interiors of thick deposits. Even modest rainfall significantly chills the upper parts of ash deposits, but compaction in simple cooling units ends before chilling by rainwater influences cooling of the interior of the sheet. Rainfall does, however, affect compaction at the boundaries of deposits in compound cooling units, because the influx of heat from the overlying unit is inadequate to overcome heat previously lost to vaporization of water. Three density profiles from the Matahina Ignimbrite, a compound cooling unit, are fairly well reproduced by the model despite complexities arising from numerous cooling breaks. Uncertainties in attempts to correlate in detail among the profiles may be the result of the non-uniform distribution of individual deposits. Regardless, it is inferred that model compaction is approximately valid. Thus the model should be of use in reconstructing the emplacement history of compound ash deposits, for inferring the depositional environments of ancient deposits and for assessing how long deposits of modern ash flows are capable of generating phreatic eruptions or secondary ash flows. © 1995 Springer-Verlag.
Femtosecond all-solid-state laser for refractive surgery
NASA Astrophysics Data System (ADS)
Zickler, Leander; Han, Meng; Giese, Günter; Loesel, Frieder H.; Bille, Josef F.
2003-06-01
Refractive surgery in the pursuit of perfect vision (e.g. 20/10) requires first an exact measurement of the aberrations of the eye and then a sophisticated surgical approach. A recent extension of wavefront measurement techniques and adaptive optics to ophthalmology has quantitatively characterized the quality of the human eye. The next milestone towards perfect vision is developing a more efficient and precise laser scalpel and evaluating minimally invasive laser surgery strategies. Femtosecond all-solid-state MOPA lasers based on passive mode locking and chirped pulse amplification are excellent candidates for eye surgery due to their stability, ultra-high intensity and compact tabletop size. Furthermore, taking into account their peak emission in the near IR and diffraction-limited focusing abilities, surgical laser systems performing precise intrastromal incisions for corneal flap resection and intrastromal corneal reshaping promise significant improvement over today's photorefractive keratectomy (PRK) and laser-assisted in situ keratomileusis (LASIK) techniques, which utilize UV excimer lasers. Through dispersion control and optimized regenerative amplification, a compact femtosecond all-solid-state laser with pulse energy well above the LIOB threshold and kHz repetition rate was constructed. After applying a pulse sequence to the eye, the modified corneal morphology is investigated by high-resolution microscopy (multiphoton/SHG confocal microscope).
An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera
NASA Astrophysics Data System (ADS)
Kumar, K. S. Chidanand; Bhowmick, Brojeshwar
A driver drowsiness identification system has been proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is determined first, using the Euler number and template matching. The pupils are then located in the face region. In subsequent frames of video, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
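The closed-eye decision reduces to counting consecutive frames in which the pupils are not visible; a minimal sketch of that final step (the frame threshold is an illustrative value, since the paper does not state one):

```python
def drowsiness_alarm(eye_closed_flags, threshold=15):
    """Return True once the eyes have stayed closed for `threshold`
    consecutive frames (e.g. ~0.5 s at 30 fps)."""
    run = 0
    for closed in eye_closed_flags:
        run = run + 1 if closed else 0
        if run >= threshold:
            return True
    return False

print(drowsiness_alarm([False] + [True] * 20))  # -> True
```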
High-definition television evaluation for remote handling task performance
NASA Astrophysics Data System (ADS)
Fujita, Y.; Omori, E.; Hayashi, S.; Draper, J. V.; Herndon, J. N.
Described are experiments designed to evaluate the impact of HDTV (High-Definition Television) on the performance of typical remote tasks. The experiments described in this paper compared the performance of four operators using HDTV with their performance while using other television systems. The experiments included four television systems: (1) high-definition color television, (2) high-definition monochromatic television, (3) standard-resolution monochromatic television, and (4) standard-resolution stereoscopic monochromatic television. The stereo system accomplished stereoscopy by displaying two cross-polarized images, one reflected by a half-silvered mirror and one seen through the mirror. Observers wore spectacles with cross-polarized lenses so that the left eye received only the view from the left camera and the right eye received only the view from the right camera.
Crustacean Larvae-Vision in the Plankton.
Cronin, Thomas W; Bok, Michael J; Lin, Chan
2017-11-01
We review the visual systems of crustacean larvae, concentrating on the compound eyes of decapod and stomatopod larvae as well as the functional and behavioral aspects of their vision. Larval compound eyes of these macrurans are all built on fundamentally the same optical plan, the transparent apposition eye, which is eminently suitable for modification into the abundantly diverse optical systems of the adults. Many of these eyes contain a layer of reflective structures overlying the retina that produces a counterilluminating eyeshine, so they are unique in being camouflaged both by their transparency and by their reflection of light spectrally similar to background light to conceal the opaque retina. Besides the pair of compound eyes, at least some crustacean larvae have a non-imaging photoreceptor system based on a naupliar eye and possibly other frontal eyes. Larval compound-eye photoreceptors send axons to a large and well-developed optic lobe consisting of a series of neuropils that are similar to those of adult crustaceans and insects, implying sophisticated analysis of visual stimuli. The visual system fosters a number of advanced and flexible behaviors that permit crustacean larvae to survive extended periods in the plankton and allows them to reach acceptable adult habitats, within which to metamorphose. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Photorefraction Screens Millions for Vision Disorders
NASA Technical Reports Server (NTRS)
2008-01-01
Who would have thought that stargazing in the 1980s would lead to hundreds of thousands of schoolchildren seeing more clearly today? Collaborating with research ophthalmologists and optometrists, Marshall Space Flight Center scientists Joe Kerr and the late John Richardson adapted optics technology for eye screening methods using a process called photorefraction. Photorefraction consists of delivering a light beam into the eyes where it bends in the ocular media, hits the retina, and then reflects as an image back to a camera. A series of refinements and formal clinical studies followed their highly successful initial tests in the 1980s. Evaluating over 5,000 subjects in field tests, Kerr and Richardson used a camera system prototype with a specifically angled telephoto lens and flash to photograph a subject's eye. They then analyzed the image, the cornea and pupil in particular, for irregular reflective patterns. Early tests of the system with 1,657 Alabama children revealed that, while only 111 failed the traditional chart test, Kerr and Richardson's screening system found 507 abnormalities.
Adding polarimetric imaging to depth map using improved light field camera 2.0 structure
NASA Astrophysics Data System (ADS)
Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu
2017-06-01
Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, whose imaging systems are required to offer high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on a light field 2.0 camera structure, which can calculate the polarization state and the depth from a reference plane for every object point within a single shot. This structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like microlens array, and a high-resolution CCD, is equivalent to an "eye array" with 3 or more polarization-imaging "glasses" in front of each "eye". Depth can therefore be calculated by matching the relative offset of corresponding patches on neighboring "eyes", while the polarization state follows from their relative intensity differences, and the two resolutions are approximately equal to each other. An application to navigation under clear sky shows that this method has high accuracy and strong robustness.
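With at least three analyser orientations per point, the linear Stokes parameters follow directly from the measured intensities. A standard four-orientation (0°/45°/90°/135°) version is sketched below; the quadrant angles are an assumption, since the abstract only says "3 or more":

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear polarization
    from intensities behind 0/45/90/135-degree analysers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / s0   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization (rad)
    return s0, dolp, aolp
```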
NASA Astrophysics Data System (ADS)
Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul
2017-09-01
In this paper, we focus on pupil center detection in video sequences that include varying head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye using cascaded local regression based on a regression forest. Starting from the rough pupil location, a fast radial symmetry transform is applied to refine the pupil center. As the final step, the pupil displacement between the previous frame and the current frame is estimated, to maintain accuracy against false localization results in particular frames. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods, with a shorter processing time.
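The refinement step exploits the pupil's circular symmetry: in a radial-symmetry transform, strong gradient pixels vote for a centre a fixed radius away (against the gradient direction for a dark pupil on a brighter iris), and the vote map peaks at the pupil centre. A simplified single-scale sketch of that idea, not the paper's exact transform:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def radial_symmetry_center(gray, radii=(8, 10, 12)):
    """Vote-map peak as a pupil-centre estimate (simplified radial symmetry).
    Dark-pupil assumption: gradients point outward, so votes go inward."""
    img = gray.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > mag.mean() + mag.std())   # strong edges only
    h, w = img.shape
    acc = np.zeros_like(img)
    for n in radii:
        vx = xs - np.round(n * gx[ys, xs] / mag[ys, xs]).astype(int)
        vy = ys - np.round(n * gy[ys, xs] / mag[ys, xs]).astype(int)
        ok = (vx >= 0) & (vx < w) & (vy >= 0) & (vy < h)
        np.add.at(acc, (vy[ok], vx[ok]), mag[ys, xs][ok])
    cy, cx = np.unravel_index(np.argmax(gaussian_filter(acc, 2)), acc.shape)
    return cx, cy
```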
ERIC Educational Resources Information Center
Staub, Adrian; Rayner, Keith; Pollatsek, Alexander; Hyona, Jukka; Majewski, Helen
2007-01-01
Readers' eye movements were monitored as they read sentences containing noun-noun compounds that varied in frequency (e.g., elevator mechanic, mountain lion). The left constituent of the compound was either plausible or implausible as a head noun at the point at which it appeared, whereas the compound as a whole was always plausible. When the head…
Laser beam alignment and profilometry using diagnostic fluorescent safety mirrors
NASA Astrophysics Data System (ADS)
Lizotte, Todd E.
2011-03-01
There is a wide range of laser beam delivery systems in use for various purposes, including industrial and medical applications. Virtually all such beam delivery systems employ optical assemblies of mirrors and lenses to shape, focus and guide the laser beam down to the material being processed. The goal of the beam delivery design is to set the optimum parameters and to "fold" the beam path to reduce the mechanical length of the optical system, thereby allowing a physically compact system. In many cases, even a compact system can incorporate upwards of six mirrors and a comparable number of lenses, all needing alignment so that they are collinear. One of the major requirements for the use of such systems in industry is a method of safe alignment. The alignment process requires that the aligner determine where the beam strikes each element; the aligner should preferably also be able to determine the shape or pattern of the laser beam at that point and its relative power. These alignments are further complicated in that the laser beams generated are not visible to the unaided human eye. Such beams are also often of relatively high power, and are thereby a significant hazard to the eyes of the aligner. Obviously, an invisible beam makes it nearly impossible to align a laser system without some form of optical assistance. The predominant methods of visually aligning the beam delivery are thermal paper, paper cards and fluorescing card material. Paper products have limited power-handling capability, and coated plastics can produce significant debris and contaminants within the beam line that ultimately damage the optics. The use of cards can also create significant laser light scatter, jeopardizing the safety of the person aligning the system. This paper covers a new safety mirror design for use at various UV and near-IR wavelengths (193 nm to 1064 nm) within laser beam delivery systems, and how its use can provide benefits covering eye safety, precise alignment and beam diagnostics.
Characterization of Impact Initiation of Aluminum-Based Powder Compacts
NASA Astrophysics Data System (ADS)
Tucker, Michael; Dixon, Sean; Thadhani, Naresh
2011-06-01
Impact initiation of reactions in quasi-statically pressed Al-Ni, Al-Ta, and Al-W powder compacts is investigated in an effort to characterize differences in the energy threshold as a function of materials system, volumetric distribution, and environment. The powder compacts were mounted in front of a copper projectile and impacted onto a steel anvil using a 7.62 mm gas gun at velocities up to 500 m/s. The experiments were conducted in an ambient environment, as well as under a 50-millitorr vacuum. An IMACON 200 framing camera was used to observe the transient powder-compact densification and deformation states, as well as a signature of reaction based on light emission. Evidence of reaction was also confirmed by post-mortem XRD analysis of the recovered residue. The effective kinetic energy dissipated in processes leading to reaction initiation was estimated and correlated with the reactivity of the various compacts as a function of composition and environment.
A compact electron spectrometer for an LWFA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumpkin, A.; Crowell, R.; Li, Y.
2007-01-01
The use of a laser wakefield accelerator (LWFA) beam as a driver for a compact free-electron laser (FEL) has been proposed recently. A project is underway at Argonne National Laboratory (ANL) to operate an LWFA in the bubble regime and to use the quasi-monoenergetic electron beam as a driver for a 3-m-long undulator for generation of sub-ps UV radiation. The Terawatt Ultrafast High Field Facility (TUHFF) in the Chemistry Division provides the 20-TW peak power laser. A compact electron spectrometer whose initial field of 0.45 T provides energy coverage of 30-200 MeV has been selected to characterize the electron beams. The system is based on the Ecole Polytechnique design used for their LWFA and incorporates the 5-cm-long permanent-magnet dipole, a LANEX scintillator screen located at the dispersive plane, a Roper Scientific 16-bit MCP-intensified CCD camera, and a Bergoz ICT for complementary charge measurements. Test results on the magnets, the 16-bit camera, and the ICT will be described, and initial electron beam data will be presented as available. Other challenges will also be addressed.
PERCEPTION AND TELEVISION--PHYSIOLOGICAL FACTORS OF TELEVISION VIEWING.
ERIC Educational Resources Information Center
GUBA, EGON; AND OTHERS
An experimental system was developed for recording eye-movement data. Raw data were in the form of motion pictures taken of the monitor of a closed-loop television system. A television camera was mounted on the subjects' field of view. The eye marker appeared as a small spot of light and indicated the point in the visual field at which the subject…
A Simple Model of the Accommodating Lens of the Human Eye
ERIC Educational Resources Information Center
Oommen, Vinay; Kanthakumar, Praghalathan
2014-01-01
The human eye is often discussed as optically equivalent to a photographic camera. The iris is compared with the shutter, the pupil to the aperture, and the retina to the film, and both have lens systems to focus rays of light. Although many similarities exist, a major difference between the two systems is the mechanism involved in focusing an…
Design of optical system for binocular fundus camera.
Wu, Jun; Lou, Shiliang; Xiao, Zhitao; Geng, Lei; Zhang, Fang; Wang, Wen; Liu, Mengjia
2017-12-01
A non-mydriatic optical system for a binocular fundus camera is designed in this paper. It can capture two images of the same fundus retinal region from different angles at the same time, and can be used to achieve three-dimensional reconstruction of the fundus. It is composed of an imaging system and an illumination system. In the imaging system, the Gullstrand-Le Grand eye model is used to simulate the normal human eye, and a schematic eye model is used to test the influence of ametropia on imaging quality. An annular aperture and a black-dot board are added to the illumination system so that it can eliminate stray light produced by corneal reflections and the ophthalmic lens. Simulation results show that the MTF of each field at the cut-off frequency of 90 lp/mm is greater than 0.2, the system distortion is -2.7%, the field curvature is less than 0.1 mm, and the radius of the Airy disc is 3.25 μm. This system has a strong ability for chromatic aberration correction and focusing, and can image the human fundus clearly over a diopter range of -10 D to +6 D (1 D = 1 m⁻¹).
Opportunity's Surroundings After Sol 1820 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
Left-eye and right-eye views of a color stereo pair for PIA11841. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
Left-eye and right-eye views of a color stereo pair for PIA11960. NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.
Cinar, Yasin; Cingu, Abdullah Kursat; Turkcu, Fatih Mehmet; Cinar, Tuba; Sahin, Alparslan; Yuksel, Harun; Ari, Seyhmus
2015-03-01
To compare central corneal thickness (CCT) measurements obtained with a rotating Scheimpflug camera (RSC), noncontact specular microscopy (SM), optical low-coherence reflectometry (OLCR), and ultrasonic pachymetry (UP) in keratoconus (KC) patients. In this prospective study, CCT measurements taken with the RSC, SM, OLCR, and UP were compared in 81 eyes of 44 consecutive KC patients. The KC patients were divided into four subgroups according to the Amsler-Krumeich KC classification. Differences between the RSC and UP measurements of CCT were not statistically significant in any of the groups. Comparison of the SM and OLCR measurements yielded statistically significant differences in all KC patients and in all KC stages. Across all KC patients, RSC and OLCR showed a high correlation coefficient (r = 0.87, p = 0.000). CCT measurements with RSC are comparable to those achieved with UP. Compared with the other devices, the central cornea is thicker according to SM measurements and thinner according to OLCR, in all keratoconic eyes and all KC grades. RSC, UP, SM, and OLCR should not be used interchangeably in keratoconic eyes.
Remote gaze tracking system for 3D environments.
Congcong Liu; Herrup, Karl; Shi, Bertram E
2017-07-01
Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
Choi, Young; Eom, Youngsub; Song, Jong Suk; Kim, Hyo Myung
2018-05-15
To compare the effect of posterior corneal astigmatism on the estimation of total corneal astigmatism using anterior corneal measurements (simulated keratometry [K]) between eyes with keratoconus and healthy eyes. Thirty-three eyes of 33 patients with keratoconus of grade I or II and 33 eyes of 33 age- and sex-matched healthy control subjects were enrolled. Anterior, posterior, and total corneal cylinder powers and flat meridians measured by a single Scheimpflug camera were analyzed. The difference in corneal astigmatism between the simulated K and the total cornea was evaluated. The mean anterior, posterior, and total corneal cylinder powers of the keratoconus group (4.37 ± 1.73, 0.95 ± 0.39, and 4.36 ± 1.74 cylinder diopters [CD], respectively) were significantly greater than those of the control group (1.10 ± 0.68, 0.39 ± 0.18, and 0.97 ± 0.63 CD, respectively). The cylinder power difference between the simulated K and the total cornea was positively correlated with the posterior corneal cylinder power and negatively correlated with the absolute flat-meridian difference between the simulated K and the total cornea in both groups. The mean magnitude of the vector difference between the astigmatism of the simulated K and the total cornea in the keratoconus group (0.67 ± 0.67 CD) was significantly larger than that in the control group (0.28 ± 0.12 CD). Eyes with keratoconus had greater estimation errors of total corneal astigmatism based on anterior corneal measurement than did healthy eyes. Posterior corneal surface measurement should be emphasized more when determining total corneal astigmatism in eyes with keratoconus. © 2018 The Korean Ophthalmological Society.
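The "magnitude of the vector difference" between two astigmatisms is conventionally computed in the double-angle plane, where a cylinder C at axis α maps to the point (C cos 2α, C sin 2α). A generic sketch of that calculation (standard vector analysis, not necessarily the authors' exact software):

```python
import numpy as np

def double_angle(cyl, axis_deg):
    """Map a cylinder (D) at a given axis to the double-angle plane."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.array([cyl * np.cos(a), cyl * np.sin(a)])

def astig_vector_difference(cyl1, axis1, cyl2, axis2):
    """Magnitude (D) of the vector difference between two astigmatisms."""
    return float(np.linalg.norm(double_angle(cyl1, axis1) -
                                double_angle(cyl2, axis2)))

# e.g. simulated-K astigmatism 4.4 D @ 95 deg vs total 4.4 D @ 90 deg
print(round(astig_vector_difference(4.4, 95, 4.4, 90), 2))  # -> ~0.77 D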
NASA Astrophysics Data System (ADS)
Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.
2016-03-01
An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype was assembled using a mechanical eye device, a beam splitter, an illumination system, lenses, mirrors, a mirrored prism, a movable mirror, a wavefront sensor and a CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel via the beam splitter into the optical system; some fall on the CCD camera and others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo aberrations were constructed using the optical design software Zemax. The simulated HS images for each case were acquired and processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to ensure that the optical bench matches the simulated system. Afterwards, a simulated version of the retinal image is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Eye doctors may apply personalized corrections based on different Zernike polynomial values, and the optical images are re-rendered with the new parameters. Optical images of how the eye would see with or without correction of certain aberrations are generated, in order to assess which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.
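For context on the processing chain: each HS lenslet converts the local wavefront slope into a spot displacement on the sensor (slope = displacement / lenslet focal length), and the slopes are then fitted to Zernike derivatives by least squares. A generic modal-reconstruction sketch under those standard assumptions, not the authors' customized code:

```python
import numpy as np

def hs_slopes(ref_centroids, meas_centroids, f_lenslet):
    """Local wavefront slopes from Hartmann-Shack centroid shifts.
    Arrays are (n_lenslets, 2); slope = spot displacement / focal length."""
    return (np.asarray(meas_centroids) - np.asarray(ref_centroids)) / f_lenslet

def fit_zernike(slopes, deriv_matrix):
    """Least-squares Zernike coefficients. deriv_matrix is (2*n_lenslets,
    n_modes): each mode's analytic x- then y-derivatives at the lenslet
    centres, matching the stacked slope vector."""
    s = np.asarray(slopes).T.ravel()       # all x-slopes, then all y-slopes
    coeffs, *_ = np.linalg.lstsq(deriv_matrix, s, rcond=None)
    return coeffs
```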
Reading Polymorphemic Dutch Compounds: Toward a Multiple Route Model of Lexical Processing
ERIC Educational Resources Information Center
Kuperman, Victor; Schreuder, Robert; Bertram, Raymond; Baayen, R. Harald
2009-01-01
This article reports an eye-tracking experiment with 2,500 polymorphemic Dutch compounds presented in isolation for visual lexical decision while readers' eye movements were registered. The authors found evidence that both full forms of compounds ("dishwasher") and their constituent morphemes (e.g., "dish," "washer," "er") and morphological…
NASA Astrophysics Data System (ADS)
Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.
2010-11-01
We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype Frequency-Domain Streak Camera, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes Frequency-Domain Streak Camera by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact Frequency Domain Tomography system with a single spectrometer.
NASA Astrophysics Data System (ADS)
Druart, Guillaume; Matallah, Noura; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Jenouvrier, Pierre; Mallet, Eric; Reibel, Yann
2014-06-01
Today, both military and civilian applications require miniaturized optical systems in order to give an imagery function to vehicles with small payload capacity. After the development of megapixel focal plane arrays (FPA) with micro-sized pixels, this miniaturization will become feasible with the integration of optical functions in the detector area. In the field of cooled infrared imaging systems, the detector area is the Detector-Dewar-Cooler Assembly (DDCA). SOFRADIR and ONERA have launched a new research and innovation partnership, called OSMOSIS, to develop disruptive technologies for DDCA to improve the performance and compactness of optronic systems. With this collaboration, we will break down the technological barriers of DDCA, a sealed and cooled environment dedicated to the infrared detectors, to explore Dewar-level integration of optics. This technological breakthrough will bring more compact multipurpose thermal imaging products, as well as new thermal capabilities such as 3D imagery or multispectral imagery. Previous developments will be recalled (SOIE and FISBI cameras) and new developments will be presented. In particular, we will focus on a dual-band MWIR-LWIR camera and a multichannel camera.
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
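In the spirit of the Raspberry Pi + Python setup described above, a minimal time-lapse loop might look like the following; this is a hedged sketch, not the USGS scripts themselves (the paths, interval, and resolution are illustrative, and the system clock is assumed to be disciplined by the GPS module):

```python
import time
from datetime import datetime, timezone
from picamera import PiCamera   # Raspberry Pi camera library

INTERVAL_S = 300                # one frame every 5 minutes (example value)
camera = PiCamera(resolution=(2592, 1944))   # 5-megapixel sensor

while True:
    # UTC timestamp in the filename; clock assumed GPS-disciplined
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%SZ")
    camera.capture(f"/data/images/{stamp}.jpg")
    time.sleep(INTERVAL_S)
```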
2016-11-18
Space Shuttle mission STS-61 onboard view taken with a fish-eye camera lens, showing astronauts Story Musgrave and Jeffrey Hoffman during an Extra Vehicular Activity (EVA) to repair the Hubble Space Telescope (HST).
System Synchronizes Recordings from Separated Video Cameras
NASA Technical Reports Server (NTRS)
Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.
2009-01-01
A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
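"Slightly more than 136 years" is exactly the rollover period of an unsigned 32-bit counter of seconds, so the time code plausibly encodes seconds in 32 bits (our inference, not a statement from the source):

```python
# Rollover period of a 32-bit seconds counter
seconds = 2 ** 32
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))   # -> 136.1 years
```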
United States Homeland Security and National Biometric Identification
2002-04-09
security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to... technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL: Facial recognition systems... installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are
Field Portable Digital Ophthalmoscope/Fundus Camera. Phase I
1997-05-01
robbing injuries and pathologies. Included are retinal detachments, laser damage, CMV retinitis, retinitis pigmentosa, glaucoma, tumors, and the like... Retinal imaging is key for diagnoses and treatment of various eye-sight... personnel, and generally only used by ophthalmologists or in hospital settings. The retinal camera of this project will revolutionize retinal imaging
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First of all, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system; and according to the principles of binocular vision, we deduce the relationship between two human eyes and a dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision, and obtain the positional relations of prism, camera, and object that give the best stereo display. Finally, using NVIDIA active-shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various viewing behaviors of the eyes. A stereo imaging system designed by the method proposed in this paper can faithfully recover the 3-D shape of the photographed object.
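The equivalence to a dual-camera system matters because it lets standard stereo triangulation be applied: the prism splits the view into two virtual cameras with some baseline, and depth follows from disparity. A textbook sketch under rectified-geometry assumptions (the prism system's actual baseline and focal length are not given in the abstract):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: Z = f * B / d.
    focal_px     : focal length in pixels
    baseline_m   : separation of the two (virtual) camera centres, metres
    disparity_px : horizontal shift of a matched point between views"""
    return focal_px * baseline_m / disparity_px

# e.g. f = 1200 px, virtual baseline 6 cm, disparity 36 px -> 2 m
print(depth_from_disparity(1200.0, 0.06, 36.0))  # -> 2.0
```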
Supertyphoon Nepartak Barreling Toward Taiwan Viewed by NASA MISR
2016-07-08
Typhoon Nepartak, the first large typhoon in the northwest Pacific this season, is currently taking aim at the east coast of Taiwan. Over the past few days, Nepartak has rapidly gained strength, growing from a tropical storm to the equivalent of a Category 5 hurricane with sustained wind speeds of more than 160 miles (258 kilometers) per hour. Taiwan's Central Weather Bureau has issued a torrential rain warning, bracing for likely flooding as 5 to 15 inches (13 to 38 centimeters) of rain are expected to fall over Taiwan during the storm's passage. Waves of up to 40 feet (12 meters) are predicted on the coast as the typhoon approaches, and air and train travel have been severely impacted. The typhoon is currently moving at about 10 miles per hour (16 kilometers per hour) to the west-northwest, and is predicted to pass over Taiwan within the next day and then hit the coast of mainland China. Central and eastern China are poorly situated to absorb the rainfall from Nepartak after suffering the effects of severe monsoon flooding, which has killed at least 140 people in the past week. The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite captured this view of Typhoon Nepartak on July 7, 2016, at 10:30 a.m. local time (2:30 a.m. UTC). On the left is an image from the nadir (vertical pointing) camera, which shows the central portion of Nepartak and the storm's eye. The image is about 235 miles (378 kilometers) across. The island of Luzon in the Philippines, about 250 miles (400 kilometers) south of Taiwan, is visible to the southwest of the eye. The image shows that Nepartak's center is extremely compact, rather than broken up into spiral bands as is more typical of typhoons. This means that the storm may retain more of its strength as it passes over land. MISR uses nine cameras to capture images of the typhoon from different angles. This provides a stereographic view, which can be used to determine the height of the storm's cloud tops. These heights are plotted in the middle panel, superimposed on the image. This shows that the cloud tops are relatively low, about 2.5 miles (4 kilometers), in the eye, but much higher, up to 12.5 miles (20 kilometers), just outside it. By tracking the motion of clouds as they are viewed by each of the nine cameras over about seven minutes, it is possible to also derive how fast the clouds are moving due to wind. These wind vectors are superimposed on the image in the right panel. The length of each arrow shows the wind speed at that location (compare to the 45 miles per hour or 20 meters per second arrow in the legend), and the color shows the height at which the wind is being computed. The motion of the low-level winds (red and yellow arrows) is counterclockwise, while the motion of the high winds (blue and purple arrows) is mostly clockwise. This is because hurricanes draw in warm, moist air at low altitudes, which then flows upward around the eye, releases its moisture as rain, and moves outward at high altitudes. As is typical of these types of storm systems, the inflowing low winds and the outflowing high winds spin in different directions. http://photojournal.jpl.nasa.gov/catalog/PIA20719
Clinical Validation of a Smartphone-Based Adapter for Optic Disc Imaging in Kenya.
Bastawrous, Andrew; Giardini, Mario Ettore; Bolster, Nigel M; Peto, Tunde; Shah, Nisha; Livingstone, Iain A T; Weiss, Helen A; Hu, Sen; Rono, Hillary; Kuper, Hannah; Burton, Matthew
2016-02-01
Visualization and interpretation of the optic nerve and retina are essential parts of most physical examinations. To design and validate a smartphone-based retinal adapter enabling image capture and remote grading of the retina. This validation study compared the grading of optic nerves from smartphone images with those of a digital retinal camera. Both image sets were independently graded at Moorfields Eye Hospital Reading Centre. Nested within the 6-year follow-up (January 7, 2013, to March 12, 2014) of the Nakuru Eye Disease Cohort in Kenya, 1460 adults (2920 eyes) 55 years and older were recruited consecutively from the study. A subset of 100 optic disc images from both methods was further used to validate a grading app for the optic nerves. Data analysis was performed April 7 to April 12, 2015. The vertical cup-disc ratio for each test was compared in terms of agreement (Bland-Altman and weighted κ) and test-retest variability. A total of 2152 optic nerve images were available from both methods (371 were available from the reference camera but not the smartphone, 170 from the smartphone but not the reference camera, and 227 from neither). Bland-Altman analysis revealed a mean difference of 0.02 (95% CI, -0.21 to 0.17) and a weighted κ coefficient of 0.69 (excellent agreement). The grades of an experienced retinal photographer were compared with those of a lay photographer (no health care experience before the study), and no observable difference in image acquisition quality was found. Nonclinical photographers using the low-cost smartphone adapter were able to acquire optic nerve images at a standard that enabled independent remote grading comparable to that of images acquired using a desktop retinal camera operated by an ophthalmic assistant. The potential for task shifting and for detecting avoidable causes of blindness in the most at-risk communities makes this an attractive public health intervention.
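A brief sketch of the two agreement statistics reported above, run on simulated grades; the quadratic kappa weighting and the 0.1-wide grading bins are illustrative assumptions, not the study's exact protocol:

# Bland-Altman mean difference with 95% limits of agreement, plus a weighted
# kappa on binned cup-disc grades (simulated data for illustration only).
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bland_altman(a: np.ndarray, b: np.ndarray):
    diff = a - b
    mean_diff = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - loa, mean_diff + loa)

rng = np.random.default_rng(0)
ref = rng.uniform(0.2, 0.9, 200)              # simulated reference-camera grades
phone = ref + rng.normal(0.0, 0.05, 200)      # simulated smartphone grades
print(bland_altman(phone, ref))

bins = np.linspace(0.0, 1.0, 11)              # assumed 0.1-wide grading bins
kappa = cohen_kappa_score(np.digitize(phone, bins),
                          np.digitize(ref, bins), weights="quadratic")
print(round(kappa, 2))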
Simultaneous Spectral Temporal Adaptive Raman Spectrometer - SSTARS
NASA Technical Reports Server (NTRS)
Blacksberg, Jordana
2010-01-01
Raman spectroscopy is a prime candidate for the next generation of planetary instruments, as it addresses the primary goal of mineralogical analysis, which is structure and composition. However, the large fluorescence return from many mineral samples under visible-light excitation can render Raman spectra unattainable. Using the described approach, Raman and fluorescence signals, which occur on different time scales, can be obtained simultaneously from mineral samples using a compact instrument in a planetary environment. This new approach builds on the use of time-resolved spectroscopy for removing the fluorescence background from Raman spectra in the laboratory. In the SSTARS instrument, a visible excitation source (a green, pulsed laser) is used to generate Raman and fluorescence signals in a mineral sample. A spectral notch filter eliminates the directly reflected beam. A grating then disperses the signal spectrally, and a streak camera provides temporal resolution. The output of the streak camera is imaged on a CCD (charge-coupled device), and the data are read out electronically. By adjusting the sweep speed of the streak camera anywhere from picoseconds to milliseconds, it is possible to resolve Raman spectra from numerous fluorescence spectra in the same sample. The key features of SSTARS include a compact streak tube capable of picosecond time resolution for collecting simultaneous spectral and temporal information; adaptive streak tube electronics that can rapidly change from one sweep rate to another over ranges of picoseconds to milliseconds, enabling collection of both Raman and fluorescence signatures versus time and wavelength; and Synchroscan integration that allows for a compact, low-power laser without compromising ultimate sensitivity.
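A toy numerical illustration of why the streak camera's time axis separates the two signals: prompt Raman scattering follows the picosecond laser pulse, while fluorescence decays over nanoseconds, so an early time gate on the sweep strongly enriches the Raman component. The pulse width, decay constant, amplitudes, and gate are illustrative assumptions:

import numpy as np

t = np.linspace(0.0, 5e-9, 5001)                     # 5 ns streak sweep, 1 ps steps
pulse = np.exp(-0.5 * ((t - 50e-12) / 20e-12) ** 2)  # ~20 ps excitation pulse
raman = pulse                                        # prompt Raman follows the pulse
fluor = 5.0 * np.exp(-t / 1e-9)                      # ~1 ns fluorescence decay
trace = raman + fluor

gate = t < 150e-12                                   # early gate on the sweep
print(f"Raman fraction, whole sweep: {raman.sum() / trace.sum():.3f}")
print(f"Raman fraction, early gate:  {raman[gate].sum() / trace[gate].sum():.3f}")
# The slowly varying fluorescence remaining inside the gate can be estimated
# from later times in the sweep and subtracted.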
An artificial elementary eye with optic flow detection and compositional properties.
Pericet-Camara, Ramon; Dobrzynski, Michal K; Juston, Raphaël; Viollet, Stéphane; Leitel, Robert; Mallot, Hanspeter A; Floreano, Dario
2015-08-06
We describe a 2 mg artificial elementary eye whose structure and functionality is inspired by compound eye ommatidia. Its optical sensitivity and electronic architecture are sufficient to generate the required signals for the measurement of local optic flow vectors in multiple directions. Multiple elementary eyes can be assembled to create a compound vision system of desired shape and curvature spanning large fields of view. The system configurability is validated with the fabrication of a flexible linear array of artificial elementary eyes capable of extracting optic flow over multiple visual directions. © 2015 The Author(s).
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited to earth scientists working in the field, where a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of an earth scientist's field equipment, the main focus of the field work is often not surveying, and a lack of photogrammetric training requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges: A) definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor; B) optimization of image acquisition and the geometric stability of the image block; C) identification of a small camera suitable for precise measurements in the field; D) optimization of the workflow from image acquisition to preparation of images for stereo measurements; and E) introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and were available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e., better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. The wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test, a Ricoh, required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was lower than that of the Ricoh, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement has a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultra-light unmanned aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 × 18 × 20 m³ volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students in this technique. The technique does not sacrifice precision for ease of use; successful users of the presented method therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated camera calibration package, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
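A minimal sketch of how the sphere baseline fixes scale and verticality, assuming a measured slope distance and vertical angle between two target spheres; this illustrates the geometry only, not the authors' adjustment software:

# The laser distance meter gives the slope distance d between two spheres and
# the vertical angle theta; these fix the scale and the "up" direction of an
# otherwise arbitrary photogrammetric model.
import math

def sphere_baseline(d_m: float, theta_deg: float):
    """Return (horizontal, vertical) components of the sphere baseline."""
    theta = math.radians(theta_deg)
    return d_m * math.cos(theta), d_m * math.sin(theta)

dx, dz = sphere_baseline(2.500, 12.0)   # e.g. 2.5 m baseline, 12 deg inclination
print(f"horizontal {dx:.3f} m, vertical {dz:.3f} m")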
Novel, ultra-compact, high-performance, eye-safe laser rangefinder for demanding applications
NASA Astrophysics Data System (ADS)
Silver, M.; Lee, S. T.; Borthwick, A.; Morton, G.; McNeill, C.; McSporran, D.; McRae, I.; McKinlay, G.; Jackson, D.; Alexander, W.
2016-05-01
Compact eye-safe laser rangefinders (LRFs) are a key technology for future sensors. In addition to reduced size, weight and power (SWaP), compact LRFs are increasingly required to deliver a higher repetition rate, burst-mode capability. Burst mode allows acquisition of telemetry data from fast-moving targets or while sensing on the move. We describe a new, ultra-compact, long-range, eye-safe laser rangefinder that incorporates a novel transmitter capable of delivering a burst capability. The transmitter is a diode-pumped, erbium:glass, passively Q-switched, solid-state laser which uses design and packaging techniques adopted from the telecom components sector. The key advantage of this approach is that the transmitter can be engineered to match the physical dimensions of the active laser components and the submillimetre-sized laser spot. This makes the transmitter significantly smaller than existing designs, leading to significant improvements in thermal management and allowing higher repetition rates. In addition, the design approach leads to devices that have higher reliability, lower cost, and a smaller form factor than previously possible. We present results from the laser rangefinder that incorporates the new transmitter. The LRF has dimensions (L x W x H) of 100 x 55 x 34 mm and achieves ranges of up to 15 km from a single shot over a temperature range of -32°C to +60°C. Due to the transmitter's superior thermal performance, the unit is capable of repetition rates of 1 Hz in continuous operation and short bursts of up to 4 Hz. Short bursts of 10 Hz have also been demonstrated from the transmitter in the laboratory.
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
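A minimal frame-differencing sketch in the spirit of the pipeline described above (capture, pre-process, detect line-like traces); OpenCV is assumed, and the thresholds and file names are illustrative, not the authors' values:

# Consecutive all-sky exposures are differenced to suppress static stars,
# then meteor-like line segments are found with a probabilistic Hough transform.
import cv2
import numpy as np

def detect_trails(prev_frame: np.ndarray, frame: np.ndarray):
    diff = cv2.absdiff(frame, prev_frame)              # remove static background
    blur = cv2.GaussianBlur(diff, (5, 5), 0)
    _, binary = cv2.threshold(blur, 25, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180.0,
                            threshold=50, minLineLength=40, maxLineGap=5)
    return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)

# Usage on two consecutive grayscale frames (hypothetical file names):
# trails = detect_trails(cv2.imread("t0.png", 0), cv2.imread("t1.png", 0))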
Wen, Feng; Yu, Minzhong; Wu, Dezheng; Ma, Juanmei; Wu, Lezheng
2002-07-01
To observe the effect of indocyanine green angiography (ICGA) with an infrared fundus camera on subsequent dark adaptation and the Ganzfeld electroretinogram (ERG), the ERGs of 38 eyes with different retinal diseases were recorded before and after ICGA during a 40-min dark adaptation period. ICGA was performed with a Topcon 50IA retina camera. The Ganzfeld ERG was recorded with a Neuropack II evoked response recorder. The results showed that ICGA did not affect the latencies or the amplitudes of the ERG rod response, cone response, or mixed maximum response (p>0.05). This suggests that ICGA using an infrared fundus camera can be performed prior to recording the Ganzfeld ERG.
Portable, low-priced retinal imager for eye disease screening
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto
2014-02-01
The objective of this project was to develop and demonstrate a portable, low-priced, easy-to-use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras, as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) unique optics and illumination designed for a small form factor; 4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.
Micro-Imagers for Spaceborne Cell-Growth Experiments
NASA Technical Reports Server (NTRS)
Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen
2006-01-01
A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.
Design of see-through near-eye display for presbyopia.
Wu, Yishi; Chen, Chao Ping; Zhou, Lei; Li, Yang; Yu, Bing; Jin, Huayi
2017-04-17
We propose a compact design of see-through near-eye display that is dedicated to presbyopia. Our solution is characterized by a plano-convex waveguide, which is essentially an integration of a corrective lens and two volume holograms. Its design rules are set forth in detail, followed by the results and discussion regarding the diffraction efficiency, field of view, modulation transfer function, distortion, and simulated imaging.
Virtual egocenters as a function of display geometric field of view and eye station point
NASA Technical Reports Server (NTRS)
Psotka, Joseph
1993-01-01
The accurate location of one's virtual egocenter in a geometric space is of critical importance for immersion technologies. This experiment was conducted to investigate the role of field of view (FOV) and observer station points in the perception of the location of one's egocenter (the personal viewpoint) in virtual space. Rivalrous cues to the accurate location of one's egocenter may be one factor involved in simulator sickness. Fourteen subjects binocularly viewed an animated 3D model of the room in which they sat, from Eye Station Points (ESP) of either 300 or 800 millimeters. The display was a 190 by 245 mm monitor, at a resolution of 320 by 200 pixels with 256 colors. They saw four models of the room designed with geometric field of view (FOVg) conditions of 18, 48, 86, and 140 degrees. They drew the apparent paths of the camera in the room on a bitmap of the room as seen from infinitely far above. Large differences in the paths of the camera were seen as a function of both FOVg and ESP. Ten of the subjects were then asked to find the position for each display that minimized camera motion. The results fit well with predictions from an equation that takes the ratio of the human FOV (roughly 180 degrees) to FOVg, times the Geometric Eye Point (GEP) of the imager: Zero Station Point = (180/FOVg)*GEP
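The reported relation can be transcribed directly; the example values below are hypothetical:

# Station point that minimizes apparent camera motion: the geometric eye
# point scaled by the ratio of the human FOV (~180 deg) to the display's
# geometric FOV, per the equation above.
def zero_station_point(fov_g_deg: float, gep_mm: float) -> float:
    return (180.0 / fov_g_deg) * gep_mm

print(zero_station_point(48.0, 300.0))  # 48 deg FOVg, 300 mm GEP -> 1125 mm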
NASA Astrophysics Data System (ADS)
Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.
2017-03-01
Two systems for eye and hand gesture recognition are used to control mobile devices. Based on real-time video captured by the device's camera, the first system recognizes the motion of the user's eyes and the second detects static hand gestures. To avoid confusion between natural and intentional movements, we developed a system that fuses the decisions coming from the eye and hand gesture recognition systems. The fusion stage was based on a decision tree approach. We conducted a study on 5 volunteers, and the results show that our system is robust and competitive.
A Support System for Mouse Operations Using Eye-Gaze Input
NASA Astrophysics Data System (ADS)
Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru
We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy, but it can only classify vertical eye-gaze into 3 directions (up, middle, and down). In this paper, we propose a new method for vertical eye-gaze detection that utilizes limbus tracking. Our new eye-gaze input system can therefore detect the two-dimensional coordinates of the user's gaze point. Using this method, we develop a new support system for mouse operations that can move the mouse cursor to the user's gaze point.
Schoenemann, Brigitte; Castellani, Christopher; Clarkson, Euan N. K.; Haug, Joachim T.; Maas, Andreas; Haug, Carolin; Waloszek, Dieter
2012-01-01
Fossilized compound eyes from the Cambrian, isolated and three-dimensionally preserved, provide remarkable insights into the lifestyle and habitat of their owners. The tiny stalked compound eyes described here probably possessed too few facets to form a proper image, but they represent a sophisticated system for detecting moving objects. The eyes are preserved as almost solid, mace-shaped blocks of phosphate, in which the original positions of the rhabdoms in one specimen are retained as deep cavities. Analysis of the optical axes reveals four visual areas, each with different properties in acuity of vision. They are surveyed by lenses directed forwards, laterally, backwards and inwards, respectively. The most intriguing of these is the putatively inwardly orientated zone, where the optical axes, like those orientated to the front, interfere with axes of the other eye of the contralateral side. The result is a three-dimensional visual net that covers not only the front, but extends also far laterally to either side. Thus, a moving object could be perceived by a two-dimensional coordinate (which is formed by two axes of those facets, one of the left and one of the right eye, which are orientated towards the moving object) in a wide three-dimensional space. This compound eye system enables small arthropods equipped with an eye of low acuity to estimate velocity, size or distance of possible food items efficiently. The eyes are interpreted as having been derived from individuals of the early crustacean Henningsmoenicaris scutula, pointing to the existence of highly efficient eyes in the early evolutionary lineage leading towards the modern Crustacea. PMID:22048954
Advanced High-Definition Video Cameras
NASA Technical Reports Server (NTRS)
Glenn, William
2007-01-01
A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.
Evaluation of multispectral plenoptic camera
NASA Astrophysics Data System (ADS)
Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin
2013-01-01
Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (automatic exposure control and automatic gain control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Normal AEC and AGC algorithms are not suitable for an aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is easier for humans to view and analyze. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe, complex environments.
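An illustrative control step consistent with the description above, preferring shutter adjustment and falling back to gain when the shutter is capped to limit motion blur; the target level, limits, and gamma value are assumptions, not the authors' control law:

import numpy as np

def aec_agc_step(frame, shutter_us, gain,
                 target=110.0, gamma=0.45, shutter_max=1000.0):
    """One AEC/AGC iteration on an 8-bit frame (illustrative sketch)."""
    mean = float(frame.mean())
    ratio = target / max(mean, 1.0)          # >1 means the frame is underexposed
    wanted = shutter_us * ratio
    if wanted <= shutter_max:                # prefer shutter over gain (less noise)
        shutter_us = max(wanted, 10.0)
    else:                                    # cap shutter to limit motion blur,
        gain = min(gain * wanted / shutter_max, 16.0)  # make up the rest with gain
        shutter_us = shutter_max
    corrected = (255.0 * (frame / 255.0) ** gamma).astype(np.uint8)  # gamma for viewing
    return shutter_us, gain, corrected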
Multifocal microlens for bionic compound eye
NASA Astrophysics Data System (ADS)
Cao, Axiu; Wang, Jiazhou; Pang, Hui; Zhang, Man; Shi, Lifang; Deng, Qiling; Hu, Song
2017-10-01
A bionic compound eye optical element composed of multi-dimensional sub-eye microlenses plays an important role in miniaturizing the volume and weight of an imaging system. In this manuscript, we present a novel structure for a bionic compound eye with multiple focal lengths. By dividing the microlens into two concentric radial zones, an inner zone and an outer zone with independent radii, each sub-eye becomes a multi-level micro-scale structure with multiple focal lengths. The imaging capability of the structure has been simulated; the results show that the structure can acquire optical information at different depths. The parameters that influence the imaging quality, including the apertures and radii of the two zones, have also been analyzed and discussed. As the ratio of the inner to the outer aperture increases, the imaging quality of the inner zone improves while that of the outer zone degrades. In addition, by controlling the radii of the inner and outer zones independently, sub-eyes with different focal lengths can be designed. As the difference between the inner and outer radii grows, the imaging resolution of the sub-eye decreases. The multifocal structure should therefore be optimized according to the actual imaging quality demands. This study can provide a reference for further applications of multifocal microlenses in bionic compound eyes.
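A back-of-envelope check of the two-zone idea under the thin-lens approximation for a plano-convex surface, f = R/(n - 1); the radii and the index n = 1.56 are assumed values, not the paper's design parameters:

# Giving the inner and outer zones independent curvature radii yields two
# focal lengths within one sub-eye.
def plano_convex_focal(radius_um: float, n: float = 1.56) -> float:
    return radius_um / (n - 1.0)

inner_f = plano_convex_focal(140.0)   # assumed inner-zone radius, 140 um
outer_f = plano_convex_focal(180.0)   # assumed outer-zone radius, 180 um
print(f"inner f = {inner_f:.0f} um, outer f = {outer_f:.0f} um")
# -> inner f = 250 um, outer f = 321 um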
Noninvasive measurement of pharmacokinetics by near-infrared fluorescence imaging in the eye of mice
NASA Astrophysics Data System (ADS)
Dobosz, Michael; Strobel, Steffen; Stubenrauch, Kay-Gunnar; Osl, Franz; Scheuer, Werner
2014-01-01
Purpose: For generating preclinical pharmacokinetics (PKs) of compounds, blood is drawn at different time points and levels are quantified by different analytical methods. In order to obtain statistically meaningful data, 3 to 5 animals are used for each time point to determine the serum peak level and half-life of the compound. Both characteristics are determined by data interpolation, which may influence the accuracy of these values. We provide a method that allows continuous, noninvasive monitoring of blood levels by measuring the fluorescence intensity of labeled compounds in the eye and other body regions of anesthetized mice. Procedures: The method evaluation was performed with four different fluorescent compounds: (i) indocyanine green, a nontargeting dye; (ii) OsteoSense750, a bone-targeting agent; (iii) tumor-targeting Trastuzumab-Alexa750; and (iv) its F(ab')2-Alexa750 fragment. The latter was used for a direct comparison between fluorescence imaging and classical blood analysis using an enzyme-linked immunosorbent assay (ELISA). Results: We found an excellent correlation between blood levels measured by noninvasive eye imaging and the results generated by classical methods. A strong correlation between eye imaging and ELISA was demonstrated for the F(ab')2 fragment. Whole body imaging revealed compound accumulation in the expected regions (e.g., liver, bone). Conclusions: The combination of eye and whole body fluorescence imaging enables the simultaneous measurement of blood PKs and biodistribution of fluorescent-labeled compounds.
Chen, Qing-Xiao; Hua, Bao-Zhen
2016-01-01
Mecoptera are unique in holometabolous insects in that their larvae have compound eyes. In the present study the cellular organisation and morphology of the compound eyes of adult individuals of the scorpionfly Panorpa dubia in Mecoptera were investigated by light, scanning electron, and transmission electron microscopy. The results showed that the compound eyes of adult P. dubia are of the apposition type, each eye comprising more than 1200 ommatidia. The ommatidium consists of a cornea, a crystalline cone made up of four cone cells, eight photoreceptors, two primary pigment cells, and 18 secondary pigment cells. The adult ommatidium has a fused rhabdom with eight photoreceptors. Seven photoreceptors extend from the proximal end of the crystalline cone to the basal matrix, whereas the eighth photoreceptor is shorter, extending from the middle level of the photoreceptor cluster to the basal matrix. The fused rhabdom is composed of the rhabdomeres of different photoreceptors at different levels. The adult ommatidia have the same cellular components as the larval ommatidia, but the tiering scheme is different. PMID:27258365
Fish-eye view of STS-112 crew on middeck
2002-10-18
STS112-337-034 (18 October 2002) --- A fish-eye lens on a 35 mm camera records astronaut Pamela A. Melroy, STS-112 pilot, at the pilot's station on the forward flight deck of the Space Shuttle Atlantis. Melroy, attired in her shuttle launch and entry suit, looks over a checklist prior to the entry phase of the flight.
Narendra, Ajay; Alkaladi, Ali; Raderschall, Chloé A.; Robson, Simon K. A.; Ribi, Willi A.
2013-01-01
The Australian intertidal ant Polyrhachis sokolova lives in mudflat habitats and nests at the base of mangroves. These solitary foraging ants rely on visual cues and are active during low tide both by day and by night, and thus experience a wide range of light intensities. We here ask to what extent the compound eyes of P. sokolova reflect the fact that they operate during both day and night. The ants have typical apposition compound eyes with 596 ommatidia per eye and an interommatidial angle of 6.0°. We find that the ants have developed large lenses (33 µm in diameter) and wide rhabdoms (5 µm in diameter) that make their eyes highly sensitive in low light conditions. To remain active in bright light, the ants have developed an extreme pupillary mechanism in which the primary pigment cells constrict the crystalline cone to form a narrow tract 0.5 µm wide and 16 µm long. This pupillary mechanism protects the photoreceptors from bright light, making the eyes less sensitive during the day. The dorsal rim area of the compound eye has specialised photoreceptors that could aid in detecting the orientation of the pattern of polarised skylight, which would help the animals determine the compass directions required for navigating between nest and food sources. PMID:24155883
Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Hastedt, H.; Ekkel, T.; Luhmann, T.
2016-06-01
The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with a normal lens type are applied to a UAV system. The availability of action cameras including a wide-angle (fish-eye) lens, like the GoPro Hero4 Black, offers new perspectives for UAV projects. In these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. The GoPro Hero4 is evaluated using different acquisition modes. We investigate the extent to which the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Different calibration setups and processing procedures are therefore assessed and discussed. Additionally, a pre-correction of the initial distortion by GoPro Studio and its application to photogrammetric purposes is evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. We analyse the extent to which a pre-calibration and pre-correction of a GoPro Hero4 reinforces the reliability and accuracy of a flight scenario.
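A sketch of the standard OpenCV fisheye calibration route mentioned above, using a checkerboard; the board size, flags, and image file names are illustrative assumptions:

import cv2
import numpy as np

board = (9, 6)                                    # inner checkerboard corners
objp = np.zeros((1, board[0] * board[1], 3), np.float64)
objp[0, :, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["cal01.jpg", "cal02.jpg"]:          # hypothetical calibration shots
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        obj_points.append(objp)
        img_points.append(corners.reshape(1, -1, 2).astype(np.float64))

K = np.zeros((3, 3))                              # intrinsics, solved in place
D = np.zeros((4, 1))                              # fisheye distortion coefficients
rms, K, D, _, _ = cv2.fisheye.calibrate(
    obj_points, img_points, gray.shape[::-1], K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW)
print("RMS reprojection error:", rms)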
Near-infrared transillumination photography of intraocular tumours.
Krohn, Jørgen; Ulltang, Erlend; Kjersem, Bård
2013-10-01
To present a technique for near-infrared transillumination imaging of intraocular tumours based on the modifications of a conventional digital slit lamp camera system. The Haag-Streit Photo-Slit Lamp BX 900 (Haag-Streit AG) was used for transillumination photography by gently pressing the tip of the background illumination cable against the surface of the patient's eye. Thus the light from the flash unit was transmitted into the eye, leading to improved illumination and image resolution. The modification for near-infrared photography was done by replacing the original camera with a Canon EOS 30D (Canon Inc) converted by Advanced Camera Services Ltd. In this camera, the infrared blocking filter was exchanged for a 720 nm long-pass filter, so that the near-infrared part of the spectrum was recorded by the sensor. The technique was applied in eight patients: three with anterior choroidal melanoma, three with ciliary body melanoma and two with ocular pigment alterations. The good diagnostic quality of the photographs made it possible to evaluate the exact location and extent of the lesions in relation to pigmented intraocular landmarks such as the ora serrata and ciliary body. The photographic procedure did not lead to any complications. We recommend near-infrared transillumination photography as a supplementary diagnostic tool for the evaluation and documentation of anteriorly located intraocular tumours.
Improved iris localization by using wide and narrow field of view cameras for iris recognition
NASA Astrophysics Data System (ADS)
Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung
2013-10-01
Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which inevitably decreases both detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
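A minimal sketch of the Z-distance step (the second point above) under a pinhole model, assuming a typical anthropometric iris diameter of about 11.7 mm; the focal length and pixel measurement are hypothetical:

# The iris's pixel size in the calibrated WFOV image, together with its
# known physical size, gives the eye-to-camera distance.
def z_distance_mm(f_px: float, iris_px: float, iris_mm: float = 11.7) -> float:
    return f_px * iris_mm / iris_px

print(z_distance_mm(f_px=1400.0, iris_px=40.0))  # -> ~409 mm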
Noncontact detection of dry eye using a custom designed infrared thermal image system
NASA Astrophysics Data System (ADS)
Su, Tai Yuan; Hwa, Chen Kerh; Liu, Po Hsuan; Wu, Ming Hong; Chang, David O.; Su, Po Fang; Chang, Shu Wen; Chiang, Huihua Kenny
2011-04-01
Dry eye syndrome is a common irritating eye disease. Current clinical diagnostic methods are invasive and uncomfortable for patients. This study developed a custom-designed noncontact infrared (IR) thermal image system to measure the spatial and temporal variation of the ocular surface temperature over a 6-second eye-open period. This research defined two parameters, the temperature difference value and the compactness value, to represent the temperature change and the irregularity of the temperature distribution on the tear film. Using these two parameters, this study achieved discrimination between the dry eye and normal eye groups: the sensitivity is 0.84, the specificity is 0.83, and the receiver operating characteristic area is 0.87. The results suggest that the custom-designed IR thermal image system may be used as an effective tool for noncontact detection of dry eye.
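The paper defines its own compactness value; as an illustrative stand-in, one conventional irregularity measure for a segmented region of the thermal map is the isoperimetric ratio, which equals 1 for a disc and grows as the boundary becomes more irregular:

import math

def isoperimetric_compactness(perimeter: float, area: float) -> float:
    """P^2 / (4*pi*A): 1.0 for a perfect circle, larger for irregular shapes."""
    return perimeter ** 2 / (4.0 * math.pi * area)

print(isoperimetric_compactness(perimeter=120.0, area=800.0))  # ~1.43, non-circular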
Noncontact detection of dry eye using a custom designed IR thermal image system
NASA Astrophysics Data System (ADS)
Su, Tai Yuan; Chen, Kerh Hwa; Liu, Po Hsuan; Wu, Ming Hong; Chang, David O.; Chiang, Huihua
2011-03-01
Dry eye syndrome is a common irritating eye disease. Current clinical diagnostic methods are invasive and uncomfortable for patients. A custom-designed noncontact infrared (IR) thermal image system was developed to measure the spatial and temporal variation of the ocular surface temperature over a 6-second eye-opening period. We defined two parameters, the temperature difference value and the compactness value, to represent the degree of temperature change and the irregularity of the temperature distribution on the tear film. Using these two parameters, a linear discrimination between the dry eye and normal eye groups was achieved in this study; the sensitivity is 0.9, the specificity is 0.86, and the receiver operating characteristic (ROC) area is 0.91. The results suggest that the custom-designed IR thermal image system may be used as an effective tool for noncontact detection of dry eye.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming
2010-11-04
We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency-Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact Frequency-Domain Tomography system with a single spectrometer.
A new compact, high sensitivity neutron imaging system
NASA Astrophysics Data System (ADS)
Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.
2012-10-01
We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low-yield (10⁹-10¹⁰ neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment, where it recorded an image at a yield of 1.4 × 10¹⁰. The resolution of this image was 54 μm, and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a ⁶⁰Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.
Kinematics of Visually-Guided Eye Movements
Hess, Bernhard J. M.; Thomassen, Jakob S.
2014-01-01
One of the hallmarks of an eye movement that follows Listing's law is the half-angle rule, which says that the angular velocity of the eye tilts by half the angle of eccentricity of the line of sight relative to primary eye position. Since all visually-guided eye movements in the regime of far viewing follow Listing's law (with the head still and upright), the question of its origin is of considerable importance. Here, we provide theoretical and experimental evidence that Listing's law results from a unique motor strategy that allows minimizing ocular torsion while smoothly tracking objects of interest along any path in visual space. The strategy consists in compounding conventional ocular rotations in meridian planes, that is, in horizontal, vertical and oblique directions (which are all torsion-free), with small linear displacements of the eye in the frontal plane. Such compound rotation-displacements of the eye can explain the kinematic paradox that the fixation point may rotate in one plane while the eye rotates in other planes. Its unique signature is the half-angle law in the position domain, which means that the rotation plane of the eye tilts by half the angle of gaze eccentricity. We show that this law does not readily generalize to the velocity domain of visually-guided eye movements because the angular eye velocity is the sum of two terms, one associated with rotations in meridian planes and one associated with displacements of the eye in the frontal plane. While the first term does not depend on eye position, the second term does. We show that compounded rotation-displacements perfectly predict the average smooth kinematics of the eye during steady-state pursuit in both the position and velocity domains. PMID:24751602
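One standard route to the half-angle rule, sketched here for context rather than as the authors' derivation: write the eye orientation as a rotation by the eccentricity angle \(\varepsilon\) about a unit axis \(\mathbf{n}(t)\) confined to Listing's plane. The angular velocity of such a rotation is

\[
\boldsymbol{\omega} \;=\; \dot{\varepsilon}\,\mathbf{n}
\;+\; \sin\varepsilon\,\dot{\mathbf{n}}
\;+\; (1-\cos\varepsilon)\,\bigl(\mathbf{n}\times\dot{\mathbf{n}}\bigr),
\qquad
\frac{1-\cos\varepsilon}{\sin\varepsilon} \;=\; \tan\frac{\varepsilon}{2}.
\]

The first two terms lie in Listing's plane while \(\mathbf{n}\times\dot{\mathbf{n}}\) is orthogonal to it, so whenever the rotation axis drifts (\(\dot{\mathbf{n}} \neq 0\)) the angular velocity tilts out of the plane by exactly \(\varepsilon/2\), consistent with the position-independent and position-dependent terms described above.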
Warrant, Eric J; Locket, N Adam
2004-08-01
The deep sea is the largest habitat on earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bioluminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the detection and localisation of point-source bioluminescence at ecologically meaningful distances. At all depths, the eyes of animals active on and over the nutrient-rich sea floor are generally larger than the eyes of pelagic species. In fishes, the retinal ganglion cells are also frequently arranged in a horizontal visual streak, an adaptation for viewing the wide flat horizon of the sea floor, and all animals living there. These and many other aspects of light and vision in the deep sea are reviewed in support of the following conclusion: it is not only the intensity of light at different depths, but also its distribution in space, which has been a major force in the evolution of deep-sea vision.
On Biometrics With Eye Movements.
Zhang, Youming; Juhola, Martti
2017-09-01
Eye movements are a relatively novel data source for biometric identification. As the video cameras applied to eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we study primarily biometric identification, seen as a classification task with multiple classes, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement signals measured from 109 young subjects. To test the measured data, we use a procedure of biometric identification according to the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply a different eye movement tracker device with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80-90% at best.
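A schematic of multi-class identification via one-versus-one voting, as in the one-versus-one (subject) procedure described above; the features and classifier choice are placeholders, and only the class structure (109 subjects, several saccade feature vectors each) mirrors the paper:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(109 * 20, 8))     # placeholder saccade feature vectors
y = np.repeat(np.arange(109), 20)      # subject labels, 20 samples each

# One-versus-one multi-class SVM: train on half the samples, identify the rest.
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X[::2], y[::2])
print("identification rate:", (clf.predict(X[1::2]) == y[1::2]).mean())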
John, Sheila; Premila, M; Javed, Mohd; Vikas, G; Wagholikar, Amol
2015-01-01
To report on a unique, first-of-its-kind telehealth pilot study in India that provided virtual consultations to eye care patients in low-resource, remote villages. Providing access to eye care services for remote populations is always challenging for practical reasons. Advances in telehealth technologies have provided an opportunity to improve access for remote populations; however, current telehealth technologies are limited to face-to-face video consultation only. We report on a pilot study that demonstrates real-time imaging access to ophthalmologists. Our innovative software-led technology solution allowed screening of patients with varying ocular conditions. Eye camps were conducted in 2 districts in South India over a 12-month period in 2014. A total of 196 eye camps were conducted, attended by 19,634 patients. The software was used to conduct consultations with the ophthalmologist located in the city hospital; it enabled virtual visits and allowed instant sharing of fundus camera images for assessment and diagnosis. About 71% of the patients were found to have refractive error problems, 15% were found to have cataract, 7% were diagnosed with retinal problems, and 7% were found to have other ocular diseases. Patients requiring cataract surgery were immediately transferred to the city hospital for treatment. Software-led assessment of fundus camera images assisted in identifying retinal eye diseases. Our real-time virtual visit software assisted in specialist care provision and demonstrates a novel telehealth solution for low-resource populations.
Dissolution Mechanism for High Melting Point Transition Elements in Aluminum Melt
NASA Astrophysics Data System (ADS)
Lee, Young E.; Houser, Stephen L.
When compacts of high-melting-point transition-metal elements such as Mn, Fe, Cr, Ni, Ti, Cu, and Zn are added cold to an aluminum melt, the alloying process follows a sequence of incubation, exothermic reactions that form intermetallic compounds, and dispersion of the alloying elements into the aluminum melt. Experiments with Cr compacts show that the incubation period is affected by the content of ingredient Al, the size of the compacts, and the size of the Cr particles. The incubation period becomes longer as the content of ingredient aluminum in the compact decreases, and this prolonged incubation period negatively impacts the dissolution of the alloying elements in aluminum. Once liquid aluminum forms at reaction sites, the exothermic reaction takes place quickly and significantly raises the temperature of the compacts. As a result, the compacts swell in volume with a sponge-like structure. Such a porous structure encourages the penetration of liquid aluminum from the melt. The compacts become mechanically weak, and the alloying elements are dispersed and entrained in the aluminum melt as small, discrete units. When the Cr compacts are deficient in aluminum, unreacted Cr particles are encased by the intermetallic compounds in the dispersed particles; they are carried in the melt flow and continue the dissolution reaction in aluminum. The entire dissolution process of Cr compacts completes within 10 to 15 minutes with full recovery when the aluminum content of the compact is 10 to 20%.
Jacob, Julie; Paques, Michel; Krivosic, Valérie; Dupas, Bénédicte; Erginay, Ali; Tadayoni, Ramin; Gaudric, Alain
2017-01-01
To analyze cone mosaic metrics on adaptive optics (AO) images as a function of retinal eccentricity in two different age groups using a commercial flood-illumination AO device. Fifty-three eyes of 28 healthy subjects divided into two age groups were imaged using an AO flood-illumination camera (rtx1; Imagine Eyes, Orsay, France). A 16° × 4° field was obtained horizontally. Cone-packing metrics were determined in five neighboring 50 µm × 50 µm regions. Both retinal (cones/mm² and µm) and visual (cones/degree² and arcmin) units were computed. Results for cone mosaic metrics at 2°, 2.5°, 3°, 4°, and 5° eccentricity were compatible with previous AO scanning laser ophthalmoscopy and histology data. No significant difference was observed between the two age groups. The rtx1 camera enabled reproducible measurements of cone-packing metrics across the extrafoveal retina. These findings may contribute to the development of normative data and act as a reference for future research. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:45-50.]. Copyright 2017, SLACK Incorporated.
Colonnier, Fabien; Manecy, Augustin; Juston, Raphaël; Mallot, Hanspeter; Leitel, Robert; Floreano, Dario; Viollet, Stéphane
2015-02-25
In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using micro-movements similar to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. In this study, we developed a new algorithm merging the output of 35 local processing units consisting of adjacent pairs of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom equipped with the active CurvACE, placed over naturally textured panels, was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm was found to perform robustly under different lighting conditions as well as under variations in distance relative to the ground, and featured small closed-loop positioning errors of the robot in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.
Study on real-time images compounded using spatial light modulator
NASA Astrophysics Data System (ADS)
Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang
2007-01-01
Image compounding technology is often used in film and film production. Conventionally, image compounding uses image-processing algorithms: useful objects, details, backgrounds or other elements are first extracted from the source images, and all of this information is then compounded into one image. With this method the film system needs a powerful processor, and because the processing is complex, the compounded image is obtained only after some delay. In this paper, we introduce a new method of real-time image compounding with which compounding can be done at the same time as the movie is shot. The whole system is made up of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), and so on. First, one camera lens, which we call the first imaging lens, images the object onto the spatial light modulator's panel. Second, we output an image to the panel of the spatial light modulator, so that the image of the object and the image output by the spatial light modulator are spatially compounded on the panel. Third, the other camera lens, which we call the second imaging lens, images the compounded image onto the image sensor. After these three steps, the compound image is obtained from the image sensor. Because the spatial light modulator can output images continuously, the compounding is also continuous, and the procedure is completed in real time. With this method, to place a real object into a virtual background, we output the virtual background scene on the spatial light modulator while the real object is imaged by the first imaging lens, and the compounded images are obtained by the image sensor in real time. In the same way, to place a virtual object into a real background, we output the virtual object on the spatial light modulator while the real background is imaged by the first imaging lens. Commonly, a spatial light modulator can only modulate light intensity, so with a single panel without a color filter only black-and-white images can be compounded. To obtain colorful compounded images, a system like a three-panel spatial light modulator projector is needed. The paper gives the framework of the system's optics. In all experiments, the spatial light modulator was a liquid crystal on silicon (LCoS) panel. At the end of the paper, some original pictures and compounded pictures are given. Although the system has a few shortcomings, we can conclude that compounding images with this system involves no delay for mathematical compounding processing; it is a truly real-time image compounding system.
A compact and lightweight off-axis lightguide prism in near to eye display
NASA Astrophysics Data System (ADS)
Zhuang, Zhenfeng; Cheng, Qijia; Surman, Phil; Zheng, Yuanjin; Sun, Xiao Wei
2017-06-01
We propose a method to improve the design of an off-axis lightguide configuration for near-to-eye displays (NED) using freeform optics technology. The advantage of this modified optical system, which includes an organic light-emitting diode (OLED), a doublet lens, an imaging lightguide prism, and a compensation prism, is that it increases the optical path length, offers a smaller size, avoids obstructed views, and matches the user's head shape. In this system, the light emitted from the OLED passes through the doublet lens and is refracted/reflected by the imaging lightguide prism, which magnifies the image from the microdisplay, while the compensation prism corrects the light ray shift so that a low-distortion image can be observed against the real-world scene. A NED with a 4 mm diameter exit pupil, 21.5° diagonal full field of view (FoV), 23 mm eye relief, and a size of 33 mm by 9.3 mm by 16 mm is designed. The developed system is compact, lightweight, and suitable for entertainment and education applications.
Second harmonic generation microscopy of the living human cornea
NASA Astrophysics Data System (ADS)
Artal, Pablo; Ávila, Francisco; Bueno, Juan
2018-02-01
Second Harmonic Generation (SHG) microscopy provides high-resolution structural imaging of the corneal stroma without the need for labelling techniques. This powerful tool had never before been applied to living human eyes. Here, we present a new compact SHG microscope specifically developed to image the structural organization of the corneal lamellae in living healthy human volunteers. The research prototype incorporates a long-working-distance dry objective that allows non-contact three-dimensional SHG imaging of the cornea. Safety and effectiveness of the system were first tested in fresh ex-vivo eyes. The maximum average power of the illumination laser was 20 mW, more than 10 times below the maximum permissible exposure (according to ANSI Z136.1-2000). The instrument was successfully employed to obtain non-contact and non-invasive SHG images of the living human eye within well-established light safety limits. This represents the first recording of in vivo SHG images of the human cornea using a compact multiphoton microscope, which might become an important tool in ophthalmology for early diagnosis and tracking of ocular pathologies.
Preparation of Ti3Al intermetallic compound by spark plasma sintering
NASA Astrophysics Data System (ADS)
Ito, Tsutomu; Fukui, Takahiro
2018-04-01
Sintered compacts of single phase Ti3Al intermetallic compound, which have excellent potential as refractory materials, were prepared by spark plasma sintering (SPS). A raw powder of Ti3Al intermetallic compound with an average powder diameter of 176 ± 56 μm was used in this study; this large powder diameter is disadvantageous for sintering because of the small surface area. The samples were prepared at sintering temperatures (Ts) of 1088, 1203, and 1323 K, sintering stresses (σs) of 16, 32, and 48 MPa, and a sintering time (ts) of 10 min. The calculated relative densities based on the apparent density of Ti3Al provided by the supplier were approximately 100% under all sintering conditions. From the experimental results, it was evident that SPS is an effective technique for dense sintering of Ti3Al intermetallic compounds in a short time interval. In this report, the sintering characteristics of Ti3Al intermetallic compacts are briefly discussed and compared with those of pure titanium compacts.
NASA Astrophysics Data System (ADS)
Cucci, Costanza; Casini, Andrea; Stefani, Lorenzo; Picollo, Marcello; Jussila, Jouni
2017-07-01
For more than a decade, a number of studies and research projects have been devoted to customizing hyperspectral imaging techniques to the specific needs of conservation and applications in museum contexts. A growing scientific literature has demonstrated the effectiveness of reflectance hyperspectral imaging for non-invasive diagnostics and high-quality documentation of 2D artworks. Additional published studies tackle the problems of data processing, with a focus on the development of algorithms and software platforms optimised for visualisation and exploitation of hyperspectral big-data sets acquired on paintings. This scenario proves that, also in the field of Cultural Heritage (CH), reflectance hyperspectral imaging has now reached the stage of a mature technology and is ready for the transition from the R&D phase to large-scale applications. In view of that, a novel concept of hyperspectral camera, featuring compactness, lightness, and good usability, has been developed by SPECIM, Spectral Imaging Ltd. (Oulu, Finland), a company manufacturing products for hyperspectral imaging. The camera is proposed as a new tool for novel applications in the field of Cultural Heritage. The novelty of this device lies in its reduced dimensions and weight and in its user-friendly interface, which make this camera much more manageable and affordable than conventional hyperspectral instrumentation. The camera operates in the 400-1000 nm spectral range and can be mounted on a tripod. It can operate from short distances (tens of cm) to long distances (tens of meters) with different spatial resolutions. The first release of the prototype underwent preliminary in-depth experimentation at the IFAC-CNR laboratories. This paper illustrates the feasibility study carried out on the new SPECIM hyperspectral camera, tested under different conditions on laboratory targets and artworks, with the specific aim of defining its potential and weaknesses for use in the Cultural Heritage field.
A compact 16-module camera using 64-pixel CsI(Tl)/Si p-i-n photodiode imaging modules
NASA Astrophysics Data System (ADS)
Choong, W.-S.; Gruber, G. J.; Moses, W. W.; Derenzo, S. E.; Holland, S. E.; Pedrali-Noy, M.; Krieger, B.; Mandelli, E.; Meddeler, G.; Wang, N. W.; Witt, E. K.
2002-10-01
We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules, resulting in a 1024-pixel camera covering an area of 9.6 cm × 9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm × 3 mm × 5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each imaging module plugs into a readout motherboard that controls the modules and interfaces with a data acquisition card inside a computer. For a given event, the motherboard employs a custom winner-take-all IC to identify the module with the largest analog output and to enable the output address bits of the corresponding module's readout IC. These address bits identify the "winner" pixel within the "winner" module. The peak of the largest analog signal is found and held using a peak-detect circuit, after which it is acquired by an analog-to-digital converter on the data acquisition card. The camera is currently operated with four imaging modules in order to characterize its performance. At room temperature, the camera demonstrates an average energy resolution of 13.4% full-width at half-maximum (FWHM) for the 140-keV emissions of 99mTc. The system spatial resolution is measured using a capillary tube with an inner diameter of 0.7 mm located 10 cm from the face of the collimator. Images of the line source in air exhibit average system spatial resolutions of 8.7- and 11.2-mm FWHM when using an all-purpose and a high-sensitivity parallel hexagonal-hole collimator, respectively. These values do not change significantly when an acrylic scattering block is placed between the line source and the camera.
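The winner-take-all selection is easy to picture in software. The Python sketch below mimics the address logic on the motherboard: pick the module with the largest analog output, then the hottest pixel within it. This is an illustrative software analogue of the hardware, not the actual circuit behavior of the custom IC.

```python
import numpy as np

def winner_address(module_outputs):
    """Software analogue of the winner-take-all readout.

    `module_outputs` is a (n_modules, 64) array of analog peak
    values, one row per imaging module. Returns the (module, pixel)
    address of the largest signal, mimicking the address bits the
    readout ICs assert for a scintillation event.
    """
    outputs = np.asarray(module_outputs)
    module = int(np.argmax(outputs.max(axis=1)))   # "winner" module
    pixel = int(np.argmax(outputs[module]))        # "winner" pixel in it
    return module, pixel

# Example with the 4 modules currently installed:
event = np.random.rand(4, 64)
print(winner_address(event))
```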
a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient: less than 1% of the flying time is used for collecting light. This unused potential can be exploited by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields), and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the second-generation, commercially available ButterflEYE camera offering an extended spectral range (475-925 nm), and we discuss future work.
A Well-Traveled 'Eagle Crater' (left-eye)
NASA Technical Reports Server (NTRS)
2004-01-01
This is the left-eye version of the Mars Exploration Rover Opportunity's view on its 56th sol on Mars, before it left its landing-site crater. To the right, the rover tracks are visible at the original spot where the rover attempted unsuccessfully to exit the crater. After a one-sol delay, Opportunity took another route to the plains of Meridiani Planum. This image was taken by the rover's navigation camera.
Fish-eye view of STS-112 CDR Ashby on forward flight deck
2002-10-18
STS112-347-001 (18 October 2002) --- A fish-eye lens on a 35mm camera records astronaut Jeffrey S. Ashby, STS-112 mission commander, at the commander's station on the forward flight deck of the Space Shuttle Atlantis. Ashby, attired in his shuttle launch and entry suit, looks over a checklist prior to the entry phase of the flight.
Fish-eye view of PLT Melroy and MS Wolf on forward flight deck
2002-10-18
STS112-337-036 (18 October 2002) --- A fish-eye lens on a 35mm camera records astronauts Jeffrey S. Ashby (left), STS-112 mission commander; Pamela A. Melroy, pilot; and David A. Wolf, mission specialist, on the forward flight deck of the Space Shuttle Atlantis. Attired in their shuttle launch and entry suits, the crew prepares for the entry phase of the flight.
Pediatric Eye Screening Instrumentation
NASA Astrophysics Data System (ADS)
Chen, Ying-Ling; Lewis, J. W. L.
2001-11-01
Computational evaluations are presented for binocular eye screening using the off-axis digital retinascope. The retinascope, such as the iScreen digital screening system, has been employed to perform pediatric binocular screening using a flash lamp and single-shot camera recording. The digital images are transferred electronically to a reading center for analysis. The method has been shown to detect refractive error, amblyopia, anisocoria, and ptosis. This computational work improves the performance of the system and forms the basis for automated data analysis. For this purpose, various published eye models are evaluated with simulated retinascope images. Two to ten million rays are traced in each image calculation. The poster presents simulation results for a range of eye conditions: refractive errors of -20 to +20 diopters with 0.5- to 1-diopter resolution, pupil sizes of 3 to 8 mm diameter (1-mm increments), and staring angles of 2 to 12 degrees (2-degree increments). The variation of the results with system conditions, such as the off-axis distance of the light source and the shutter size of the camera, is also evaluated. A quantitative analysis for each eye and system condition is then performed to obtain parameters for automatic reading. A summary of the system performance is given and performance-enhancing design modifications are presented.
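As a rough illustration of the scale of that simulation campaign, the Python sketch below enumerates the condition grid quoted above (the 0.5 D refraction step is used; the variable names are ours, not the authors').

```python
import numpy as np
from itertools import product

# Condition grid quoted in the abstract (0.5 D refraction step shown)
refractive_errors = np.arange(-20.0, 20.0 + 0.5, 0.5)  # diopters
pupil_diameters = np.arange(3, 9)                      # mm, 1-mm increments
staring_angles = np.arange(2, 13, 2)                   # degrees, 2-degree steps

conditions = list(product(refractive_errors, pupil_diameters, staring_angles))
print(len(conditions), "retinascope images to simulate")  # 2916 conditions
```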
NASA Astrophysics Data System (ADS)
Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.
2014-06-01
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle, or with time, the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging, with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.
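The band-assembly step can be pictured with a short Python sketch: each exposure contributes one strip per filter band, and the strips are placed into a common scene frame using the inter-frame motion. The constant row shift and the function names are simplifying assumptions; the paper itself uses feature-based image coregistration rather than a known shift.

```python
import numpy as np

def assemble_bands(frames, band_rows, shift_per_frame):
    """Collect one strip per spectral band from each frame.

    `frames` is a list of 2-D images from the filtered sensor region,
    `band_rows` maps band index -> (row_start, row_stop) of that filter
    stripe, and `shift_per_frame` is the (assumed known) scene motion
    in rows between consecutive exposures.
    """
    n_bands = len(band_rows)
    h, w = frames[0].shape
    cube = np.full((n_bands, h, w), np.nan)   # scene-aligned band stack
    for t, frame in enumerate(frames):
        offset = t * shift_per_frame
        for b, (r0, r1) in enumerate(band_rows):
            rows = np.arange(r0, r1) + offset   # strip position in scene
            keep = (rows >= 0) & (rows < h)
            cube[b, rows[keep]] = frame[r0:r1][keep]
    return cube
```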
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rathore, Kavita; Munshi, Prabhat; Bhattacharjee, Sudeep
A new non-invasive diagnostic system is developed for Microwave Induced Plasma (MIP) to reconstruct tomographic images of a 2D emission profile. A compact MIP system has wide application in industry as well as in research, for example in thrusters for space propulsion, high-current ion beams, and the creation of negative ions for heating of fusion plasma. The emission profile depends on two crucial parameters, namely, the electron temperature and density (over the entire spatial extent) of the plasma system. Emission tomography provides a basic understanding of plasmas, and it is very useful for monitoring the internal structure of plasma phenomena without disturbing the actual processes. This paper presents the development of a compact, modular, and versatile Optical Emission Tomography (OET) tool for a cylindrical, magnetically confined MIP system. It has eight slit-hole cameras, each consisting of a complementary metal-oxide-semiconductor linear image sensor for light detection. The optical noise is reduced by using an aspheric lens and interference band-pass filters in each camera. The entire cylindrical plasma can be scanned with an automated sliding-ring mechanism arranged in fan-beam data-collection geometry. The design of the camera includes a unique possibility to incorporate different filters to select particular wavelengths of light from the plasma. This OET system includes band-pass filters for the argon emission lines at 750 nm, 772 nm, and 811 nm and the hydrogen emission lines Hα (656 nm) and Hβ (486 nm). A convolution back projection algorithm is used to obtain the tomographic images of the plasma emission lines. The paper mainly focuses on (a) the design of the OET system in detail and (b) a study of the emission profile for the 750 nm argon emission line to validate the system design.
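For readers unfamiliar with convolution (filtered) back projection, the Python sketch below reconstructs a synthetic 2D emission profile with scikit-image's radon/iradon pair. It is a parallel-beam stand-in for the paper's fan-beam geometry (fan-beam data would first be rebinned), and the phantom and angle count are arbitrary choices of ours.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic 2-D emission profile standing in for the 750 nm argon line
profile = np.zeros((128, 128))
profile[48:80, 40:96] = 1.0            # bright emitting region

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(profile, theta=angles)                      # projections
recon = iradon(sinogram, theta=angles, filter_name="ramp")   # CBP step
```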
Fly's Eye camera system: optical imaging using a hexapod platform
NASA Astrophysics Data System (ADS)
Jaskó, Attila; Pál, András.; Vida, Krisztián.; Mészáros, László; Csépány, Gergely; Mező, György
2014-07-01
The Fly's Eye Project is a high-resolution, high-coverage time-domain survey in multiple optical passbands: our goal is to cover the entire visible sky above 30° horizontal altitude with a cadence of ~3 min. Imaging is performed by 19 wide-field cameras mounted on a hexapod platform resembling a fly's eye. Using a hexapod developed and built by our team allows us to create a highly fault-tolerant instrument that uses the sky as a reference to define its own tracking motion. The virtual axis of the platform is automatically aligned with the Earth's rotational axis; therefore, the same mechanics can be used independently of the geographical location of the device. Its enclosure makes it capable of autonomous observing and of withstanding harsh environmental conditions. We briefly introduce the electrical, mechanical, and optical design concepts of the instrument and summarize our early results, focusing on sidereal tracking. Because the hexapod design makes the construction independent of the actual location, it is considerably easier to build, install, and operate a network of such devices around the world.
Curiosity Rover View of Alluring Martian Geology Ahead
2015-08-05
A southward-looking panorama combining images from both cameras of the Mast Camera (Mastcam) instrument on NASA's Curiosity Mars Rover shows diverse geological textures on Mount Sharp. Three years after landing on Mars, the mission is investigating this layered mountain for evidence about changes in Martian environmental conditions, from an ancient time when conditions were favorable for microbial life to the much-drier present. Gravel and sand ripples fill the foreground, typical of terrains that Curiosity traversed to reach Mount Sharp from its landing site. Outcrops in the midfield are of two types: dust-covered, smooth bedrock that forms the base of the mountain, and sandstone ridges that shed boulders as they erode. Rounded buttes in the distance contain sulfate minerals, perhaps indicating a change in the availability of water when they formed. Some of the layering patterns on higher levels of Mount Sharp in the background are tilted at different angles than others, evidence of complicated relationships still to be deciphered. The scene spans from southeastward at left to southwestward at right. The component images were taken on April 10 and 11, 2015, the 952nd and 953rd Martian days (or sols) since the rover's landing on Mars on Aug. 6, 2012, UTC (Aug. 5, PDT). Images in the central part of the panorama are from Mastcam's right-eye camera, which is equipped with a 100-millimeter-focal-length telephoto lens. Images used in outer portions, including the most distant portions of the mountain in the scene, were taken with Mastcam's left-eye camera, using a wider-angle, 34-millimeter lens. http://photojournal.jpl.nasa.gov/catalog/PIA19803
Chen, Tijun; Gao, Min; Tong, Yunqi
2018-01-15
To prepare core-shell-structured Ti@compound particle (Ti@compoundₚ) reinforced Al matrix composites via powder thixoforming, the effects of alloying elements, such as Si, Cu, Mg, and Zn, on the reaction between Ti powders and the Al melt, and on the microstructure of the resulting reinforcements, were investigated during heating of powder compacts at 993 K (720 °C). Simultaneously, the state of the reinforcing particles in the corresponding semisolid compacts was also studied. Both thermodynamic analysis and experimental results indicate that Si participated in the reaction and promoted the formation of Al-Ti-Si ternary compounds, while Cu, Mg, and Zn did not take part in the reaction and facilitated the formation of the Al₃Ti phase to different degrees. The first-formed Al-Ti-Si ternary compound was the τ1 phase, which then gradually transformed into the (Al,Si)₃Ti phase. The proportion and lifetime of the τ1 phase both increased as the Si content increased. In contrast, Mg had the largest effect, Cu the least, and Si and Zn an equivalent intermediate effect on accelerating the reaction. The thicker the reaction shell, the larger the stress generated in the shell, and thus the looser the shell microstructure. The stress generated in the (Al,Si)₃Ti phase was larger than that in the τ1 phase, but smaller than that in the Al₃Ti phase. Hence, the shells in the Al-Ti-Si system were more compact than those in the other systems, and Si was beneficial for obtaining thick and compact compound shells. Most of the above results were consistent with those in the semisolid state, except for the product phase constituents in the Al-Ti-Mg system and the reaction rate in the Al-Ti-Zn system. More importantly, the desirable core-shell-structured Ti@compoundₚ was only achieved in the semisolid Al-Ti-Si system.
Optical analysis of a compound quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Wall, S. D.; Burcher, E. E.; Huck, F. O.
1974-01-01
A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept is primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between the camera and the auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement comes at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.
Patient Eye Examinations - Adults
Lipstick and Lead: Questions and Answers
... cosmetics, such as eye shadows, blushes, compact powders, shampoos, and body lotions. Our guidance recommends a maximum ... less than 10 ppm lead. Based on our surveys we determined that manufacturers are capable of limiting ...
ePix100 camera: Use and applications at LCLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.
2016-07-27
The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, a noise lower than 80 e- rms and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast readout, direct conversion, scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.
AO corrected satellite imaging from Mount Stromlo
NASA Astrophysics Data System (ADS)
Bennet, F.; Rigaut, F.; Price, I.; Herrald, N.; Ritchie, I.; Smith, C.
2016-07-01
The Research School of Astronomy and Astrophysics has been developing adaptive optics systems for space situational awareness. As part of this program we have developed satellite imaging using compact adaptive optics systems for small (1-2 m) telescopes such as those operated by Electro Optic Systems (EOS) from the Mount Stromlo Observatory. We have focused on making compact, simple, and high-performance AO systems using modern high-stroke, high-speed deformable mirrors and EMCCD cameras. We are able to track satellites down to magnitude 10 with a Strehl ratio in excess of 20% in median seeing.
Shack-Hartmann wavefront sensor using a Raspberry Pi embedded system
NASA Astrophysics Data System (ADS)
Contreras-Martinez, Ramiro; Garduño-Mejía, Jesús; Rosete-Aguilar, Martha; Román-Moreno, Carlos J.
2017-05-01
In this work we present the design and manufacture of a compact Shack-Hartmann wavefront sensor using a Raspberry Pi and a microlens array. The main goal of this sensor is to recover the wavefront of a laser beam and to characterize its spatial phase using a simple and compact Raspberry Pi with its embedded camera. The recovery algorithm, based on a modified version of the Southwell method, was written in Python, as was its user interface. Experimental results and reconstructed wavefronts are presented.
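The abstract names the Southwell method without giving details; the following Python sketch shows a standard zonal least-squares reconstruction on a Southwell grid (phase and slopes sampled at the same points), which is one plausible baseline for such a sensor. The uniform lenslet pitch and the dense solver are simplifying assumptions, not the authors' modified algorithm.

```python
import numpy as np

def southwell_reconstruct(sx, sy, pitch=1.0):
    """Zonal least-squares wavefront reconstruction on a Southwell grid.

    sx, sy: (n, n) slope maps derived from the lenslet spot shifts.
    Differences between neighbouring phase samples are matched to
    averaged slopes; the system is solved up to an arbitrary piston.
    """
    n = sx.shape[0]
    idx = np.arange(n * n).reshape(n, n)
    eqs = []
    for i in range(n):                  # x-direction difference equations
        for j in range(n - 1):
            eqs.append((idx[i, j + 1], idx[i, j],
                        pitch * 0.5 * (sx[i, j] + sx[i, j + 1])))
    for i in range(n - 1):              # y-direction difference equations
        for j in range(n):
            eqs.append((idx[i + 1, j], idx[i, j],
                        pitch * 0.5 * (sy[i, j] + sy[i + 1, j])))
    A = np.zeros((len(eqs), n * n))
    b = np.zeros(len(eqs))
    for k, (p, q, r) in enumerate(eqs):
        A[k, p], A[k, q], b[k] = 1.0, -1.0, r
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    phi -= phi.mean()                   # remove the unconstrained piston
    return phi.reshape(n, n)
```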
Santamaría, Beatriz; Laguna, María F.; López-Romero, David; Hernandez, Ana L.; Sanza, Francisco J.; Lavín, Álvaro; Casquel, Rafael; Maigler, María V.; Espinosa, Rocío L.; Holgado, Miguel
2017-01-01
A novel compact optical biochip based on a thin-layer nitrocellulose sensing surface is used for in-situ label-free detection of metalloproteinase (MMP9) related to dry eye disease. In this article, a new integrated chip with a different interferometric transducer layout and an optimal sensing surface is reported for the first time. We demonstrate that specific antibodies can be immobilized onto these transducers with a very low sample volume and with good orientation. The presented biochip comprises many sensing transducers in order to yield statistical data and stable measurements. As a result, we report the recognition curve for pure recombinant MMP9, tests of model tears with MMP9, and real tear performance from patients, with a promising limit of detection. PMID:28534808
Behavior of Compact Toroid Injected into C-2U Confinement Vessel
NASA Astrophysics Data System (ADS)
Matsumoto, Tadafumi; Roche, T.; Allrey, I.; Sekiguchi, J.; Asai, T.; Conroy, M.; Gota, H.; Granstedt, E.; Hooper, C.; Kinley, J.; Valentine, T.; Waggoner, W.; Binderbauer, M.; Tajima, T.; the TAE Team
2016-10-01
The compact toroid (CT) injector system has been developed for particle refueling on the C-2U device. A CT is formed by a magnetized coaxial plasma gun (MCPG), and the typical ejected CT/plasmoid parameters are as follows: average velocity 100 km/s, average electron density 1.9 × 10¹⁵ cm⁻³, electron temperature 30-40 eV, mass 12 μg. To refuel particles into the FRC plasma, the CT must penetrate the transverse magnetic field that surrounds the FRC. The kinetic energy density of the CT should be higher than the magnetic energy density of the axial magnetic field, i.e., ρv²/2 ≥ B²/2μ₀, where ρ, v, and B are the mass density, velocity, and surrounding magnetic field, respectively. In addition, the penetrating CT's trajectory is deflected by the transverse magnetic field (Bz ≈ 1 kG). Thus, we have to estimate the CT's energy and track the CT trajectory inside the magnetic field, for which we adopted a fast-framing camera on C-2U with a framing rate of up to 1.25 MHz for 120 frames. Using the camera we clearly captured the CT/plasmoid trajectory. Comparisons between the fast-framing camera and other diagnostics, as well as CT injection results on C-2U, will be presented.
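As a quick plausibility check of the penetration criterion quoted above, the Python snippet below compares the CT kinetic energy density against the field energy density using the stated parameters. Converting the quoted electron density to a mass density via a hydrogen ion mass is our assumption, not a detail from the abstract.

```python
from math import pi

MU0 = 4e-7 * pi          # vacuum permeability [H/m]
M_H = 1.67e-27           # hydrogen ion mass [kg] (species assumed)

n_e = 1.9e15 * 1e6       # electron density, cm^-3 -> m^-3
v = 100e3                # CT velocity [m/s]
B = 0.1                  # transverse field, 1 kG -> 0.1 T

rho = n_e * M_H                      # mass density [kg/m^3]
kinetic = 0.5 * rho * v**2           # CT kinetic energy density [J/m^3]
magnetic = B**2 / (2 * MU0)          # field energy density [J/m^3]
print(kinetic, magnetic, kinetic >= magnetic)  # ~1.6e4 vs ~4.0e3 -> True
```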
NASA Astrophysics Data System (ADS)
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo
2018-06-01
We developed a pinhole-type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low-dose-rate fields. In order to improve the sensitivity of the pinhole-type semiconductor gamma camera, we adopted three methods: a signal-processing method that allows a lower discrimination level, a high-sensitivity pinhole collimator, and a smoothing image filter that improves the efficiency of source identification. We tested the basic performance of the developed gamma camera and carefully examined the effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high-dose-rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low-dose-rate field test, and errors of measured dose rates with point sources were less than 7% in a dose-rate accuracy test.
NASA Astrophysics Data System (ADS)
Ratzloff, Jeff; Law, Nicholas M.; Fors, Octavi; Wulfken, Philip J.
2015-01-01
We designed, tested, prototyped, and built a compact 27-camera robotic telescope that images 10,000 square degrees in 2-minute exposures. We exploit mass-produced interline CCD cameras with Rokinon consumer lenses to economically build a telescope that covers this large part of the sky simultaneously with good enough pixel sampling to avoid the confusion limit over most of the sky. We developed the initial concept into a 3D mechanical design with the aid of computer modeling programs. Significant design components include the camera assembly-mounting modules, the hemispherical support structure, and the instrument base structure. We simulated flexure and material stress in each of the three main components, which helped us optimize the rigidity and materials selection while reducing weight. The camera mounts are CNC aluminum and the support shell is reinforced fiberglass. Other significant project components include optimizing camera locations, camera alignment, thermal analysis, environmental sealing, wind protection, and ease of access to internal components. The Evryscope will be assembled at UNC Chapel Hill and deployed to CTIO in 2015.
Two opsins from the compound eye of the crab Hemigrapsus sanguineus
Sakamoto; Hisatomi; Tokunaga; Eguchi
1996-01-01
The primary structures of two opsins from the brachyuran crab Hemigrapsus sanguineus were deduced from the cDNA nucleotide sequences. Both deduced proteins were composed of 377 amino acid residues and included residues highly conserved in visual pigments of other species, and the proteins were 75 % identical to each other. The distribution of opsin transcripts in the compound eye, determined by in situ hybridization, suggested that the mRNAs of the two opsins were expressed simultaneously in all of the seven retinular cells (R1-R7) forming the main rhabdom in each ommatidium. Two different visual pigments may be present in one photoreceptor cell in this brachyuran crab. The spectral sensitivity of the compound eye was also determined by recording the electroretinogram. The compound eye was maximally sensitive at about 480 nm. These and previous findings suggest that both opsins of this brachyuran crab produce visual pigments with maximal absorption in the blue-green region of the spectrum. Evidence is presented that crustaceans possess multiple pigment systems for vision.
Computerized lateral-shear interferometer
NASA Astrophysics Data System (ADS)
Hasegan, Sorin A.; Jianu, Angela; Vlad, Valentin I.
1998-07-01
A lateral-shear interferometer, coupled with a computer for laser wavefront analysis, is described. A CCD camera is used to transfer the fringe images through a frame-grabber into a PC. 3D phase maps are obtained by fringe pattern processing using a new algorithm for direct spatial reconstruction of the optical phase. The program describes phase maps by Zernike polynomials yielding an analytical description of the wavefront aberration. A compact lateral-shear interferometer has been built using a laser diode as light source, a CCD camera and a rechargeable battery supply, which allows measurements in-situ, if necessary.
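The final step described above, expressing a phase map as Zernike coefficients, is a linear least-squares fit. The Python sketch below fits a handful of low-order, non-orthonormalized Zernike-style terms; the basis size and normalization are our assumptions, and a production analysis would use a fuller, properly normalized basis.

```python
import numpy as np

def zernike_fit(phase, mask=None):
    """Fit piston, tip/tilt, defocus and astigmatism to a phase map.

    `phase` is a square 2-D array sampled on the unit pupil; returns
    the least-squares coefficients of a small Zernike-style basis,
    giving an analytical description of the wavefront aberration.
    """
    n = phase.shape[0]
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    pupil = r2 <= 1.0 if mask is None else mask
    basis = [np.ones_like(x),         # piston
             x, y,                    # tip, tilt
             2 * r2 - 1,              # defocus
             x**2 - y**2, 2 * x * y]  # astigmatism terms
    A = np.stack([b[pupil] for b in basis], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, phase[pupil], rcond=None)
    return coeffs
```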
NASA Astrophysics Data System (ADS)
Duparré, Jacques; Wippermann, Frank; Dannberg, Peter; Schreiber, Peter; Bräuer, Andreas; Völkel, Reinhard; Scharf, Toralf
2005-09-01
Two novel objective types based on artificial compound eyes are examined. Both imaging systems are well suited for fabrication using microoptics technology because of the small lens sags required. In the apposition optics, a microlens array (MLA) and a photodetector array of different pitch in its focal plane are used. The image reconstruction is based on moiré magnification. Several generations of demonstrators of this objective type were manufactured by photolithographic processes, including a system with opaque walls between adjacent channels and an objective applied directly onto a CMOS detector array. The cluster eye approach, based on a mixture of superposition compound eyes and the vision system of jumping spiders, produces a regular image. Here, three microlens arrays of different pitch form arrays of Keplerian microtelescopes with tilted optical axes, including a field lens. The microlens arrays of this demonstrator are also fabricated using microoptics technology, and aperture arrays are applied. Subsequently, the lens arrays are stacked into the overall microoptical system on wafer scale. Both fabricated types of artificial compound eye imaging systems are experimentally characterized with respect to resolution, sensitivity, and crosstalk between adjacent channels. Captured images are presented.
Posnien, Nico; Hopfen, Corinna; Hilbrant, Maarten; Ramos-Womack, Margarita; Murat, Sophie; Schönauer, Anna; Herbert, Samantha L; Nunes, Maria D S; Arif, Saad; Breuker, Casper J; Schlötterer, Christian; Mitteroecker, Philipp; McGregor, Alistair P
2012-01-01
A striking diversity of compound eye size and shape has evolved among insects. The number of ommatidia and their size are major determinants of the visual sensitivity and acuity of the compound eye. Each ommatidium is composed of eight photoreceptor cells that facilitate the discrimination of different colours via the expression of various light sensitive Rhodopsin proteins. It follows that variation in eye size, shape, and opsin composition is likely to directly influence vision. We analyzed variation in these three traits in D. melanogaster, D. simulans and D. mauritiana. We show that D. mauritiana generally has larger eyes than its sibling species, which is due to a combination of larger ommatidia and more ommatidia. In addition, intra- and inter-specific differences in eye size among D. simulans and D. melanogaster strains are mainly caused by variation in ommatidia number. By applying a geometric morphometrics approach to assess whether the formation of larger eyes influences other parts of the head capsule, we found that an increase in eye size is associated with a reduction in the adjacent face cuticle. Our shape analysis also demonstrates that D. mauritiana eyes are specifically enlarged in the dorsal region. Intriguingly, this dorsal enlargement is associated with enhanced expression of rhodopsin 3 in D. mauritiana. In summary, our data suggests that the morphology and functional properties of the compound eyes vary considerably within and among these closely related Drosophila species and may be part of coordinated morphological changes affecting the head capsule.
Omnidirectional Underwater Camera Design and Calibration
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David
2015-01-01
This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray-tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
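The per-interface refraction step at the heart of such a ray-tracing FOV simulator can be sketched in a few lines of Python. The vector form of Snell's law below, and the air/acrylic indices in the example, are textbook values rather than details taken from the paper.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit ray direction `d` at a surface with unit normal `n`
    (pointing into medium 1), going from index n1 to n2.

    Returns the refracted unit direction, or None on total internal
    reflection. A housing/port simulator chains this step across the
    air -> port-material -> water interfaces.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                    # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# air -> flat acrylic port, 30 degrees off normal (indices assumed)
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
print(refract(d, np.array([0.0, 0.0, 1.0]), 1.000, 1.491))
```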
Botanical Compounds: Effects on Major Eye Diseases
Huynh, Tuan-Phat; Mann, Shivani N.; Mandal, Nawajes A.
2013-01-01
Botanical compounds have been widely used throughout history as cures for various diseases and ailments. Many of these compounds exhibit strong antioxidative, anti-inflammatory, and antiapoptotic properties. These are also common damaging mechanisms apparent in several ocular diseases, including age-related macular degeneration (AMD), glaucoma, diabetic retinopathy, cataract, and retinitis pigmentosa. In recent years, there have been many epidemiological and clinical studies that have demonstrated the beneficial effects of plant-derived compounds, such as curcumin, lutein and zeaxanthin, danshen, ginseng, and many more, on these ocular pathologies. Studies in cell cultures and animal models showed promising results for their uses in eye diseases. While there are many apparent significant correlations, further investigation is needed to uncover the mechanistic pathways of these botanical compounds in order to reach widespread pharmaceutical use and provide noninvasive alternatives for prevention and treatments of the major eye diseases. PMID:23843879
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer offers excellent versatility and expandability, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel, and the stabilized platform; acquires image and POS data; and stores them. Peripheral devices can be connected through the ports of the embedded computer, which simplifies system operation and management of the stored image data. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expandability. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
Bhadri, Prashant R; Rowley, Adrian P; Khurana, Rahul N; Deboer, Charles M; Kerns, Ralph M; Chong, Lawrence P; Humayun, Mark S
2007-05-01
To evaluate the effectiveness of a prototype stereoscopic camera-based viewing system (Digital Microsurgical Workstation, three-dimensional (3D) Vision Systems, Irvine, California, USA) for anterior and posterior segment ophthalmic surgery. Institution-based prospective study. Anterior and posterior segment surgeons performed designated standardized tasks on porcine eyes after training on prosthetic plastic eyes. Both anterior and posterior segment surgeons were able to complete tasks requiring minimal or moderate stereoscopic viewing. The results indicate that the system provides improved ergonomics. Improvements in key viewing performance areas would further enhance its value over a conventional operating microscope. The performance of the prototype system is not on par with the planned commercial system. With continued development of this technology, the three-dimensional system may become a novel viewing system in ophthalmic surgery with improved ergonomics relative to traditional microscopic viewing.
Photodetectors for the Advanced Gamma-ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Wagner, Robert G.; Advanced Gamma-ray Imaging System AGIS Collaboration
2010-03-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation very high energy gamma-ray observatory. Design goals include an order of magnitude better sensitivity, better angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. Given the scale of AGIS, the camera must be reliable and cost effective. The Schwarzschild-Couder optical design yields a smaller plate scale than present-day Cherenkov telescopes, enabling the use of more compact, multi-pixel devices, including multianode photomultipliers or Geiger avalanche photodiodes. We present the conceptual design of the focal plane for the camera and results from testing candidate focal plane sensors.
Juhasz, Barbara J
2016-11-14
Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.
Studying the Variability of Bright Stars with the CONCAM Sky Monitoring Network
NASA Astrophysics Data System (ADS)
Pereira, W. E.; Nemiroff, R. J.; Rafert, J. B.; Perez-Ramirez, D.
2001-12-01
CONCAMs have now been deployed at some of the world's major observatories, including KPNO in Arizona, Mauna Kea in Hawaii, and the Wise Observatory in Israel. Data from these mobile, inexpensive, and continuous sky cameras, each consisting of a fish-eye lens mated to a CCD camera and run by a laptop, have been ever-increasing. Initial efforts to carry out photometric analysis of CONCAM FITS images have now been fortified by a more automated technique for analyzing these data. Results of such analyses, in particular the variability of several bright stars, are presented, as well as the use of these cameras as cloud monitors for remote observers.
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to bring the scalp into focus. A major drawback of the system was the resolution of the resulting pictures, which was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
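The post-capture refocusing that such software performs is commonly implemented as a shift-and-sum over sub-aperture views; the Python sketch below shows that standard principle, not the vendor's actual algorithm. The integer-pixel shifts and the alpha parameterization are simplifications.

```python
import numpy as np

def refocus(views, alpha):
    """Synthetic refocus by shift-and-sum of sub-aperture images.

    `views` is a (u, v, h, w) array of sub-aperture views extracted
    from the light-field raw data; `alpha` sets the refocus depth by
    scaling the per-view shift. Production software uses subpixel
    resampling, but the principle is the same.
    """
    nu, nv, h, w = views.shape
    cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (nu * nv)
```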
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.
Lexical Processes in the Recognition of Japanese Horizontal and Vertical Compounds
ERIC Educational Resources Information Center
Miwa, Koji; Dijkstra, Ton
2017-01-01
This lexical decision eye-tracking study investigated whether horizontal and vertical readings elicit comparable behavioral patterns and whether reading directions modulate lexical processes. Response times and eye movements were recorded during a lexical decision task with Japanese bimorphemic compound words presented vertically. The data were…
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. In order to achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images will be presented showing images with a very low visibility which can be easily read by a typical cell phone camera.
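A minimal embedding sketch in Python, under assumptions the abstract does not specify: the mark is a tiled ±1 pattern added at low amplitude to the Cb/Cr channels of a Pillow image, with opposite signs in the two channels. The function name, pattern choice, and strength are hypothetical illustrations of the chrominance idea, not the authors' actual watermarking scheme.

```python
import numpy as np
from PIL import Image

def embed_chroma_watermark(img, pattern, strength=3):
    """Add a low-amplitude watermark to the chroma channels only.

    `pattern` is a +/-1 array at the (coarse) watermark resolution,
    tiled to the image size; modulating Cb/Cr instead of luminance
    exploits the eye's lower sensitivity to chrominance changes.
    """
    ycbcr = np.asarray(img.convert("YCbCr"), dtype=float)
    reps = (img.height // pattern.shape[0] + 1,
            img.width // pattern.shape[1] + 1)
    tiled = np.tile(pattern, reps)[:img.height, :img.width]
    ycbcr[..., 1] += strength * tiled      # Cb channel
    ycbcr[..., 2] -= strength * tiled      # Cr channel, opposite sign
    out = np.clip(ycbcr, 0, 255).astype(np.uint8)
    return Image.fromarray(out, mode="YCbCr").convert("RGB")
```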
Making 3D movies of Northern Lights
NASA Astrophysics Data System (ADS)
Hivon, Eric; Mouette, Jean; Legault, Thierry
2017-10-01
We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d
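A first-order version of the optical mapping used in such astrometric corrections is the equidistant fish-eye model; the Python helper below maps a pixel to horizon coordinates under that assumption. The real processing described above fits a distortion model to the visible stars rather than assuming an ideal projection.

```python
import numpy as np

def pixel_to_altaz(px, py, cx, cy, pixels_per_degree):
    """Map a fish-eye pixel to (altitude, azimuth), equidistant model.

    Assumes the ideal r = f * theta projection with the zenith at
    (cx, cy); an astrometric fit against star positions replaces
    this with a measured distortion model.
    """
    dx, dy = px - cx, py - cy
    r = np.hypot(dx, dy)
    zenith_angle = r / pixels_per_degree              # degrees off zenith
    altitude = 90.0 - zenith_angle
    azimuth = np.degrees(np.arctan2(dx, -dy)) % 360.0  # from north, CW
    return altitude, azimuth
```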
The visual system of the Australian 'Redeye' cicada (Psaltoda moerens).
Ribi, Willi A; Zeil, Jochen
2015-11-01
We investigated the functional anatomy of the visual system in the Australian 'Redeye' cicada Psaltoda moerens, including compound eyes and ocelli. The compound eyes have large visual fields, about 7500 ommatidia per eye and binocular overlaps of 10-15° in the frontal and of 50-60° in the dorsal visual field. The diameters of corneal facet lenses range between 22 and 34 μm and the lenses are unusually long with up to 100 μm in some eye regions. In the posterior part of the eyes, the hexagonal facet array changes to a square lattice. The compound eyes are of the eucone apposition type with 8 retinular cells contributing to a fused rhabdom in each ommatidium. The red eye colour is due to the pigment granules in the secondary pigment cells. We found a small Dorsal Rim Area (DRA), in which rhabdom cross-sections are rectangular rather than round. The cross-sections of DRA rhabdoms do not systematically change orientation along the length of the rhabdom, indicating that microvilli directions do not twist, which would make retinular cells in the DRA polarization sensitive. The three ocelli have unusual lenses with a champagne-cork shape in longitudinal sections. Retinular cells are short in the dorsal and ventral part of the retinae, and long in their equatorial part. Ocellar rhabdoms are short (<10 μm), positioned close to the corneagenous layer and are formed by pairs of retinular cells. In cross-section, the rhabdomeres are 2-5 μm long and straight. The red colour of ocelli is produced by screening pigments that form an iris around the base of the ocellar lens and by screening pigments between the ocellar retinular cells. We discuss the organization of the compound eye rhabdom, the organization of the ocelli and the presence of a DRA in the light of what is known about Hemipteran compound eyes. We note in particular that cicadas are the only Hemipteran group with fused rhabdoms, thus making Hemiptera an interesting case to study the evolution of open rhabdoms and neural superposition. Copyright © 2015 Elsevier Ltd. All rights reserved.
Holm, René; Borkenfelt, Simon; Allesø, Morten; Andersen, Jens Enevold Thaulov; Beato, Stefania; Holm, Per
2016-02-10
The wettability of compounds is critical for a number of central processes, including disintegration, dispersion, solubilisation, and dissolution. It is therefore an important optimisation parameter in drug discovery and also a guide for formulation selection and optimisation. The wettability of a compound is determined by its contact angle with a liquid, which in the present study was measured using the sessile drop method applied to a disc compact of the compound. Precise determination of the contact angle is important if it is to be used either to rank compounds or to select excipients to, e.g., increase the wetting from a solid dosage form. Since the surface roughness of the compact has been suggested to influence the measurement, this study investigated whether the surface quality, in terms of surface porosity, had an influence on the measured contact angle. A correlation to surface porosity was observed; however, for six out of seven compounds similar results were obtained by applying a standard pressure (866 MPa) to the discs in their preparation. The data presented in the present work therefore suggest that a constant high pressure should be sufficient for most compounds when determining the contact angle. Only in special cases where compounds have poor compressibility would there be a need for a surface-quality-control step before the contact angle determination. Copyright © 2015 Elsevier B.V. All rights reserved.
The MVACS Surface Stereo Imager on Mars Polar Lander
NASA Astrophysics Data System (ADS)
Smith, P. H.; Reynolds, R.; Weinberg, J.; Friedman, T.; Lemmon, M. T.; Tanner, R.; Reid, R. J.; Marcialis, R. L.; Bos, B. J.; Oquest, C.; Keller, H. U.; Markiewicz, W. J.; Kramm, R.; Gliem, F.; Rueffer, P.
2001-08-01
The Surface Stereo Imager (SSI), a stereoscopic, multispectral camera on the Mars Polar Lander, is described in terms of its capabilities for studying the Martian polar environment. The camera's two eyes, separated by 15.0 cm, provide the camera with range-finding ability. Each eye illuminates half of a single CCD detector with a field of view of 13.8° high by 14.3° wide and has 12 selectable filters between 440 and 1000 nm.
QUANTITATIVE DETECTION OF ENVIRONMENTALLY IMPORTANT DYES USING DIODE LASER/FIBER-OPTIC RAMAN
A compact diode laser/fiber-optic Raman spectrometer is used for quantitative detection of environmentally important dyes. This system is based on diode laser excitation at 782 nm, fiber-optic probe technology, an imaging spectrometer, and a state-of-the-art scientific CCD camera. ...
Curiosity on Tilt Table with Mast Up
2011-03-25
The Mast Camera (Mastcam) on NASA's Mars rover Curiosity has two rectangular eyes near the top of the rover's remote sensing mast. This image shows Curiosity on a tilt table at NASA's Jet Propulsion Laboratory, Pasadena, California.
Escaping compound eye ancestry: the evolution of single-chamber eyes in holometabolous larvae.
Buschbeck, Elke K
2014-08-15
Stemmata, the eyes of holometabolous insect larvae, have gained little attention, even though they exhibit remarkably different optical solutions, ranging from compound eyes with upright images, to sophisticated single-chamber eyes with inverted images. Such optical differences raise the question of how major transitions may have occurred. Stemmata evolved from compound eye ancestry, and optical differences are apparent even in some of the simplest systems that share strong cellular homology with adult ommatidia. The transition to sophisticated single-chamber eyes occurred many times independently, and in at least two different ways: through the fusion of many ommatidia [as in the sawfly (Hymenoptera)], and through the expansion of single ommatidia [as in tiger beetles (Coleoptera), antlions (Neuroptera) and dobsonflies (Megaloptera)]. Although ommatidia-like units frequently have multiple photoreceptor layers (tiers), sophisticated image-forming stemmata tend to only have one photoreceptor tier, presumably a consequence of the lens only being able to efficiently focus light on to one photoreceptor layer. An interesting exception is found in some diving beetles [Dytiscidae (Coleoptera)], in which two retinas receive sharp images from a bifocal lens. Taken together, stemmata represent a great model system to study an impressive set of optical solutions that evolved from a relatively simple ancestral organization. © 2014. Published by The Company of Biologists Ltd.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.
Fischer, William S.; Wall, Michael; McDermott, Michael P.; Kupersmith, Mark J.; Feldon, Steven E.
2015-01-01
Purpose. To describe the methods used by the Photographic Reading Center (PRC) of the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT) and to report baseline assessments of papilledema severity in participants. Methods. Stereoscopic digital images centered on the optic disc and the macula were collected by certified personnel using certified photographic equipment. Certification of the camera system included standardization and calibration using a model eye. Lay readers assessed disc photos of all eyes using the Frisén grade and performed quantitative measurements of papilledema. Frisén grades by the PRC were compared with site investigator clinical grades. Spearman rank correlations were used to quantify associations among disc features and selected clinical variables. Results. Frisén grades according to the PRC and site investigators' grades matched exactly in 48% of the study eyes and 42% of the fellow eyes, and within one grade in 94% of the study eyes and 92% of the fellow eyes. Frisén grade was strongly correlated (r > 0.65, P < 0.0001) with quantitative measures of disc area. Cerebrospinal fluid pressure was weakly associated with Frisén grade and disc area determinations (r ≤ 0.31). Neither Frisén grade nor any fundus feature was associated with perimetric mean deviation. Conclusions. In a prospective clinical trial, lay readers agreed reasonably well with physicians in assessing Frisén grade. Standardization of camera systems enhanced consistency of photographic quality across study sites. Images were affected more by sensors with poor dynamic range than by poor resolution. Frisén grade is highly correlated with quantitative assessment of disc area. (ClinicalTrials.gov number, NCT01003639.) PMID:26024112
Chang, Won-Du; Cha, Ho-Seung; Im, Chang-Hwan
2016-01-01
This paper introduces a method to remove the unwanted interdependency between vertical and horizontal eye-movement components in electrooculograms (EOGs). EOGs have been widely used to estimate eye movements without a camera in a variety of human-computer interaction (HCI) applications using pairs of electrodes generally attached either above and below the eye (vertical EOG) or to the left and right of the eyes (horizontal EOG). It has been well documented that the vertical EOG component has less stability than the horizontal EOG one, making accurate estimation of the vertical location of the eyes difficult. To address this issue, an experiment was designed in which ten subjects participated. Visual inspection of the recorded EOG signals showed that the vertical EOG component is highly influenced by horizontal eye movements, whereas the horizontal EOG is rarely affected by vertical eye movements. Moreover, the results showed that this interdependency could be effectively removed by introducing an individual constant value. It is therefore expected that the proposed method can enhance the overall performance of practical EOG-based eye-tracking systems. PMID:26907271
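The paper's exact formulation is not given in the abstract; the sketch below shows one plausible reading, assuming the individual constant k scales the horizontal channel and is fitted by least squares on calibration data recorded during purely horizontal eye movements.

```python
import numpy as np

def estimate_crosstalk_constant(v_eog: np.ndarray, h_eog: np.ndarray) -> float:
    """Least-squares estimate of a per-subject constant k mapping the
    horizontal EOG into the vertical channel; the calibration recordings
    are assumed to contain purely horizontal eye movements."""
    return float(np.dot(h_eog, v_eog) / np.dot(h_eog, h_eog))

def remove_crosstalk(v_eog: np.ndarray, h_eog: np.ndarray, k: float) -> np.ndarray:
    """Subtract the horizontally induced component from the vertical EOG."""
    return v_eog - k * h_eog
```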
NASA Astrophysics Data System (ADS)
Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.
2007-02-01
The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers smaller than 1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which demonstrates superior imaging performance with respect to the dedicated scintillation γ-camera previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal was chosen to increase detection efficiency. In this paper, we propose a comparison and evaluation between the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them into clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast, with more detailed imaging of the uptake area, than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera, the signal-to-noise ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.
The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation
NASA Astrophysics Data System (ADS)
Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team
2002-12-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024×2048 Mitel frame transfer CCD detector arrays, each having a 1024×1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16° × 16° per eye. The two eyes are separated by 30 cm horizontally and have a 1° toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55°C to +5°C. An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
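As a quick consistency check of the quoted optics, the sketch below recovers the stated IFOV and per-eye FOV from the focal length; the 12 μm pixel pitch is an assumption chosen to match the quoted 0.28 mrad/pixel, not a figure from the abstract.

```python
import math

focal_length_mm = 42.0   # stated effective focal length
active_pixels = 1024     # active imaging area per axis, per eye
pixel_pitch_um = 12.0    # assumed pitch, consistent with the quoted IFOV

# pitch(mm) / f(mm) gives the per-pixel angle in radians; convert to mrad
ifov_mrad = (pixel_pitch_um * 1e-3) / focal_length_mm * 1e3
fov_deg = math.degrees(active_pixels * ifov_mrad * 1e-3)  # whole-array FOV
print(f"IFOV = {ifov_mrad:.2f} mrad/pixel, FOV = {fov_deg:.1f} deg")
# -> IFOV = 0.29 mrad/pixel, FOV = 16.8 deg (close to 0.28 mrad and ~16 deg)
```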
Dynamic Light Scattering Developed to Look Through the Eye's Window Into the Body
NASA Technical Reports Server (NTRS)
Stauber, Laurel J.
2001-01-01
Microgravity researcher Dr. Rafat R. Ansari, from the NASA Glenn Research Center, has found that the eye operates much like a camera and is the "window to the body." The eye contains transparent tissue through which light passes, providing us a view of what's going on inside. These transparent tissues represent nearly every tissue type that exists throughout the body. With the correlations and comparisons of these tissues done at Glenn, we hope to improve doctors' ability to diagnose diseases at much earlier stages. The medical community will be able to look noninvasively and quantitatively into a patient's eyes to detect disease before symptoms appear. Since the eye is easily accessed by light, the optical technologies created at Glenn can be used to evaluate its structure and physiology in health, aging, and disease.
The Twin Peaks in 3-D, as Viewed by the Mars Pathfinder IMP Camera
NASA Technical Reports Server (NTRS)
1997-01-01
The Twin Peaks are modest-size hills to the southwest of the Mars Pathfinder landing site. They were discovered on the first panoramas taken by the IMP camera on the 4th of July, 1997, and subsequently identified in Viking Orbiter images taken over 20 years ago. The peaks are approximately 30-35 meters (~100 feet) tall. North Twin is approximately 860 meters (2800 feet) from the lander, and South Twin is about a kilometer away (3300 feet). The scene includes bouldery ridges and swales or 'hummocks' of flood debris that range from a few tens of meters away from the lander to the distance of the South Twin Peak. The large rock at the right edge of the scene is nicknamed 'Hippo'. This rock is about a meter (3 feet) across and 25 meters (80 feet) distant.
This view of the Twin Peaks was produced by combining 4 individual 'Superpan' scenes from the left and right eyes of the IMP camera to cover both peaks. Each frame consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. The anaglyph view of the Twin Peaks was produced by combining the left and right eye mosaics (above) by assigning the left eye view to the red color plane and the right eye view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech). The IMP was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.
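The red-cyan anaglyph construction described above maps directly to a few lines of array code; a minimal numpy sketch, assuming the two mosaics are already registered HxWx3 uint8 arrays:

```python
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Build a red-cyan anaglyph as described: the left-eye view feeds the
    red plane, the right-eye view feeds the green and blue (cyan) planes."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # red channel from the left eye
    anaglyph[..., 1] = right_rgb[..., 1]   # green channel from the right eye
    anaglyph[..., 2] = right_rgb[..., 2]   # blue channel from the right eye
    return anaglyph
```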
Loss of the six3/6 controlling pathways might have resulted in pinhole-eye evolution in Nautilus
Ogura, Atsushi; Yoshida, Masa-aki; Moritaki, Takeya; Okuda, Yuki; Sese, Jun; Shimizu, Kentaro K.; Sousounis, Konstantinos; Tsonis, Panagiotis A.
2013-01-01
Coleoid cephalopods have an elaborate camera eye whereas nautiloids have primitive pinhole eye without lens and cornea. The Nautilus pinhole eye provides a unique example to explore the module of lens formation and its evolutionary mechanism. Here, we conducted an RNA-seq study of developing eyes of Nautilus and pygmy squid. First, we found that evolutionary distances from the common ancestor to Nautilus or squid are almost the same. Although most upstream eye development controlling genes were expressed in both species, six3/6 that are required for lens formation in vertebrates was not expressed in Nautilus. Furthermore, many downstream target genes of six3/6 including crystallin genes and other lens protein related genes were not expressed in Nautilus. As six3/6 and its controlling pathways are widely conserved among molluscs other than Nautilus, the present data suggest that deregulation of the six3/6 pathway led to the pinhole eye evolution in Nautilus. PMID:23478590
3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.
Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert
2011-01-01
A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye affects 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye is proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system was developed for pose estimation, combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and the pose estimation of the eye were validated in a set of experiments. Overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.
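The reconstruction step, transforming ultrasound image points into the examined eye's coordinate system, amounts to chaining homogeneous transforms; a minimal sketch under assumed notation (the matrix names and factoring are illustrative, not the paper's):

```python
import numpy as np

def to_eye_frame(points_img: np.ndarray, T_probe_eye: np.ndarray,
                 T_img_probe: np.ndarray) -> np.ndarray:
    """Map Nx3 ultrasound-image points into the examined eye's frame by
    chaining an image-to-probe calibration with the probe pose inferred
    from gaze tracking, both given as 4x4 homogeneous matrices."""
    homog = np.hstack([points_img, np.ones((points_img.shape[0], 1))])
    return (T_probe_eye @ T_img_probe @ homog.T).T[:, :3]
```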
Mack, Maura; Kowalski, Elizabeth; Grahn, Robert; Bras, Dineli; Penedo, Maria Cecilia T.; Bellone, Rebecca
2017-01-01
A unique eye color, called tiger-eye, segregates in the Puerto Rican Paso Fino (PRPF) horse breed and is characterized by a bright yellow, amber, or orange iris. Pedigree analysis identified a simple autosomal recessive mode of inheritance for this trait. A genome-wide association study (GWAS) with 24 individuals identified a locus on ECA 1 reaching genome-wide significance (Pcorrected = 1.32 × 10^-5). This ECA1 locus harbors the candidate gene, Solute Carrier Family 24 (Sodium/Potassium/Calcium Exchanger), Member 5 (SLC24A5), with known roles in pigmentation in humans, mice, and zebrafish. Humans with compound heterozygous mutations in SLC24A5 have oculocutaneous albinism (OCA) type 6 (OCA6), which is characterized by dilute skin, hair, and eye pigmentation, as well as ocular anomalies. Twenty tiger-eye horses were homozygous for a nonsynonymous mutation in exon 2 (p.Phe91Tyr) of SLC24A5 (called here Tiger-eye 1), which is predicted to be deleterious to protein function. Additionally, eight of the remaining 12 tiger-eye horses heterozygous for the p.Phe91Tyr variant were also heterozygous for a 628 bp deletion encompassing all of exon 7 of SLC24A5 (c.875-340_1081+82del), which we will call here the Tiger-eye 2 allele. None of the 122 brown-eyed horses were homozygous for either tiger-eye-associated allele or were compound heterozygotes. Further, neither variant was detected in 196 horses from four related breeds not known to have the tiger-eye phenotype. Here, we propose that two mutations in SLC24A5 affect iris pigmentation in tiger-eye PRPF horses. Further, unlike OCA6 in humans, the Tiger-eye 1 mutation in its homozygous state or as a compound heterozygote (Tiger-eye 1/Tiger-eye 2) does not appear to cause ocular anomalies or a change in coat color in the PRPF horse. PMID:28655738
Near infrared observations of S 155. Evidence of induced star formation?
NASA Astrophysics Data System (ADS)
Hunt, L. K.; Lisi, F.; Felli, M.; Tofani, G.
In order to investigate the possible existence of embedded objects of recent formation in the area of the Cepheus B - Sh2-155 interface, the authors have observed the region of the compact radio continuum source with the new near infrared camera ARNICA and the TIRGO telescope.
Arikawa, K; Morikawa, Y; Suzuki, T; Eguchi, E
1988-03-15
Under conditions of constant darkness, rhabdom volume and the amount of visual pigment chromophore show circadian changes in the compound eye of the crab Hemigrapsus sanguineus. The present results indicate that an intrinsic circadian biological clock is involved in the control of the changes.
Laparoscopic female sterilisation by a single port through monitor--a better alternative.
Sewta, Rajender Singh
2011-04-01
Female sterilisation by tubal occlusion with a laparocator is the most widely used and accepted family planning technique worldwide. Following the spread of monitor-guided laparoscopic surgery across surgical specialties, laparoscopic female sterilisation has come to be performed under monitor control through two ports: one for the laparoscope and a second for the ring applicator. The technique has been modified here to use a single port with a monitor, by fitting a camera on the eyepiece of the laparocator (the same laparocator that has long been used without a monitor in camps in India). In this study, conducted over a period of about 2 years, a total of 2,011 cases were operated upon, using the camera and monitor through a single port both to visualise the fallopian tubes and to apply the rings. The results were excellent, making this a better alternative to conventional laparoscopic sterilisation and to the camera-assisted double-puncture technique, which leaves two scars and requires an extra assistant. There were no failures, and the strain on the surgeon's eyes was minimal. The single-port approach is easier, safe, equally effective and better accepted.
2004-06-17
This 3-D image, taken by the left and right eyes of the panoramic camera on NASA's Mars Exploration Rover Spirit, shows the odd rock formation dubbed 'Cobra Hoods' (center). 3-D glasses are necessary to view this image.
Node 1 taken during Expedition 26
2010-11-26
ISS026-E-005318 (26 Nov. 2010) --- A fish-eye lens attached to an electronic still camera was used by an Expedition 26 crew member to capture this image of the Unity node of the International Space Station.
Node 1 taken during Expedition 26
2010-11-26
ISS026-E-005316 (26 Nov. 2010) --- A fish-eye lens attached to an electronic still camera was used by an Expedition 26 crew member to capture this image of the Unity node of the International Space Station.
Peteye detection and correction
NASA Astrophysics Data System (ADS)
Yen, Jonathan; Luo, Huitao; Tretter, Daniel
2007-01-01
Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.
Plenoptic Ophthalmoscopy: A Novel Imaging Technique.
Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason
2016-11-01
This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.
Robustness of an artificially tailored fisheye imaging system with a curvilinear image surface
NASA Astrophysics Data System (ADS)
Lee, Gil Ju; Nam, Won Il; Song, Young Min
2017-11-01
Curved image sensors inspired by animal and insect eyes have provided a new development direction for next-generation digital cameras. It is known that natural fish eyes afford extremely wide field-of-view (FOV) imaging due to the geometrical properties of the spherical lens and hemispherical retina. However, inherent drawbacks, such as low off-axis illumination and the fabrication difficulty of a 'dome-like' hemispherical imager, limit the development of bio-inspired wide-FOV cameras. Here, a new type of fisheye imaging system is introduced that has a simple lens configuration with a curvilinear image surface, while maintaining high off-axis illumination and a wide FOV. Moreover, through comparisons with commercial conventional fisheye designs, it is determined that the volume and required number of optical elements of the proposed design are practical while capturing the fundamental optical performance. Detailed design guidelines for tailoring the proposed optical system are also discussed.
Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware
NASA Astrophysics Data System (ADS)
Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe
We present a practical system prototype that convincingly restores eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the powerful computational resources inside graphics hardware, and maximizes arithmetic intensity to achieve better than real-time performance, up to 42 frames per second for 800×600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing for further algorithmic advancement without losing real-time capability.
NASA Astrophysics Data System (ADS)
Li, Hao; Liu, Wenzhong; Zhang, Hao F.
2015-10-01
Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating pathogenesis and therapeutic strategies. However, due to severe aberrations, retinal image quality in rodents can be much worse than in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that rat retinal image quality decreased with increasing illumination bandwidth. We achieved a retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus cameras for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.
An autonomous sensor module based on a legacy CCTV camera
NASA Astrophysics Data System (ADS)
Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.
2016-10-01
A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy closed-circuit television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open-source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera keeps the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
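The abstract names OpenCV but not the specific detector; one plausible choice is the stock HOG people detector, sketched below (the frame filename is hypothetical):

```python
import cv2

# Default HOG person detector shipped with OpenCV, as one example of the
# kind of open-source pedestrian detection such a module could run.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("ptz_frame.jpg")  # hypothetical frame grabbed from the PTZ camera
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    # Bounding boxes in image coordinates; mapping to real-world positions
    # would additionally use the camera pose and zoom calibration.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```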
Crew Earth Observations (CEO) by Expedition Five Crew
2002-09-16
ISS005-E-15375 (22 September 2002) --- This digital still camera picture, taken from the International Space Station (ISS) on September 22, 2002, shows the central eye of Hurricane Isidore. The eye became less defined as the hurricane stalled and weakened over the Yucatan Peninsula near Merida. Onboard the orbital outpost for the Expedition Five mission are cosmonauts Valery G. Korzun, commander, and Sergei Y. Treschev, flight engineer, both with Rosaviakosmos; and astronaut Peggy A. Whitson, flight engineer.
Fish-eye view of the STS-90 Columbia's payload bay with sunburst
1998-05-07
STS090-361-022 (17 April - 3 May 1998) --- A special lens on a 35mm camera gives a fish-eye effect to this out-the-window view from the Space Shuttle Columbia's cabin. The Spacelab Science Module, hosting 16-days of Neurolab research, is in frame center. This picture clearly depicts the configuration of the tunnel that leads from the cabin to the module in the center of the cargo bay.
Time for a Change; Spirit's View on Sol 1843 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11973 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11973 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, full-circle view of the rover's surroundings during the 1,843rd Martian day, or sol, of Spirit's surface mission (March 10, 2009). South is in the middle. North is at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 36 centimeters downhill earlier on Sol 1854, but had not been able to get free of ruts in soft material that had become an obstacle to getting around the northeastern corner of the low plateau called 'Home Plate.' The Sol 1854 drive, following two others in the preceding four sols that also achieved little progress in the soft ground, prompted the rover team to switch to a plan of getting around Home Plate counterclockwise, instead of clockwise. The drive direction in subsequent sols was westward past the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.
View Ahead After Spirit's Sol 1861 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11977 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11977 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce preparation time. A new, practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration process, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors in the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method could significantly improve its absolute positioning accuracy. The absolute positioning accuracy of the one-time registration method meets the requirements of clinical surgery.
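The abstract does not spell out how the camera-to-base relationship is solved; one standard way to register the marker positions seen by the camera against those computed from the joint parameters is a Kabsch/SVD rigid-transform fit, sketched here as an illustration rather than the paper's own algorithm:

```python
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t,
    from Nx3 corresponding points (e.g., marker positions in the camera
    frame vs. the robot base frame). Standard Kabsch/SVD solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```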
Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction
2011-01-01
... understand us. More specifically, the computer should be able to infer what we wish to see, do, and interact with through our movements, gestures, and ... in depth freedom. Our system differs from the majority of other systems in that we do not use infrared, stereo-cameras, specially-constructed ...
First experiences with ARNICA, the ARCETRI observatory imaging camera
NASA Astrophysics Data System (ADS)
Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.
1994-03-01
ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a common-use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256 × 256 pixels, 40 μm pixel side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity for an assigned observing time, along with some preliminary considerations on photometric accuracy.
Gamma-ray imaging system for real-time measurements in nuclear waste characterisation
NASA Astrophysics Data System (ADS)
Caballero, L.; Albiol Colomer, F.; Corbi Bellot, A.; Domingo-Pardo, C.; Leganés Nieto, J. L.; Agramunt Ros, J.; Contreras, P.; Monserrate, M.; Olleros Rodríguez, P.; Pérez Magán, D. L.
2018-03-01
A compact, portable and large field-of-view gamma camera that is able to identify, locate and quantify gamma-ray emitting radioisotopes in real time has been developed. The device delivers spectroscopic and imaging capabilities that enable its use in a variety of nuclear waste characterisation scenarios, such as radioactivity monitoring in nuclear power plants and, more specifically, the decommissioning of nuclear facilities. The technical development of this apparatus and some examples of its application in field measurements are reported in this article. The performance of the presented gamma camera is also benchmarked against other conventional techniques.
Spectral imaging spreads into new industrial and on-field applications
NASA Astrophysics Data System (ADS)
Bouyé, Clémentine; Robin, Thierry; d'Humières, Benoît
2018-02-01
Numerous recent innovative developments have led to a large reduction in the cost and size of hyperspectral and multispectral cameras. The resulting products - compact, reliable, low-cost, easy to use - meet end-user requirements in major fields: agriculture, food and beverages, pharmaceutics, machine vision, and health. The take-off of this technology in industrial and on-field applications is getting closer. Indeed, the spectral imaging market is at a turning point: a high growth rate of 20% is expected over the next 5 years, with the number of cameras sold increasing from 3,600 in 2017 to more than 9,000 in 2022.
Development of Digital SLR Camera: PENTAX K-7
NASA Astrophysics Data System (ADS)
Kawauchi, Hiraku
The DSLR "PENTAX K-7" comes with an easy-to-carry, minimal yet functional small form factor, a long inherited identities of the PENTAX brand. Nevertheless for its compact body, this camera has up-to-date enhanced fundamental features such as high-quality viewfinder, enhanced shutter mechanism, extended continuous shooting capabilities, reliable exposure control, and fine-tuned AF systems, as well as strings of newest technologies such as movie recording capability and automatic leveling function. The main focus of this article is to reveal the ideas behind the concept making of this product and its distinguished features.
Development of an Extra-vehicular (EVA) Infrared (IR) Camera Inspection System
NASA Technical Reports Server (NTRS)
Gazarik, Michael; Johnson, Dave; Kist, Ed; Novak, Frank; Antill, Charles; Haakenson, David; Howell, Patricia; Pandolf, John; Jenkins, Rusty; Yates, Rusty
2006-01-01
Designed to fulfill a critical inspection need for the Space Shuttle Program, the EVA IR Camera System can detect cracks and subsurface defects in the Reinforced Carbon-Carbon (RCC) sections of the Space Shuttle's Thermal Protection System (TPS). The EVA IR Camera performs this detection by taking advantage of the natural thermal gradients induced in the RCC by solar flux and thermal emission from the Earth. This instrument is a compact, low-mass, low-power solution (1.2 cm3, 1.5 kg, 5.0 W) for TPS inspection that exceeds existing requirements for feature detection. Taking advantage of ground-based IR thermography techniques, the EVA IR Camera System provides the Space Shuttle Program with a solution that can be accommodated by the existing inspection system. The EVA IR Camera System augments the visible and laser inspection systems and finds cracks and subsurface damage not measurable by the other sensors, and thus fills a critical gap in the Space Shuttle's inspection needs. This paper discusses the on-orbit RCC inspection measurement concept and requirements, and then presents a detailed description of the EVA IR Camera System design.
Motion camera based on a custom vision sensor and an FPGA architecture
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel
1998-09-01
A digital camera for custom focal plane arrays was developed. The camera allows the testing and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the address-event protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the address-event protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.
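A software analogue of the FPGA's time-of-travel stage can clarify the idea: each address-event carries a pixel location and a timestamp, and the interval since a neighbouring pixel's last event yields a local speed estimate. The pixel pitch below is an assumed value, and the handler is a simplification of the hardware pipeline.

```python
PIXEL_PITCH_UM = 10.0  # assumed pitch of the custom sensor (not given in the text)

last_event_time = {}   # (x, y) -> timestamp of the last edge event, in seconds

def on_edge_event(x: int, y: int, t: float):
    """Address-event handler: when a moving edge reaches (x, y), the
    time-of-travel from each neighbouring pixel that fired earlier gives
    a local speed estimate, mirroring the FPGA's second-stage computation."""
    speeds = []
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if (nx, ny) in last_event_time:
            dt = t - last_event_time[(nx, ny)]
            if dt > 0:
                speeds.append(PIXEL_PITCH_UM * 1e-6 / dt)  # metres per second
    last_event_time[(x, y)] = t
    return speeds
```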
Yoshida, M A; Ogura, A; Ikeo, K; Shigeno, S; Moritaki, T; Winters, G C; Kohn, A B; Moroz, L L
2015-12-01
Coleoid cephalopods show remarkable evolutionary convergence with vertebrates in their neural organization, including (1) eyes and visual system with optic lobes, (2) specialized parts of the brain controlling learning and memory, such as vertical lobes, and (3) unique vasculature supporting such complexity of the central nervous system. We performed deep sequencing of eye transcriptomes of pygmy squids (Idiosepius paradoxus) and chambered nautiluses (Nautilus pompilius) to decipher the molecular basis of convergent evolution in cephalopods. RNA-seq was complemented by in situ hybridization to localize the expression of selected genes. We found three types of genomic innovations in the evolution of complex brains: (1) recruitment of novel genes into morphogenetic pathways, (2) recombination of various coding and regulatory regions of different genes, often called "evolutionary tinkering" or "co-option", and (3) duplication and divergence of genes. Massive recruitment of novel genes occurred in the evolution of the "camera" eye from nautilus' "pinhole" eye. We also showed that the type-2 co-option of transcription factors played important roles in the evolution of the lens and visual neurons. In summary, the cephalopod convergent morphological evolution of the camera eyes was driven by a mosaic of all types of gene recruitments. In addition, our analysis revealed unexpected variations of squids' opsins, retinochromes, and arrestins, providing more detailed information, valuable for further research on intra-ocular and extra-ocular photoreception of the cephalopods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.
Do advertisements at the roadside distract the driver?
NASA Astrophysics Data System (ADS)
Kettwich, Carmen; Klinger, Karsten; Lemmer, Uli
2008-04-01
Nowadays drivers have to cope with an increasingly complex visual environment. More and more cars are on the road, and distractions exist not only within the vehicle, such as the radio and navigation system; the environment outside the car has also become more and more complex. Hoardings, advertising pillars, shop fronts and video screens are just a few examples. For this reason the potential risk of driver distraction is rising. But in what way do advertisements at the roadside influence the driver's attention? The investigation described here is devoted to this topic. Various kinds of advertisements were considered, including illuminated and non-illuminated posters as well as illuminated animated ads. Several test runs in an urban environment were performed. The driver's gaze direction was measured with an eye tracking system consisting of three cameras that logged the eye movements during the test run and a small scene camera recording the traffic scene. 16 subjects (six female and ten male) between 21 and 65 years of age took part in the experiment. In this way the driver's fixation durations on the different advertisements could be determined.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
Analysis of crystalline lens coloration using a black and white charge-coupled device camera.
Sakamoto, Y; Sasaki, K; Kojima, M
1994-01-01
To analyze lens coloration in vivo, we used a new type of Scheimpflug camera based on a black-and-white charge-coupled device (CCD) sensor, and a new methodology was proposed. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
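The study's correction used the measured filter transmittances and CCD sensitivity, which are not given in the abstract; the sketch below instead uses the standard sRGB-to-XYZ matrix to illustrate how three filtered grayscale exposures map to per-pixel CIE xy chromaticity.

```python
import numpy as np

# Standard sRGB (D65) linear-RGB -> CIE XYZ matrix, used here only as a
# stand-in for the study's instrument-specific correction.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def chromaticity(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Combine three grayscale exposures (taken through R, G, B filters)
    into CIE xy chromaticity coordinates per pixel."""
    rgb = np.stack([r, g, b], axis=-1).astype(float)
    xyz = rgb @ RGB_TO_XYZ.T
    s = xyz.sum(axis=-1, keepdims=True)
    return xyz[..., :2] / np.where(s == 0, 1, s)  # guard against black pixels
```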
Optical Verification Laboratory Demonstration System for High Security Identification Cards
NASA Technical Reports Server (NTRS)
Javidi, Bahram
1997-01-01
Document fraud, including unauthorized duplication of identification cards and credit cards, is a serious problem facing the government, banks, businesses, and consumers. In addition, counterfeit products such as computer chips and compact discs are arriving on our shores in great numbers. With the rapid advances in computers, CCD technology, image processing hardware and software, printers, scanners, and copiers, it is becoming increasingly easy to reproduce pictures, logos, symbols, paper currency, or patterns. These problems have stimulated an interest in research, development and publications in security technology. Some ID cards, credit cards and passports currently use holograms as a security measure to thwart copying. The holograms are inspected by the human eye. In theory, the hologram cannot be reproduced by an unauthorized person using commercially-available optical components; in practice, however, technology has advanced to the point where the holographic image can be acquired from a credit card (photographed or captured with a CCD camera) and a new hologram synthesized using commercially-available optical components or hologram-producing equipment. Therefore, a pattern that can be read by a conventional light source and a CCD camera can be reproduced. An optical security and anti-copying device that provides significant security improvements over existing security technology was demonstrated. The system can be applied for security verification of credit cards, passports, and other IDs so that they cannot easily be reproduced. We have used a new scheme of complex phase/amplitude patterns that cannot be seen and cannot be copied by an intensity-sensitive detector such as a CCD camera. A random phase mask is bonded to a primary identification pattern, which could also be phase encoded. The pattern could be a fingerprint, a picture of a face, or a signature. The proposed optical processing device is designed to identify both the random phase mask and the primary pattern [1-3]. We have demonstrated experimentally an optical processor for security verification of objects, products, and persons. This demonstration is very important to encourage industries to consider the proposed system for research and development.
Kelly in the Cupola Module during Expedition 26
2010-11-26
ISS026-E-005313 (26 Nov. 2010) --- A fish-eye lens attached to an electronic still camera was used to capture this image of NASA astronaut Scott Kelly, Expedition 26 commander, in the Cupola of the International Space Station.
Toward individually tunable compound eyes with transparent graphene electrode.
Shahini, Ali; Jin, Hai; Zhou, Zhixian; Zhao, Yang; Chen, Pai-Yen; Hua, Jing; Cheng, Mark Ming-Cheng
2017-06-08
We present tunable compound eyes made of ionic liquid lenses, of which both curvatures (R1 and R2 in the lensmaker's equation) can be individually changed using electrowetting on dielectric (EWOD) and applied pressure. Flexible graphene is used as a transparent electrode and is integrated on a flexible polydimethylsiloxane (PDMS)/parylene hybrid substrate. Graphene electrodes allow a large lens aperture diameter of between 2.4 mm and 2.74 mm. Spherical aberration analysis is performed using COMSOL to investigate the optical properties of the lens under applied voltage and pressure. The final lens system shows a resolution of 645.1 line pairs per millimeter. A prototype of a tunable lens array is proposed for the application of a compound eye.
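For reference, the lensmaker's equation named above relates the two tunable curvatures to the focal length; a minimal sketch with an assumed refractive index and hypothetical radii:

```python
def lensmaker_focal_length(n: float, r1_mm: float, r2_mm: float) -> float:
    """Thin-lens focal length from the lensmaker's equation,
    1/f = (n - 1) * (1/R1 - 1/R2), with the usual sign convention
    (R > 0 when the centre of curvature lies beyond the surface).
    n is the refractive index of the lens liquid (value assumed)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

# Hypothetical biconvex droplet lens: R1 = +3.0 mm, R2 = -3.0 mm, n = 1.45
print(f"f = {lensmaker_focal_length(1.45, 3.0, -3.0):.2f} mm")  # f = 3.33 mm
```

Tuning either curvature by EWOD or pressure then shifts f continuously, which is the mechanism the abstract describes.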
First results from the TOPSAT camera
NASA Astrophysics Data System (ADS)
Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve
2017-11-01
The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.
Basics of Sterile Compounding: Ophthalmic Preparations, Part 2: Suspensions and Ointments.
Allen, Loyd V
2016-01-01
Ophthalmic preparations are used to treat allergies, bacterial and viral infections, glaucoma, and numerous other eye conditions. When the eye's natural defensive mechanisms are compromised or overcome, an ophthalmic preparation, in a solution, suspension, or ointment form, may be indicated, with solutions being the most common form used to deliver a drug to the eye. This article discusses ophthalmic suspensions and ointments and represents part 2 of a 2-part article, the first of which discussed ophthalmic solutions. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
3D-printed eagle eye: Compound microlens system for foveated imaging
Thiele, Simon; Arzenbacher, Kathrin; Gissibl, Timo; Giessen, Harald; Herkommer, Alois M.
2017-01-01
We present a highly miniaturized camera, mimicking the natural vision of predators, by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an angular resolution that increases up to 2 cycles/deg in the center of the image. The footprint of the optics on the chip is below 300 μm × 300 μm, whereas their height is <200 μm. Because the four lenses are printed in one single step without the necessity for any further assembling or alignment, this approach allows for fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, or security. PMID:28246646
Hornets can fly at night without obvious adaptations of eyes and ocelli.
Kelber, Almut; Jonsson, Fredrik; Wallén, Rita; Warrant, Eric; Kornfeldt, Torill; Baird, Emily
2011-01-01
Hornets, the largest social wasps, have a reputation of being facultatively nocturnal. Here we confirm flight activity of hornet workers in dim twilight. We studied the eyes and ocelli of European hornets (Vespa crabro) and common wasps (Vespula vulgaris) with the goal of finding the optical and anatomical adaptations that enable them to fly in dim light. Adaptations described for obligately nocturnal Hymenoptera such as the bees Xylocopa tranquebarica and Megalopta genalis and the wasp Apoica pallens include large ocelli and compound eyes with wide rhabdoms and large facet lenses. Interestingly, we did not find any such adaptations in hornet eyes or ocelli. On the contrary, their eyes are even less sensitive than those of the obligately diurnal common wasps. We therefore conclude that hornets, like several facultatively nocturnal bee species such as Apis mellifera adansonii, A. dorsata and X. tenuiscapa, are capable of seeing in dim light simply due to their large body and thus eye size. We propose that neural pooling strategies and behavioural adaptations precede anatomical adaptations in the eyes and ocelli when insects with apposition compound eyes turn to dim-light activity.
[Regarding the Manuscript D "Dell'occhio" of Leonardo da Vinci].
Heitz, Robert F
2009-01-01
Leonardo da Vinci's Manuscript D consists of five double-page sheets, which, folded in two, comprise ten folios. This document, in the old Tuscan dialect and mirror writing, reveals the ideas of Leonardo on the anatomy of the eye in relation to the formation of images and visual perception. Leonardo explains in particular the behavior of the rays in the eye in terms of refraction and reflection, and is very mechanistic in his conception of the eye and of the visual process. The most significant innovations found in these folios are the concept of the eye as a camera obscura and the intersection of light rays in the interior of the eye. His texts nevertheless show hesitation, doubts and a troubled confusion, reflecting the ideas and uncertainties of his era. He did not share his results in his lifetime, despite both printing and etching being readily available to him.
Estimating the gaze of a virtuality human.
Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob
2013-04-01
The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.
Design of an ultra-thin near-eye display with geometrical waveguide and freeform optics
NASA Astrophysics Data System (ADS)
Tsai, Meng-Che; Lee, Tsung-Xian
2017-02-01
Due to worldwide trends in portable devices and illumination technology, research interest in laser diode (LD) applications has boomed in recent years. One popular and promising LD application is the near-eye display used in VR/AR. An ideal near-eye display needs to provide high-resolution, wide-FOV imagery with compact magnifying optics, and long battery life for prolonged use. However, previous designs have not reached the high light utilization efficiency in the illumination and imaging optics that should be raised as far as possible to increase wear comfort. To meet these needs, a waveguide illumination system for a near-eye display is presented in this paper. We focus on a high-efficiency RGB LD light engine that reduces power consumption and increases the flexibility of the mechanical design by using freeform TIR reflectors instead of beam splitters. With these structures, the total system efficiency of the near-eye display is successfully increased, and the improved efficiency and fabrication tolerance of the design are shown in this paper.
NASA Astrophysics Data System (ADS)
Dullo, Bililign T.; Graham, Alister W.
2013-05-01
We have used the full radial extent of images from the Hubble Space Telescope's Advanced Camera for Surveys and Wide Field Planetary Camera 2 to extract surface brightness profiles from a sample of six, local lenticular galaxy candidates. We have modeled these profiles using a core-Sérsic bulge plus an exponential disk model. Our fast rotating lenticular disk galaxies with bulge magnitudes M_V ≲ -21.30 mag have central stellar deficits, suggesting that these bulges may have formed from "dry" merger events involving supermassive black holes (BHs) while their surrounding disk was subsequently built up, perhaps via cold gas accretion scenarios. The central stellar mass deficits M_def are roughly 0.5-2 M_BH (BH masses), rather than ~10-20 M_BH as claimed from some past studies, which is in accord with core-Sérsic model mass deficit measurements in elliptical galaxies. Furthermore, these bulges have Sérsic indices n ~ 3, half-light radii R_e < 2 kpc and masses >10^11 M_⊙, and therefore appear to be descendants of the compact galaxies reported at z ~ 1.5-2. Past studies which have searched for these local counterparts by using single-component galaxy models to provide the z ~ 0 size comparisons have overlooked these dense, compact, and massive bulges in today's early-type disk galaxies. This evolutionary scenario not only accounts for what are today generally old bulges—which must be present in z ~ 1.5 images—residing in what are generally young disks, but it eliminates the uncomfortable suggestion of a factor of three to five growth in size for the compact, z ~ 1.5 galaxies that are known to possess infant disks.
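For orientation, the fitted components named above have standard analytic forms. The expressions below use the conventional core-Sérsic and exponential-disk notation and are supplied for reference; they are not quoted from the paper.

```latex
% Core-Sersic bulge: break radius R_b, inner slope gamma, transition
% sharpness alpha, Sersic index n, half-light radius R_e
I_{\mathrm{bulge}}(R) = I'\left[1+\left(\frac{R_b}{R}\right)^{\alpha}\right]^{\gamma/\alpha}
\exp\!\left[-b_n\left(\frac{R^{\alpha}+R_b^{\alpha}}{R_e^{\alpha}}\right)^{1/(\alpha n)}\right]
% Exponential disk: central intensity I_0, scale length h
I_{\mathrm{disk}}(R) = I_0\,e^{-R/h}
```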
Technical and instrumental prerequisites for single-port laparoscopic solo surgery: state of the art.
Kim, Say-June; Lee, Sang Chul
2015-04-21
With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of a human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides fixed and stable operative images that are under the control of the operator. Therefore, SPSS primarily benefits from the operator's own eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS requires more actions than surgery with a human assistant, these difficulties seem to be easily overcome by the more stable operative images and the reduced need for lens cleaning and camera repositioning. When the operation is expected to be difficult and demanding, the SPSS process could be assisted by the addition of another instrument holder besides the camera holder.
Continuous monitoring of Hawaiian volcanoes with thermal cameras
Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.
2014-01-01
Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
Development of the compact infrared camera (CIRC) for Earth observation
NASA Astrophysics Data System (ADS)
Naitoh, Masataka; Katayama, Haruyoshi; Harada, Masatomo; Nakamura, Ryoko; Kato, Eri; Tange, Yoshio; Sato, Ryota; Nakau, Koji
2017-11-01
The Compact Infrared Camera (CIRC) is an instrument equipped with an uncooled infrared array detector (microbolometer). We adopted the microbolometer because it does not require a cooling system such as a mechanical cooler, together with athermal optics that do not require active thermal control. This reduces the size, cost, and electrical power consumption of the sensor. The main mission of the CIRC is to demonstrate technology for detecting wildfires, which are major and chronic disasters affecting many countries in the Asia-Pacific region. The observational frequency of wildfires can be increased if CIRCs are carried on various satellites, taking advantage of their small size and light weight. We have developed two CIRCs. The first will be launched in JFY 2013 onboard the Advanced Land Observing Satellite-2 (ALOS-2), and the second in JFY 2014 onboard the CALorimetric Electron Telescope (CALET) on the Japanese Experiment Module (JEM) of the International Space Station (ISS). We have finished the ground calibration of the first CIRC onboard ALOS-2. In this paper, we provide an overview of the CIRC and the results of its ground calibration.
ARNICA: the Arcetri Observatory NICMOS3 imaging camera
NASA Astrophysics Data System (ADS)
Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.
1993-10-01
ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1" per pixel, with sky coverage of more than 4' × 4' on the NICMOS 3 (256 × 256 pixels, 40 micrometers on a side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.
Low-cost laser speckle contrast imaging of blood flow using a webcam.
Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K
2013-01-01
Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
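The speckle-contrast computation at the heart of this technique is simple enough to sketch. The window size and the synthetic frame below are illustrative assumptions, not the authors' processing pipeline:

```python
# Minimal sketch: spatial laser speckle contrast K = std/mean over a small
# sliding window; regions of faster flow blur the speckle and lower K.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Return the per-window speckle contrast image K = sigma / mean."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, win)
    mean_sq = uniform_filter(raw ** 2, win)
    var = np.maximum(mean_sq - mean ** 2, 0.0)   # guard against round-off
    return np.sqrt(var) / np.maximum(mean, 1e-9)

# Synthetic gamma-distributed intensities stand in for a raw webcam frame:
frame = np.random.gamma(shape=4.0, scale=32.0, size=(480, 640))
K = speckle_contrast(frame)
print(K.mean())
```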
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With the increasing demand from users for extraction of remote sensing image information, it is urgent to significantly enhance the whole system's imaging quality and imaging ability through an integrated design that achieves a compact structure, light weight, and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight, and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. In accordance with the technical requirements of a high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit, drawing on several technologies: high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on high-mobility remote sensing cameras.
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by a factor of 2.41. We built a prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.
Optical performance analysis of plenoptic camera systems
NASA Astrophysics Data System (ADS)
Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
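As a rough illustration of that spatial/angular trade-off, consider the sketch below; the sensor resolution and microlens pitch are invented for the example, not taken from the paper:

```python
# Illustrative numbers only: a plenoptic camera trades sensor pixels for
# angular samples, so effective spatial resolution drops by the microlens pitch.
sensor_px = (4000, 3000)          # assumed sensor resolution (pixels)
pitch_px = 10                     # assumed microlens pitch in pixels
angular_samples = pitch_px ** 2   # views captured behind each microlens
spatial_res = (sensor_px[0] // pitch_px, sensor_px[1] // pitch_px)
print(f"{angular_samples} angular samples per lenslet, "
      f"effective spatial resolution {spatial_res[0]}x{spatial_res[1]}")
```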
3D kinematic measurement of human movement using low cost fish-eye cameras
NASA Astrophysics Data System (ADS)
Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.
2017-02-01
3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach of using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they are not well approximated by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparing with results from a commercially available Vicon motion capture system.
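To make the final reconstruction step concrete, here is a minimal sketch with OpenCV. The intrinsics, relative pose, and marker position are fabricated for illustration, and the fisheye-specific calibration is assumed to have already produced undistorted pixel coordinates:

```python
# Sketch (not the paper's code): triangulate one marker seen by two calibrated
# cameras, given projection matrices P = K [R | t].
import numpy as np
import cv2

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])      # left camera at origin
R = cv2.Rodrigues(np.array([0., 0.2, 0.]))[0]          # small yaw to the right
t = np.array([[-0.3], [0.], [0.]])                     # 30 cm baseline (assumed)
P2 = K @ np.hstack([R, t])                             # right camera

X = np.array([[0.1], [0.05], [2.0], [1.0]])            # marker 2 m away
x1 = (P1 @ X)[:2] / (P1 @ X)[2]                        # its pixel in camera 1
x2 = (P2 @ X)[:2] / (P2 @ X)[2]                        # its pixel in camera 2

Xh = cv2.triangulatePoints(P1, P2, x1, x2)             # homogeneous 4x1 result
print((Xh[:3] / Xh[3]).ravel())                        # ~ [0.1, 0.05, 2.0]
```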
Lee, Jong-Hyuck; Kim, Jae Hyuck; Kim, Sun Woong
2017-02-27
To compare the repeatability of central corneal thickness (CCT) measurement using the Pentacam between dry eyes and healthy eyes, as well as to investigate the effect of artificial tears on CCT measurement. The corneal thicknesses of 34 patients with dry eye and 28 healthy subjects were measured using the Pentacam. One eye from each subject was assigned randomly to a repeatability test, wherein a single operator performed three successive CCT measurements at two time points: before and 5 min after instillation of one artificial teardrop. The repeatability of measurements was assessed using the coefficient of repeatability and the intraclass correlation coefficient. The coefficient of repeatability values of the CCT measurements in dry and healthy eyes were 24.36 and 10.69 μm before instillation, and 16.85 and 9.72 μm after instillation, respectively. The intraclass correlation coefficient was higher in healthy eyes than in dry eyes (0.987 vs. 0.891), and it improved significantly in dry eyes (0.948) after instillation of one artificial teardrop. The CCT measurement fluctuated in dry eyes (repeated-measures analysis of variance, P<0.001), whereas no significant changes were detected in healthy eyes, either before or after artificial tear instillation. Central corneal thickness measurement is less repeatable in dry eyes than in healthy eyes. Artificial tears improve the repeatability of CCT measurements obtained using the Pentacam in dry eyes.
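For readers unfamiliar with the two statistics, this is how they are conventionally computed from repeated measurements; the formulas are standard and the data synthetic, so the paper's exact computation may differ:

```python
# Sketch: coefficient of repeatability CR = 2.77 * within-subject SD, and a
# one-way random-effects intraclass correlation coefficient, ICC(1,1).
import numpy as np

def repeatability(meas):
    """meas: (subjects x repeats) array, e.g. CCT readings in micrometers."""
    n, k = meas.shape
    subj_means = meas.mean(axis=1, keepdims=True)
    msw = ((meas - subj_means) ** 2).sum() / (n * (k - 1))       # within MS
    msb = k * ((subj_means - meas.mean()) ** 2).sum() / (n - 1)  # between MS
    cr = 2.77 * np.sqrt(msw)                      # 1.96 * sqrt(2) * s_within
    icc = (msb - msw) / (msb + (k - 1) * msw)
    return cr, icc

rng = np.random.default_rng(0)   # 30 synthetic eyes, 3 repeats each
eyes = 540 + 30 * rng.standard_normal((30, 1)) + 8 * rng.standard_normal((30, 3))
print(repeatability(eyes))
```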
Thermal fluctuation based study of aqueous deficient dry eyes by non-invasive thermal imaging.
Azharuddin, Mohammad; Bera, Sumanta Kr; Datta, Himadri; Dasgupta, Anjan Kr
2014-03-01
In this paper we have studied the thermal fluctuation patterns occurring at the ocular surface of the left and right eyes for aqueous deficient dry eye (ADDE) patients and control subjects by thermal imaging. We conducted our experiment on 42 patients (84 eyes) with aqueous deficient dry eyes and compared them with 36 healthy volunteers (72 eyes) without any history of ocular surface disorder. Schirmer's test, tear break-up time, tear meniscus height and fluorescein staining tests were conducted. Ocular surface temperature was measured using a FLIR thermal camera, and the thermal fluctuation in the left and right eyes was calculated and analyzed using MATLAB. The time series containing the sum of squares of the temperature fluctuation on the ocular surface were compared for aqueous deficient dry eye and control subjects. A significant statistical difference between the fluctuation patterns for control and ADDE subjects was observed (p < 0.001 at 95% confidence interval). Thermal fluctuations in the left and right eyes are significantly correlated in controls but not in ADDE subjects. The possible origin of such correlation in controls, and its absence in ADDE subjects, is discussed in the text.
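A minimal sketch of the kind of analysis described, assuming per-frame mean temperatures for each eye region; the window length and stand-in data are our own, not the study's MATLAB code:

```python
# Windowed sum-of-squares fluctuation for each eye, then the left/right
# correlation that distinguished controls from ADDE subjects.
import numpy as np

def fluctuation_series(temps, win=10):
    """temps: 1-D per-frame mean ocular surface temperature (deg C).
    Returns the sum of squared deviations from the mean in each window."""
    out = []
    for i in range(0, len(temps) - win, win):
        seg = temps[i:i + win]
        out.append(((seg - seg.mean()) ** 2).sum())
    return np.array(out)

rng = np.random.default_rng(1)
left = 34.5 + 0.05 * rng.standard_normal(600)    # stand-ins for camera data
right = 34.5 + 0.05 * rng.standard_normal(600)
fl, fr = fluctuation_series(left), fluctuation_series(right)
print(np.corrcoef(fl, fr)[0, 1])  # near 0 for independent noise; high in
                                  # real control eyes per the abstract
```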
The Optical Green Valley Versus Mid-infrared Canyon in Compact Groups
NASA Technical Reports Server (NTRS)
Walker, Lisa May; Butterfield, Natalie; Johnson, Kelsey; Zucker, Catherine; Gallagher, Sarah; Konstantopoulos, Iraklis; Zabludoff, Ann; Hornschemeier, Ann E.; Tzanavaris, Panayiotis; Charlton, Jane C.
2013-01-01
Compact groups of galaxies provide conditions similar to those experienced by galaxies in the earlier universe. Recent work on compact groups has led to the discovery of a dearth of mid-infrared transition galaxies (MIRTGs) in Infrared Array Camera (3.6-8.0 micrometers) color space as well as at intermediate specific star formation rates. However, we find that in compact groups these MIRTGs have already transitioned to the optical ([g-r]) red sequence. We investigate the optical color-magnitude diagram (CMD) of 99 compact groups containing 348 galaxies and compare the optical CMD with mid-infrared (mid-IR) color space for compact group galaxies. Utilizing redshifts available from the Sloan Digital Sky Survey, we identified new galaxy members for four groups. By combining optical and mid-IR data, we obtain information on both the dust and the stellar populations in compact group galaxies. We also compare with more isolated galaxies and galaxies in the Coma Cluster, which reveals that, similar to clusters, compact groups are dominated by optically red galaxies. While we find that compact group transition galaxies lie on the optical red sequence, LVL (Local Volume Legacy) plus SINGS (Spitzer Infrared Nearby Galaxies Survey) mid-IR transition galaxies span the range of optical colors. The dearth of mid-IR transition galaxies in compact groups may be due to a lack of moderately star-forming low mass galaxies; the relative lack of these galaxies could be due to their relatively small gravitational potential wells, which makes them more susceptible to this dynamic environment and thus causes them to more easily lose gas or be accreted by larger members.
Sensor fusion in identified visual interneurons.
Parsons, Matthew M; Krapp, Holger G; Laughlin, Simon B
2010-04-13
Animal locomotion often depends upon stabilization reflexes that use sensory feedback to maintain trajectories and orientation. Such stabilizing reflexes are critically important for the blowfly, whose aerodynamic instability permits outstanding maneuverability but increases the demands placed on flight control. Flies use several sensory systems to drive reflex responses, and recent studies have provided access to the circuitry responsible for combining and employing these sensory inputs. We report that lobula plate VS neurons combine inputs from two optical sensors, the ocelli and the compound eyes. Both systems deliver essential information on in-flight rotations, but our neuronal recordings reveal that the ocelli encode this information in three axes, whereas the compound eyes encode in nine. The difference in dimensionality is reconciled by tuning each VS neuron to the ocellar axis closest to its compound eye axis. We suggest that this simple projection combines the speed of the ocelli with the accuracy of the compound eyes without compromising either. Our findings also support the suggestion that the coordinates of sensory information processing are aligned with axes controlling the natural modes of the fly's flight to improve the efficiency with which sensory signals are transformed into appropriate motor commands.
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; Tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument comprises a 405nm laser and a white light LED source for excitation and illumination. A single 90 gram color CCD camera is coupled to a 10mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows for simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
Eye size and behaviour of day- and night-flying leafcutting ant alates
John C. Moser; John D. Reeve; José Maurício S. Bento; Terezinha M.C. Della Lucia; R. Scott Cameron; Natalie M. Heck
2004-01-01
The morphology of insect eyes often seems to be shaped by evolution to match their behaviour and lifestyle. Here the relationship between the nuptial flight behaviour of 10 Atta species (Hymenoptera: Formicidae) and the eye size of male and female alates, including the compound eyes, ommatidial facets, and ocelli, was examined. These species can be...
Using a trichromatic CCD camera for spectral skylight estimation.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L
2008-12-01
In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
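The linear-estimation idea behind such spectral recovery can be sketched as follows; the band count, spectra, and sensor sensitivities are synthetic stand-ins, not the trained data or the authors' estimator:

```python
# Hedged sketch: learn a linear map from RGB responses to spectra on training
# pairs, then recover a spectrum for a new pixel from its RGB response alone.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_bands = 200, 61                 # e.g. 400-700 nm in 5 nm steps
S_train = np.abs(rng.standard_normal((n_train, n_bands)))   # training spectra
C = np.abs(rng.standard_normal((n_bands, 3)))               # RGB sensitivities
R_train = S_train @ C                                       # camera responses

# Least-squares estimator W (3 x n_bands): spectrum ~ rgb @ W
W, *_ = np.linalg.lstsq(R_train, S_train, rcond=None)
rgb_new = S_train[0] @ C                   # response of a "new" skylight pixel
spectrum_est = rgb_new @ W                 # rough reconstruction; real systems
print(np.corrcoef(spectrum_est, S_train[0])[0, 1])  # train on skylight priors
```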
The effects of shock wave compaction on the transition temperatures of A15 structure superconductors
NASA Technical Reports Server (NTRS)
Otto, G. H.
1974-01-01
Several superconductors with the A15 structure exhibit a positive pressure coefficient, indicating that their transition temperatures increase with applied pressure. Powders of the composition Nb3Al, Nb3Ge, Nb3(Al0.75Ge0.25), and V3Si were compacted by explosive shock waves. The superconducting properties of these materials were measured before and after compaction and it was found that regardless of the sign of the pressure coefficient, the transition temperature is always lowered. The decrease in transition temperature is associated with a decrease in the particle diameter. The shock wave passage through a 3Nb:1Ge powder mixture leads to the formation of at least one compound (probably Nb5Ge3). However, the formation of the A15 compound Nb3Ge is not observed. Elemental niobium powder can be compacted by converging shock waves close to the expected value of the bulk density. Under special circumstances a partial remelting in the center of the sample is observed.
USDA-ARS?s Scientific Manuscript database
Unmanned aerial vehicles (UAVs) have tremendous potential as tools for evaluation of research field plots. Standard cameras mounted to UAVs can document plant growth throughout the season and provide a permanent record of field performance. They can also be used to identify regions of the field with...
Empty STS-114 orbiter Discovery Payload bay
2005-07-29
ISS011-E-11340 (29 July 2005) --- A "fish-eye" lens on a digital still camera was used to record this image of the Space Shuttle Discovery from the International Space Station, to which it is docked for several days of joint activities.
Fixed-focus camera objective for small remote sensing satellites
NASA Astrophysics Data System (ADS)
Topaz, Jeremy M.; Braun, Ofer; Freiman, Dov
1993-09-01
An athermalized objective has been designed for a compact, lightweight push-broom camera which is under development at El-Op Ltd. for use in small remote-sensing satellites. The high performance objective has a fixed focus setting, but maintains focus passively over the full range of temperatures encountered in small satellites. The lens is an F/5.0, 320 mm focal length Tessar type, operating over the range 0.5-0.9 micrometers. It has a 16° field of view and accommodates various state-of-the-art silicon detector arrays. The design and performance of the objective is described in this paper.
GPS free navigation inspired by insects through monocular camera and inertial sensors
NASA Astrophysics Data System (ADS)
Liu, Yi; Liu, J. G.; Cao, H.; Huang, Y.
2015-12-01
Navigation without GPS or other knowledge of the environment has been studied for many decades. Advances in technology have made sensors compact and subtle enough to be easily integrated into micro and hand-held devices. Recently, researchers found that bees and fruit flies navigate effectively and efficiently using optical flow information, processed with only their miniature brains. We present a navigation system inspired by the study of insects, using a calibrated monocular camera and inertial sensors. The system utilizes SLAM theory and can work in many GPS-denied environments. Simulation and experimental results are presented for validation and quantification.
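A minimal sketch of the optical-flow cue such a system relies on, using OpenCV's dense Farneback estimator on a synthetic frame pair (the pattern and shift are invented; the paper's pipeline and parameters are not specified here):

```python
# Dense optical flow between consecutive monocular frames; the mean horizontal
# flow approximates the known 3-pixel shift applied to the test pattern.
import numpy as np
import cv2

yy, xx = np.mgrid[0:120, 0:160]
prev = (127 + 60 * np.sin(xx / 6.0) + 60 * np.cos(yy / 6.0)).astype(np.uint8)
curr = np.roll(prev, 3, axis=1)            # camera "moved": pattern shifts 3 px

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
print(flow[..., 0].mean())                 # should be roughly 3
```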
Kosheleva, N V; Saburina, I N; Zurina, I M; Gorkun, A A; Borzenok, S A; Nikishin, D A; Kolokoltsova, T D; Ustinova, E E; Repin, V S
2016-01-01
It is known that stem and progenitor cells open new possibilities for restoring injured eye tissues. The limbal zone of the eye, formed mainly by derivatives of the neural crest, is the main source of stem cells for regeneration. The current study concerns the development of an innovative technology for obtaining 3D spheroids from L-MMSC. It was shown that under 3D conditions L-MMSC, through compaction and mesenchymal-epithelial transition, self-organize into cellular reparative modules. The formed L-MMSC spheroids retain and support an undifferentiated population of stem and progenitor limbal cells, as supported by expression of the pluripotency markers Oct4, Sox2, and Nanog. The extracellular matrix synthesized by cells in the spheroids allows the L-MMSC to retain the functional potential involved in regeneration of both the anterior and, probably, the posterior eye segment.
Optimized keratometry and total corneal astigmatism for toric intraocular lens calculation.
Savini, Giacomo; Næser, Kristian; Schiano-Lomoriello, Domenico; Ducoli, Pietro
2017-09-01
To compare keratometric astigmatism (KA) and different modalities of measuring total corneal astigmatism (TCA) for toric intraocular lens (IOL) calculation and optimize corneal measurements to eliminate the residual refractive astigmatism. G.B. Bietti Foundation IRCCS, Rome, Italy. Prospective case series. Patients who had a toric IOL were enrolled. Preoperatively, a Scheimpflug camera (Pentacam HR) was used to measure TCA through ray tracing. Different combinations of measurements at a 3.0 mm diameter, centered on the pupil or the corneal vertex and performed along a ring or within it, were compared. Keratometric astigmatism was measured using the same Scheimpflug camera and a corneal topographer (Keratron). Astigmatism was analyzed with Næser's polar value method. The optimized preoperative corneal astigmatism was back-calculated from the postoperative refractive astigmatism. The study comprised 62 patients (64 eyes). With both devices, KA produced an overcorrection of with-the-rule (WTR) astigmatism by 0.6 diopter (D) and an undercorrection of against-the-rule (ATR) astigmatism by 0.3 D. The lowest meridional error in refractive astigmatism was achieved by the TCA pupil/zone measurement in WTR eyes (0.27 D overcorrection) and the TCA apex/zone measurement in ATR eyes (0.07 D undercorrection). In the whole sample, no measurement allowed more than 43.75% of eyes to yield an absolute error in astigmatism magnitude lower than 0.5 D. Optimized astigmatism values increased the percentage of eyes with this error up to 57.81%, with no difference compared with the Barrett calculator and the Abulafia-Koch calculator. Compared with KA, TCA improved calculations for toric IOLs; however, optimization of corneal astigmatism measurements led to more accurate results.
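Polar value analysis rests on a double-angle decomposition of cylinder power. The sketch below shows that standard decomposition with invented numbers; it illustrates the general technique rather than Næser's full formulary or the paper's dataset:

```python
# A cylinder of magnitude M at axis alpha becomes two orthogonal components,
# so astigmatisms can be added, subtracted, and averaged like vectors.
import numpy as np

def to_components(M, axis_deg):
    a = np.radians(axis_deg)
    return M * np.cos(2 * a), M * np.sin(2 * a)   # 0/90 and 45/135 components

def from_components(c0, c45):
    M = np.hypot(c0, c45)
    axis = np.degrees(np.arctan2(c45, c0)) / 2 % 180
    return M, axis

# Residual astigmatism = measured corneal minus IOL correction (illustrative):
c_meas = to_components(1.50, 90)   # 1.50 D with-the-rule
c_iol = to_components(1.20, 88)    # toric IOL correction at the cornea
residual = from_components(c_meas[0] - c_iol[0], c_meas[1] - c_iol[1])
print(residual)                    # (magnitude in D, axis in degrees)
```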
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video not only its color is given, but also, for example, its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high quality images.
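At the heart of any such parallactic transformation is the textbook pinhole relation between depth and screen disparity. The sketch below uses our own symbols and viewing-geometry numbers, not the paper's framework:

```python
# Screen disparity for a point at depth Z, relative to a zero-disparity plane:
# d = B * f * (1/Z_plane - 1/Z); sign convention: negative = nearer than screen.
import numpy as np

def screen_disparity(Z, B=0.065, f=0.6, Z_plane=2.5):
    """Z: scene depth (m); B: interocular distance (m); f: viewing-geometry
    scale factor; Z_plane: depth rendered at zero disparity (assumed values)."""
    return B * f * (1.0 / Z_plane - 1.0 / np.asarray(Z, dtype=float))

for Z in (1.0, 2.5, 10.0):
    print(Z, screen_disparity(Z))   # zero at the display plane, as expected
```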
2015-05-08
NASA's Curiosity Mars rover recorded this view of the sun setting at the close of the mission's 956th Martian day, or sol (April 15, 2015), from the rover's location in Gale Crater. This was the first sunset observed in color by Curiosity. The image comes from the left-eye camera of the rover's Mast Camera (Mastcam). The color has been calibrated and white-balanced to remove camera artifacts. Mastcam sees color very similarly to what human eyes see, although it is actually a little less sensitive to blue than people are. Dust in the Martian atmosphere has fine particles that permit blue light to penetrate the atmosphere more efficiently than longer-wavelength colors. That causes the blue colors in the mixed light coming from the sun to stay closer to the sun's part of the sky, compared to the wider scattering of yellow and red colors. The effect is most pronounced near sunset, when light from the sun passes through a longer path in the atmosphere than it does at mid-day. Malin Space Science Systems, San Diego, built and operates the rover's Mastcam. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Science Laboratory Project for NASA's Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19400
Whose point-of-view is it anyway?
NASA Astrophysics Data System (ADS)
Garvey, Gregory P.
2011-03-01
Shared virtual worlds such as Second Life privilege a single point-of-view, namely that of the user. When logged into Second Life a user sees the virtual world from a default viewpoint, which is from slightly above and behind the user's avatar (the user's alter ego 'in-world'). This point-of-view is as if the user were viewing his or her avatar using a camera floating a few feet behind it. In fact it is possible to set the view as if you were seeing the world through the eyes of your avatar, or you can even move the camera completely independently of your avatar. A change in point-of-view means more than just a different camera position. The practice of using multiple avatars requires a transformation of identity and personality. When a user 'enacts' the identity of a particular avatar, their 'real' personality is masked by the assumed personality. The technology of virtual worlds permits both a change of point-of-view and also facilitates a change in identity. Does this cause any psychological distress? Or is the ability to be someone else and see a world (a game, a virtual world) through a different set of eyes somehow liberating and even beneficial?
Schmidt, Jürgen; Laarousi, Rihab; Stolzmann, Wolfgang; Karrer-Gauß, Katja
2018-06-01
In this article, we examine the performance of different eye blink detection algorithms under various constraints. The goal of the present study was to evaluate the performance of an electrooculogram- and camera-based blink detection process in both manually and conditionally automated driving phases. A further comparison between alert and drowsy drivers was performed in order to evaluate the impact of drowsiness on the performance of blink detection algorithms in both driving modes. Data snippets from 14 monotonous manually driven sessions (mean 2 h 46 min) and 16 monotonous conditionally automated driven sessions (mean 2 h 45 min) were used. In addition to comparing two data-sampling frequencies for the electrooculogram measures (50 vs. 25 Hz) and four different signal-processing algorithms for the camera videos, we compared the blink detection performance of 24 reference groups. The analysis of the videos was based on very detailed definitions of eyelid closure events. The correct detection rates for the alert and manual driving phases (maximum 94%) decreased significantly in the drowsy (minus 2% or more) and conditionally automated (minus 9% or more) phases. Blinking behavior is therefore significantly impacted by drowsiness as well as by automated driving, resulting in less accurate blink detection.
Retinal and optical adaptations for nocturnal vision in the halictid bee Megalopta genalis.
Greiner, Birgit; Ribi, Willi A; Warrant, Eric J
2004-06-01
The apposition compound eye of a nocturnal bee, the halictid Megalopta genalis, is described for the first time. Compared to the compound eye of the worker honeybee Apis mellifera and the diurnal halictid bee Lasioglossum leucozonium, the eye of M. genalis shows specific retinal and optical adaptations for vision in dim light. The major anatomical adaptations within the eye of the nocturnal bee are (1) nearly twofold larger ommatidial facets and (2) a 4-5 times wider rhabdom diameter than found in the diurnal bees studied. Optically, the apposition eye of M. genalis is 27 times more sensitive to light than the eyes of the diurnal bees. This increased optical sensitivity represents a clear optical adaptation to low light intensities. Although this unique nocturnal apposition eye has a greatly improved ability to catch light, a 27-fold increase in sensitivity alone cannot account for nocturnal vision at light intensities that are 8 log units dimmer than during daytime. New evidence suggests that additional neuronal spatial summation within the first optic ganglion, the lamina, is involved.
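The 27-fold sensitivity figure can be traced through the standard white-light sensitivity equation for an eye (Land's formulation, in conventional notation; quoted here for context rather than from the abstract), in which the facet diameter enters squared and the rhabdom acceptance enters through the ratio of rhabdom diameter to focal length:

```latex
S = \left(\frac{\pi}{4}\right)^{2} D^{2} \left(\frac{d}{f}\right)^{2}
    \frac{k\,l}{2.3 + k\,l}
```

Here D is the facet (lens) diameter, d the rhabdom diameter, f the focal length, k the photoreceptor absorption coefficient, and l the rhabdom length. The reported twofold facet gain and four- to fivefold rhabdom gain combine multiplicatively through the first two factors.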
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
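Tsai's method, the baseline mentioned above, is available in OpenCV as `cv2.calibrateHandEye`. The round-trip sketch below uses synthetic poses of our own invention (not the paper's invariant-point method or data) to show the calling convention:

```python
# Recover the camera-to-tracker ("hand-eye") transform X from paired
# gripper->base and target->camera poses that are consistent with a known X.
import numpy as np
import cv2

def rt(rvec, tvec):
    """Build a 4x4 homogeneous transform from a Rodrigues vector and translation."""
    T = np.eye(4)
    T[:3, :3] = cv2.Rodrigues(np.asarray(rvec, dtype=float))[0]
    T[:3, 3] = tvec
    return T

rng = np.random.default_rng(3)
X_true = rt([0.1, -0.2, 0.05], [0.02, 0.0, 0.10])   # gripper<-camera (unknown)
T_bt = rt([0.3, 0.1, -0.2], [0.5, 0.0, 0.3])        # base<-target (fixed)

Rg, tg, Rt, tt = [], [], [], []
for _ in range(10):                                  # ten tracked poses
    T_bg = rt(rng.uniform(-0.5, 0.5, 3),
              rng.uniform(-0.2, 0.2, 3) + np.array([0.0, 0.0, 0.5]))
    T_ct = np.linalg.inv(X_true) @ np.linalg.inv(T_bg) @ T_bt  # camera<-target
    Rg.append(T_bg[:3, :3]); tg.append(T_bg[:3, 3])
    Rt.append(T_ct[:3, :3]); tt.append(T_ct[:3, 3])

R, t = cv2.calibrateHandEye(Rg, tg, Rt, tt, method=cv2.CALIB_HAND_EYE_TSAI)
print(np.linalg.norm(R - X_true[:3, :3]),            # both errors should be ~0
      np.linalg.norm(t.ravel() - X_true[:3, 3]))
```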
Traumatic eye injuries as a result of blunt impact
NASA Astrophysics Data System (ADS)
Clemente, Chiara; Esposito, Luca; Bonora, Nicola; Limido, Jerome; Lacome, Jean-Luc; Rossi, Tommaso
2013-06-01
The detachment or tearing of the retina in the human eye as a result of a collision is a phenomenon that occurs very often. This research is aimed at identifying and understanding the actual dynamic physical mechanisms responsible for traumatic eye injuries accompanying blunt impact, with particular attention to the damage processes that take place at the retina. To this purpose, a numerical and experimental investigation of the dynamic response of the eye during an impact event was performed. Numerical simulation of both tests was performed with IMPETUS-FEA, a general non-linear finite element software which offers NURBS finite element technology for the simulation of large deformation and fracture in materials. Computational results were compared with the experimental results on fresh enucleated porcine eyes impacted with airsoft pellets. The eyes were placed in a container filled with 10 percent ballistic gelatin simulating the fatty tissue surrounding the eye. A miniature pressure transducer was inserted into the eye bulb through the optic nerve in order to measure the pressure of the eye during blunt-projectile impacts. Each test was recorded using a high speed video camera. The ocular injuries observed in the impacted eyes were assessed by an ophthalmologist in order to evaluate the correlation between the pressure measures and the risk of retinal damage.
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Jet Propulsion Laboratory collaborated with LC Technologies, Inc., to improve LCT's Eyegaze Communication System, an eye tracker that enables people with severe cerebral palsy, muscular dystrophy, multiple sclerosis, strokes, brain injuries, spinal cord injuries, and ALS (amyotrophic lateral sclerosis) to communicate and control their environment using their eye movements. To operate the system, the user sits in front of the computer monitor while the camera focuses on one eye. By looking at control keys on the monitor for a fraction of a second, the user can 'talk' with speech synthesis, type, operate a telephone, access the Internet and e-mail, and run computer software. Nothing is attached to the user's head or body, and the improved size and portability allow the system to be mounted on a wheelchair. LCT and JPL are working on several other areas of improvement that have commercial add-on potential.
Rabin, Yoed; Taylor, Michael J.; Feig, Justin S. G.; Baicu, Simona; Chen, Zhen
2013-01-01
The objective of the current study is to develop a new cryomacroscope prototype for the study of vitrification in large-size specimens. The unique contribution in the current study is in developing a cryomacroscope setup as an add-on device to a commercial controlled-rate cooler and in demonstration of physical events in cryoprotective cocktails containing synthetic ice modulators (SIM)—compounds which hinder ice crystal growth. Cryopreservation by vitrification is a highly complex application, where the likelihood of crystallization, fracture formation, degradation of the biomaterial quality, and other physical events are dependent not only upon the instantaneous cryogenic conditions, but more significantly upon the evolution of conditions along the cryogenic protocol. Nevertheless, cryopreservation success is most frequently assessed by evaluating the cryopreserved product at its end states—either at the cryogenic storage temperature or room temperature. The cryomacroscope is the only available device for visualization of large-size specimens along the thermal protocol, in an effort to correlate the quality of the cryopreserved product with physical events. Compared with earlier cryomacroscope prototypes, the new Cryomacroscope-III evaluated here benefits from a higher resolution color camera, improved illumination, digital recording capabilities, and high repeatability in tested thermal conditions via a commercial controlled-rate cooler. A specialized software package was developed in the current study, having two modes of operation: (a) experimentation mode to control the operation of the camera, record camera frames sequentially, log thermal data from sensors, and save case-specific information; and (b) post-processing mode to generate a compact file integrating images, elapsed time, and thermal data for each experiment. The benefits of the Cryomacroscope-III are demonstrated using various tested mixtures of SIMs with the cryoprotective cocktail DP6, which were found effective in preventing ice growth, even at significantly subcritical cooling rates with reference to the pure DP6.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
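For reference, the multi-scale Laplacian-of-Gaussian filtering that the paper maps onto the FPGA can be sketched in plain NumPy/SciPy; the test image, scales, and threshold below are our own illustrative choices:

```python
# Multi-scale LoG edge detection: the FPGA implements this convolution in
# hardware; here it is shown in software for clarity.
import numpy as np
from scipy.ndimage import gaussian_laplace

img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                         # synthetic test square
for sigma in (1.0, 2.0, 4.0):                   # coarse multi-scale pyramid
    log = gaussian_laplace(img, sigma=sigma)
    # Edge strength approximated by a |LoG| threshold; a full detector would
    # locate the zero crossings of the LoG response instead.
    edges = np.abs(log) > 0.1 * np.abs(log).max()
    print(sigma, int(edges.sum()))
```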
EYE MOVEMENT RECORDING AND NONLINEAR DYNAMICS ANALYSIS – THE CASE OF SACCADES
Aştefănoaei, Corina; Pretegiani, Elena; Optican, L.M.; Creangă, Dorina; Rufa, Alessandra
2015-01-01
Evidence of a chaotic behavioral trend in eye movement dynamics was examined in the case of a saccadic temporal series collected from a healthy human subject. Saccades are high-velocity eye movements of very short duration, and their recording is relatively accessible, so the resulting data series can be studied computationally to understand neural processing in a motor system. The aim of this study was to assess the degree of complexity in the eye movement dynamics. To do this we analyzed the saccadic temporal series recorded with an infrared camera eye tracker from a healthy human subject in a special experimental arrangement which provides continuous records of eye position, both saccades (eye shifting movements) and fixations (focusing over regions of interest, with rapid, small fluctuations). The semi-quantitative approach used in this paper for studying eye functioning from the viewpoint of non-linear dynamics was accomplished by computational tests (power spectrum, portrait in the state space and its fractal dimension, Hurst exponent and largest Lyapunov exponent) derived from chaos theory. A highly complex dynamical trend was found. The largest Lyapunov exponent test suggested bi-stability of the cellular membrane resting potential during the saccadic experiment.
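One of the listed tests, the Hurst exponent, is commonly estimated by rescaled-range (R/S) analysis. A minimal sketch on toy data follows; the window sizes and series are ours, not the study's recordings:

```python
# R/S estimation of the Hurst exponent: log(R/S) grows as H * log(n).
import numpy as np

def hurst_rs(x, windows=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    rs, ns = [], []
    for n in windows:
        vals = []
        for i in range(0, len(x) - n + 1, n):
            seg = x[i:i + n]
            z = np.cumsum(seg - seg.mean())   # cumulative deviation profile
            r = z.max() - z.min()             # range of the profile
            s = seg.std()                     # segment standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
        ns.append(n)
    return np.polyfit(np.log(ns), np.log(rs), 1)[0]   # slope ~ H

# White noise should give H near 0.5; persistent series give H > 0.5.
print(hurst_rs(np.random.default_rng(4).standard_normal(2048)))
```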
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
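The pseudo-stereo idea can be illustrated as follows; stereo block matching stands in here for the subset-based DIC correlation actually used, and the frame and disparity range are synthetic assumptions:

```python
# The four-mirror adapter places two views on one sensor, so each frame is
# split into halves and treated as a rectified stereo pair.
import numpy as np
import cv2

left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # virtual cam 1
right = np.roll(left, -8, axis=1)           # virtual cam 2: ~8 px disparity
frame = np.hstack([left, right])            # what the single sensor records

l, r = frame[:, :640], frame[:, 640:]       # split the sensor image back
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disp = sgbm.compute(l, r).astype(np.float32) / 16.0   # SGBM output is x16
valid = disp[:, 80:-80][disp[:, 80:-80] > 0]
print(valid.mean())                          # close to the 8 px shift
```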
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses at the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that panoramas reflect the objective luminance more faithfully. This compensates for the limitations of stitching that relies on smoothing alone to make images look realistic. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
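A sketch of the two standard photometric corrections involved; the response curve and vignetting model below are generic placeholders, not the paper's measured calibration data:

```python
# Invert each sensor's radiometric response and divide out vignetting so that
# overlapping cameras report consistent luminance before blending.
import numpy as np

def to_luminance(raw, response_lut, vignette_gain):
    """raw: 8-bit image; response_lut: 256-entry inverse response (assumed
    measured with an integrating sphere); vignette_gain: per-pixel gain map."""
    linear = response_lut[raw]        # undo the non-linear sensor response
    return linear / vignette_gain     # undo lens/angular falloff

lut = (np.arange(256) / 255.0) ** 2.2            # gamma-like inverse response
yy, xx = np.mgrid[-1:1:480j, -1:1:640j]
gain = np.clip(np.cos(np.hypot(xx, yy)) ** 4, 0.3, 1.0)  # cos^4 falloff model
img = np.random.randint(0, 256, (480, 640))
print(to_luminance(img, lut, gain).mean())
```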
ERIC Educational Resources Information Center
Lewicki, Martin; Hughes, Stephen
2012-01-01
This article describes a method for making a spectroscope from scrap materials, i.e., a fragment of a compact disc, a cardboard box, a tube, and a digital camera to record the spectrum. An image processing program such as ImageJ can be used to calculate the wavelength of emission and absorption lines from the digital photograph. Multiple images of a…
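For the wavelength-calculation step, a common approach is a linear pixel-to-wavelength calibration from two known reference lines; the mercury line wavelengths below are standard values, but the pixel positions are hypothetical:

```python
# Linear wavelength calibration of a spectroscope photograph: two known
# reference lines map pixel position to wavelength.
def calibrate(px1, wl1, px2, wl2):
    slope = (wl2 - wl1) / (px2 - px1)        # nm per pixel
    return lambda px: wl1 + slope * (px - px1)

# Hypothetical pixel positions of the 435.8 nm and 546.1 nm mercury lines
px_to_nm = calibrate(312, 435.8, 590, 546.1)
print(px_to_nm(470))                          # wavelength of an unknown line
```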
Hand-Held Ultrasonic Instrument for Reading Matrix Symbols
NASA Technical Reports Server (NTRS)
Schramm, Harry F.; Kula, John P.; Gurney, John W.; Lior, Ephraim D.
2008-01-01
A hand-held instrument that would include an ultrasonic camera has been proposed as an efficient means of reading matrix symbols. The proposed instrument could be operated without mechanical raster scanning. All electronic functions from excitation of ultrasonic pulses through final digital processing for decoding matrix symbols would be performed by dedicated circuitry within the single, compact instrument housing.
Bayer Filter Snapshot Hyperspectral Fundus Camera for Human Retinal Imaging
Liu, Wenzhong; Nesper, Peter; Park, Justin; Zhang, Hao F.; Fawzi, Amani A.
2016-01-01
Purpose To demonstrate the versatility and performance of a compact Bayer filter snapshot hyperspectral fundus camera for in vivo clinical applications, including retinal oximetry and macular pigment optical density measurements. Methods Twelve healthy volunteers were recruited under an Institutional Review Board (IRB)-approved protocol. Fundus images were taken with a custom hyperspectral camera with a spectral range of 460–630 nm. We determined retinal vascular oxygen saturation (sO2) for the healthy population from the captured spectra by least-squares curve fitting. Additionally, macular pigment optical density was localized and visualized using multispectral reflectometry at selected wavelengths. Results We successfully determined the mean sO2 of arteries and veins of each subject (ages 21–80) with excellent intrasubject repeatability (1.4% standard deviation). The mean arterial sO2 for all subjects was 90.9% ± 2.5%, whereas the mean venous sO2 for all subjects was 64.5% ± 3.5%. The mean artery–vein (A–V) difference in sO2 varied between 20.5% and 31.9%. In addition, we were able to reveal and quantify macular pigment optical density. Conclusions We demonstrated a single imaging tool capable of oxygen saturation and macular pigment density measurements in vivo. The unique combination of broad spectral range, high spectral–spatial resolution, rapid and robust imaging capability, and compact design makes this system a valuable tool for multifunction spectral imaging that can easily be performed in a clinical setting. PMID:27767345
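A minimal sketch of the least-squares fitting idea behind spectral oximetry: vessel optical density is modeled as a linear mix of oxy- and deoxyhemoglobin extinction plus an offset. The model form is simplified, the extinction coefficients must come from published tables, and this is not the authors' exact algorithm:

```python
import numpy as np

def fit_so2(optical_density, ext_hbo2, ext_hb):
    """Least-squares estimate of oxygen saturation from vessel spectra.

    optical_density : -log10(I_vessel / I_background) at each wavelength
    ext_hbo2/ext_hb : extinction coefficients of oxy-/deoxyhemoglobin at the
                      same wavelengths (take these from published tables)
    """
    # OD ~ L * (sO2 * eps_HbO2 + (1 - sO2) * eps_Hb) + scattering offset
    A = np.column_stack([ext_hbo2, ext_hb, np.ones_like(ext_hbo2)])
    c_hbo2, c_hb, _ = np.linalg.lstsq(A, optical_density, rcond=None)[0]
    return c_hbo2 / (c_hbo2 + c_hb)
```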
A new method to acquire 3-D images of a dental cast
NASA Astrophysics Data System (ADS)
Li, Zhongke; Yi, Yaxing; Zhu, Zhen; Li, Hua; Qin, Yongyuan
2006-01-01
This paper introduces our newly developed method for acquiring three-dimensional images of a dental cast. A rotatable table, a laser knife, a mirror, a CCD camera, and a personal computer make up the three-dimensional data acquisition system. A dental cast is placed on the table and the mirror is installed beside it; a linear laser is projected onto the dental cast; the CCD camera is mounted above the dental cast so that it can photograph both the cast and its reflection in the mirror. While the table rotates, the camera records the shape of the laser streak projected on the dental cast and transmits the data to the computer. After the table has rotated one full revolution, the computer processes the data and calculates the three-dimensional coordinates of the dental cast's surface. In the data-processing procedure, artificial neural networks are employed to calibrate the lens distortion and to map coordinates from the screen coordinate system to the world coordinate system. From the three-dimensional coordinates, the computer reconstructs a stereo image of the dental cast, which is essential for computer-aided diagnosis and treatment planning in orthodontics. In comparison with other systems in service, such as laser-beam three-dimensional scanning systems, this data acquisition system is characterized by: (a) speed, taking only 1 minute to scan a dental cast; (b) compactness, with simple and compact machinery; and (c) no blind zone, since a mirror is cleverly introduced to reduce the blind zone.
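The geometric core of laser-knife scanning is intersecting each camera ray through a bright streak pixel with the calibrated laser plane (shown here in place of the neural-network calibration the authors use); all coordinates below are hypothetical:

```python
import numpy as np

def triangulate(ray_dir, cam_origin, plane_point, plane_normal):
    """Intersect a camera ray with the calibrated laser light plane.

    Every bright streak pixel defines a ray from the camera; its
    intersection with the laser plane is a 3-D surface point.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    t = np.dot(plane_point - cam_origin, plane_normal) / np.dot(ray_dir, plane_normal)
    return cam_origin + t * ray_dir

# Hypothetical calibration: camera at the origin, laser plane at x = 50 mm
p = triangulate(np.array([0.5, 0.1, 1.0]), np.zeros(3),
                np.array([50.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(p)
```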
Compact opto-electronic engine for high-speed compressive sensing
NASA Astrophysics Data System (ADS)
Tidman, James; Weston, Tyler; Hewitt, Donna; Herman, Matthew A.; McMackin, Lenore
2013-09-01
The measurement efficiency of compressive sensing (CS) enables the computational reconstruction of images from far fewer measurements than the Nyquist-Shannon sampling theorem would usually deem necessary. Since the development of its theoretical principles about a decade ago, a vast literature has grown around CS mathematics and applications, ranging from quantum information to optical microscopy to seismic and hyper-spectral imaging. For shortwave infrared imaging, InView has developed cameras based on the CS single-pixel camera architecture. This architecture comprises an objective lens that images the scene onto a Texas Instruments DLP® Digital Micromirror Device (DMD), which, using its individually controllable mirrors, modulates the image with a selected basis set. The intensity of the modulated image is then recorded by a single detector. While the design of a CS camera is conceptually straightforward, its commercial implementation requires significant development effort in optics, electronics, hardware, and software, particularly if high efficiency and high-speed operation are required. In this paper, we describe the development of a high-speed CS engine as implemented in a lab-ready workstation. In this engine, configurable measurement patterns are loaded into the DMD at rates up to 31.5 kHz. The engine supports custom reconstruction algorithms that can be implemented quickly. Our work includes optical path design, field-programmable gate arrays for DMD pattern generation, and circuit boards for front-end data acquisition, ADC, and system control, all packaged in a compact workstation.
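A toy single-pixel-camera reconstruction, assuming binary DMD patterns and a signal sparse in the pixel basis (practical systems use a sparsifying transform such as wavelets); ISTA stands in for whatever reconstruction algorithm the engine actually loads:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative shrinkage-thresholding: recover a sparse image x from
    m << n single-pixel measurements y = A @ x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)       # gradient step on ||y - Ax||^2
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 256, 96                       # 256-pixel "image", 96 DMD patterns
A = rng.choice([0.0, 1.0], (m, n))   # binary micromirror patterns
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
y = A @ x_true                       # one photodetector reading per pattern
print(np.linalg.norm(ista(A, y) - x_true))
```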
3D tomographic imaging with the γ-eye planar scintigraphic gamma camera
NASA Astrophysics Data System (ADS)
Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.
2017-11-01
γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole-body, small-animal imaging. This investigation tests the viability of using γ-eye to collect tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), were compared. Reconstructions were performed using QSPECT's implementation of the OSEM algorithm and STIR's OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data were assessed in terms of spatial resolution, sensitivity to varying activity levels, and uniformity. The effects of varying the number of iterations, the voxel size (the 1.25 mm default reduced to 0.625 mm and 0.3125 mm), the point spread function correction, and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in sensitivity was seen throughout.
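For reference, the MLEM update that OSEM accelerates by splitting the data into ordered subsets; a dense-matrix sketch, not the QSPECT or STIR implementation:

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Maximum-likelihood EM reconstruction (OSEM with a single subset).

    A : system matrix mapping voxel activity to detector-bin counts
    y : measured projection counts
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward-project current estimate
        ratio = y / np.maximum(proj, 1e-12)  # compare with measured counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```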
Barnacle Bill in Super Resolution from Super Panorama
1998-07-03
"Barnacle Bill" is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. This view of Barnacle Bill was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of these rocks and what it might tell us about their mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The composites consist of 7 frames in the right eye and 8 frames in the left eye, taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01409
Matching between the light spots and lenslets of an artificial compound eye system
NASA Astrophysics Data System (ADS)
He, Jianzheng; Jian, Huijie; Zhu, Qitao; Ma, Mengchao; Wang, Keyi
2017-10-01
As the visual organ of many arthropods, the compound eye has attracted much attention for its wide field of view, multi-channel imaging ability, and high agility. Extending this concept, a new kind of artificial compound eye device has been developed. It has 141 lenslets sharing one image sensor, distributed evenly on a curved surface, which makes it difficult to determine which lenslet a given light spot belongs to during the calibration and positioning processes. Therefore, a matching algorithm is proposed based on the device structure and the principles of calibration and positioning. Region partition of the lenslet array is performed first: each lenslet and its adjacent lenslets are defined as a cluster of eyes and entered into an index table. In the calibration process, a polar coordinate system is established, and matching is accomplished by comparing the rotary table position in the polar coordinate system with the central light spot angle in the image. In the positioning process, the spot is first paired to the correct region according to the spot distribution, and the final result is determined by the dispersion of the distances from the target point to the incident rays during region traversal matching. Experimental results show that the presented algorithms provide a feasible and efficient way to match spots to lenslets and fully meet the needs of practical compound eye system applications.
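A hedged sketch of the dispersion test described for region traversal matching: back-project candidate rays, measure each ray's perpendicular distance to the target point, and keep the region with the least dispersion. The region and ray data structures are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def point_ray_distance(p, origin, direction):
    """Perpendicular distance from point p to the ray (origin, direction)."""
    d = direction / np.linalg.norm(direction)
    v = p - origin
    return np.linalg.norm(v - np.dot(v, d) * d)

def best_region(target, candidate_regions):
    """Pick the lenslet region whose back-projected incident rays pass
    closest to the target point with the least dispersion.
    Each region is a list of (origin, direction) ray pairs."""
    scores = []
    for rays in candidate_regions:
        dists = [point_ray_distance(target, o, d) for o, d in rays]
        scores.append(np.std(dists))   # dispersion of point-to-ray distances
    return int(np.argmin(scores))
```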
Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.
Torregrosa, A J; Maestre, H; Capmany, J
2015-11-15
We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (the eye-safe region), illuminated by an amplified spontaneous emission (ASE) fiber source, with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd³⁺:GdVO₄ laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of standard silicon focal-plane-array two-dimensional sensors, commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. The use of ASE illumination provides a noticeable increase in the field of view (FOV) that can be upconverted, compared with coherent laser illumination. The upconverted power allows us to capture real-time video with a standard non-intensified CCD camera.
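The 631 nm output follows directly from energy conservation in sum-frequency mixing, which a one-liner confirms:

```python
# Sum-frequency generation: 1/lambda_up = 1/lambda_signal + 1/lambda_pump
lam_signal, lam_pump = 1550.0, 1064.0            # nm, from the paper
lam_up = 1.0 / (1.0 / lam_signal + 1.0 / lam_pump)
print(f"{lam_up:.1f} nm")                        # ~630.9 nm, the reported red band
```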
Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus
2014-01-01
To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery, and perioperative complications were analyzed. Fifteen surgeons were interviewed postoperatively about their assessment of this new system using a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated that they would prefer a three-dimensional system to a conventional two-dimensional device and that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable, and well accepted in daily routine. Three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing, and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.
Observations of Leonids 2009 by the Tajikistan Fireball Network
NASA Technical Reports Server (NTRS)
Borovicka, J.; Borovicka, J.
2011-01-01
The fireball network in Tajikistan has operated since 2009. The five stations of the network, covering a territory of nearly eleven thousand square kilometers, are equipped with all-sky cameras with Zeiss Distagon fish-eye objectives and with Nikon digital SLR cameras with Nikkor fish-eye objectives. Observations of Leonid activity in 2009 were carried out during November 13-21. In this period, 16 Leonid fireballs were photographed. From the astrometric and photometric reductions, precise data including atmospheric trajectories, velocities, orbits, light curves, photometric masses, and densities were determined for 10 fireballs. The radiant positions during the maximum night suggest that the majority of the fireball activity was caused by the annual stream component, with only a minor contribution from the 1466 trail. According to the PE criterion, the majority of the Leonid fireballs belonged to the most fragile and weakest fireball group, IIIB. However, one detected Leonid belonged to fireball group I; this is the first detection of an anomalously strong Leonid individual.
NASA Astrophysics Data System (ADS)
Sorokoumov, P. S.; Khabibullin, T. R.; Tolstaya, A. M.
2017-01-01
Existing psychological theories associate the movements of the human eye with reactions to external change: what we see, hear, and feel. By analyzing gaze, we can compare the external human response (which shows the behavior of a person) with the natural reaction (what they actually feel). This article describes a complex for detecting visual activity and its application to evaluating the psycho-physiological state of a person. Glasses with a camera capture all movements of the human eye in real time. The data recorded by the camera are transmitted to a computer for processing by software developed by the authors. The result is given in an informative and understandable report, which can be used for further analysis. The complex shows high efficiency and stable operation and can be used both for pedagogical personnel recruitment and for testing students during the educational process.
NASA Astrophysics Data System (ADS)
Morison, Ian
2017-02-01
1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel uncooled InGaAs system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption, not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal plane arrays, and dense, ultra-small pixel pitch devices.
An evolution of technologies and applications of gamma imagers in the nuclear cycle industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, R. A.; Carrel, F.; Menaa, N.
The tracking of radiation contamination and distribution has become a high priority in the nuclear cycle industry in order to respect the ALARA principle, a main challenge during decontamination and dismantling activities. To support this need, AREVA/CANBERRA and CEA LIST have been actively carrying out research and development on a gamma-radiation imager. In this paper we present the new generation of gamma camera, called GAMPIX. This system is based on the Timepix chip, hybridized with a CdTe substrate. A coded mask can be used to increase the sensitivity of the camera. Moreover, owing to its USB connection to a standard computer, this gamma camera is immediately operational and user-friendly. The final system is a very compact gamma camera (global weight less than 1 kg without any shielding) which can be used as a hand-held device for radioprotection purposes. In this article, we present the main characteristics of this new generation of gamma camera and we report experimental results obtained during in situ measurements. Although we present preliminary results, the final product is in the industrialization phase to address various application specifications. (authors)
NASA Astrophysics Data System (ADS)
Deng, Shengfeng; Lyu, Jinke; Sun, Hongda; Cui, Xiaobin; Wang, Tun; Lu, Miao
2015-03-01
A chirped artificial compound eye on a curved surface was fabricated using an optical resin and then mounted on the end of an endoscopic imaging fiber bundle. The focal length of each lenslet on the curved surface was variable to realize a flat focal plane, which matched the planar end surface of the fiber bundle. The variation of the focal length was obtained by using a photoresist mold formed by dose-modulated laser lithography and subsequent thermal reflow. The imaging performance of the fiber bundle was characterized by coupling with a coaxial light microscope, and the result demonstrated a larger field of view and better imaging quality than that of an artificial compound eye with a uniform focal length. Accordingly, this technology has potential application in stereoscopic endoscopy.
MEMS compatible illumination and imaging micro-optical systems
NASA Astrophysics Data System (ADS)
Bräuer, A.; Dannberg, P.; Duparré, J.; Höfer, B.; Schreiber, P.; Scholles, M.
2007-01-01
The development of new MOEMS demands cooperation between researchers in micromechanics, optoelectronics, and microoptics at a very early stage. Additionally, microoptical technologies compatible with structured silicon have to be developed. The microoptical technologies used for two silicon-based microsystems are described in this paper. First, a very small scanning laser projector with a volume of less than 2 cm³, which operates with a directly modulated laser collimated by a microlens, is shown. The laser radiation illuminates a 2D MEMS scanning mirror. The optical design is optimized for high resolution (VGA). Thermomechanical stability is realized by design and by using a structured ceramic motherboard. Second, an ultrathin CMOS camera with an insect-inspired imaging system has been realized; it is the first experimental realization of an artificial compound eye. Micro-optical design principles and technology are used. The overall thickness of the imaging system is only 320 μm, the diagonal field of view is 21°, and the f-number is 2.6. The monolithic device consists of a UV-replicated microlens array upon a thin silica substrate with a pinhole array in a metal layer on the back side. The pitch of the pinholes differs from that of the lens array to provide an individual viewing angle for each channel. The imaging chip is directly glued to a CMOS sensor with an adapted pitch. The whole camera is less than 1 mm thick. New packaging methods for these systems are under development.
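The pitch offset between pinholes and microlenses sets each channel's viewing direction, roughly atan(i·Δp/t) for channel i at stack thickness t. A quick sketch using the paper's 320 μm thickness but hypothetical pitch values:

```python
import numpy as np

# Each channel i looks along atan(i * delta_pitch / thickness) because the
# pinhole pitch differs slightly from the microlens pitch.
t_um = 320.0                  # imaging-stack thickness, from the paper
p_lens, p_pin = 90.0, 88.5    # hypothetical lens / pinhole pitches in um
delta = p_lens - p_pin
for i in range(0, 24, 4):
    print(i, np.degrees(np.arctan(i * delta / t_um)))
```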
Development of a vision-based pH reading system
NASA Astrophysics Data System (ADS)
Hur, Min Goo; Kong, Young Bae; Lee, Eun Je; Park, Jeong Hoon; Yang, Seung Dae; Moon, Ha Jung; Lee, Dong Hoon
2015-10-01
pH paper is generally used for pH interpretation in the QC (quality control) process of radiopharmaceuticals. pH paper is easy to handle and useful for small samples such as radioisotopes and radioisotope (RI)-labeled compounds for positron emission tomography (PET). However, pH-paper-based detection methods may suffer errors due to limitations of eyesight and inaccurate readings. In this paper, we report a new device for pH reading and its related software. The proposed pH reading system is developed with a vision algorithm based on an RGB library. The system is divided into two parts. The first is the reading device, which consists of a light source, a CCD camera, and a data acquisition (DAQ) board. To improve the accuracy of the sensitivity, we utilize the three primary colors of an LED (light-emitting diode) in the reading device; using three colors is better than using a single white LED because the sensor response is wavelength-dependent. The second is a graphical user interface (GUI) program for the vision interface and report generation. The GUI program stores the color codes of the pH paper in a database; in reading mode, the CCD camera captures the pH paper and the software compares its color with the RGB database image. The software captures and reports information on the samples, such as pH results, captured images, and library images, and saves them as Excel files.
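A minimal sketch of the RGB-library matching step: classify a measured patch color as the nearest library entry in RGB space. The library values here are hypothetical; the actual system builds its database from captured pH-paper images:

```python
import numpy as np

def read_ph(sample_rgb, library):
    """Match a measured pH-paper colour to the closest library entry.

    library: dict mapping pH value -> (R, G, B) reference colour captured
             under the same LED illumination (hypothetical values below).
    """
    sample = np.asarray(sample_rgb, dtype=float)
    return min(library,
               key=lambda ph: np.linalg.norm(sample - np.asarray(library[ph], dtype=float)))

lib = {4.0: (230, 120, 60), 7.0: (150, 160, 70), 10.0: (60, 90, 150)}
print(read_ph((140, 150, 80), lib))   # -> 7.0
```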
Cai, Xue; Conley, Shannon M; Nash, Zack; Fliesler, Steven J; Cooper, Mark J; Naash, Muna I
2010-04-01
The purpose of the present study was to test the therapeutic efficiency and safety of compacted-DNA nanoparticle-mediated gene delivery into the subretinal space of a juvenile mouse model of retinitis pigmentosa. Nanoparticles containing the mouse opsin promoter and wild-type mouse Rds gene were injected subretinally into mice carrying a haploinsufficiency mutation in the retinal degeneration slow (rds+/-) gene at postnatal day (P)5 and 22. Control mice were either injected with saline, injected with uncompacted naked plasmid DNA carrying the Rds gene, or remained untreated. Rds mRNA levels peaked at postinjection day 2 to 7 (PI-2 to PI-7) for P5 injections, stabilized at levels 2-fold higher than in uninjected controls for both P5 and P22 injections, and remained elevated at the latest time point examined (PI-120). Rod function (measured by electroretinography) showed modest but statistically significant improvement compared with controls after both P5 and P22 injections. Cone function in nanoparticle-injected eyes reached wild-type levels for both ages of injections, indicating full prevention of cone degeneration. Ultrastructural examination at PI-120 revealed significant improvement in outer segment structures in P5 nanoparticle-injected eyes, while P22 injection had a modest structural improvement. There was no evidence of macrophage activation or induction of IL-6 or TNF-alpha mRNA in P5 or P22 nanoparticle-dosed eyes at either PI-2 or PI-30. Thus, compacted-DNA nanoparticles can efficiently and safely drive gene expression in both mitotic and postmitotic photoreceptors and retard degeneration in this model. These findings, using a clinically relevant treatment paradigm, illustrate the potential for application of nanoparticle-based gene replacement therapy for treatment of human retinal degenerations.-Cai, X., Conley, S. M., Nash, Z., Fliesler, S. J., Cooper, M. J., Naash, M. I. Gene delivery to mitotic and postmitotic photoreceptors via compacted DNA nanoparticles results in improved phenotype in a mouse model of retinitis pigmentosa.
NASA Technical Reports Server (NTRS)
2004-01-01
This is the left-eye version of the 3-D cylindrical-perspective mosaic showing the view south of the martian crater dubbed 'Bonneville.' The image was taken by the navigation camera on the Mars Exploration Rover Spirit. The rover will travel toward the Columbia Hills, seen here at the upper left. The rock dubbed 'Mazatzal' and the hole the rover drilled into it can be seen at the lower left. The rover's position is referred to as 'Site 22, Position 32.' This image was geometrically corrected to make the horizon appear flat.
NASA Astrophysics Data System (ADS)
Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.
2014-03-01
Glaucoma is the second leading cause of blindness in the world, and this number tends to increase as the life expectancy of the population rises. Glaucoma refers to eye conditions that damage the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In the majority of cases the damage to the optic nerve is irreversible and results from increased intraocular pressure. One of the main diagnostic challenges is detecting the disease early, because no symptoms are present in the initial stage; when detected, it is often already advanced. Currently the evaluation of the optic disc is made with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to acquire 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve; something traditional fundus cameras do not do. Dedicated hardware and software are developed for this ophthalmic instrument to permit quick capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and non-mydriatic modes.
Duangsang, Suampa; Tengtrisorn, Supaporn
2012-05-01
To determine the normal range of the Central Corneal Light Reflex Ratio (CCLRR) from photographs of young adults. A digital camera equipped with a telephoto lens, with a flash attachment placed directly above the lens, was used to obtain corneal light reflex photographs of 104 subjects, first with the subject fixating on the lens of the camera at a distance of 43 centimeters, and then while looking past the camera to a wall at a distance of 5.4 meters. Digital images were displayed using Adobe Photoshop at a magnification of 1200%. The CCLRR was the ratio of the sum of the distances between the inner margin of the cornea and the central corneal light reflex of each eye to the sum of the horizontal corneal diameters of both eyes. Measurements were made by three technicians on all subjects and repeated on a 16% (n=17) subsample. Mean ratios (standard deviation, SD) for near/distance measurements were 0.468 (0.012)/0.452 (0.019). The limits of the normal range, with 95% certainty, were 0.448 and 0.488 for near measurements and 0.419 and 0.484 for distance measurements. The lower and upper indeterminate zones were 0.440-0.447 and 0.489-0.497 for near measurements and 0.406-0.418 and 0.485-0.497 for distance measurements; more extreme values can be considered abnormal. The reproducibility and repeatability of the test were good. This method is easy to perform and has potential for use in strabismus screening by paramedical personnel.
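The reported limits lend themselves to a simple screening rule; a sketch that classifies a measured CCLRR using the normal and indeterminate ranges quoted above:

```python
def classify_cclrr(ratio, near=True):
    """Classify a central corneal light reflex ratio against the study's
    limits (near fixation: normal 0.448-0.488, indeterminate 0.440-0.447
    and 0.489-0.497; distance: normal 0.419-0.484, indeterminate
    0.406-0.418 and 0.485-0.497)."""
    lo, hi = (0.448, 0.488) if near else (0.419, 0.484)
    ind_lo, ind_hi = (0.440, 0.497) if near else (0.406, 0.497)
    if lo <= ratio <= hi:
        return "normal"
    if ind_lo <= ratio < lo or hi < ratio <= ind_hi:
        return "indeterminate"
    return "suspect abnormal"

print(classify_cclrr(0.468))   # the mean near-fixation value -> normal
```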
Protecting Emergency Responders: Lessons Learned from Terrorist Attacks
2002-03-05
Contact lenses tended to dry out when worn with respirators for long periods. Wet shoes and socks caused blisters. ... Trades panel members believed ... the confusion and compounds the safety and rescue responsibilities of firefighters and other responders who are in the command structure. In the ... enter eyes and to irritate them. Heavy labor in hot weather, which caused dehydration and dry eyes, apparently compounded this problem at the sites.
The Zwicky Transient Facility Camera
NASA Astrophysics Data System (ADS)
Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.
2016-08-01
The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.
Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera
NASA Technical Reports Server (NTRS)
Stanojev, B. J.; Houts, M.
2004-01-01
Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the nuclear-equivalent behavior of the system. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in real time (RT). Initial system checkout results are presented, along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.
Osanai-Futahashi, M; Tatematsu, K-i; Futahashi, R; Narukawa, J; Takasu, Y; Kayukawa, T; Shinoda, T; Ishige, T; Yajima, S; Tamura, T; Yamamoto, K; Sezutsu, H
2016-01-01
Ommochromes are major insect pigments involved in coloration of compound eyes, eggs, epidermis and wings. In the silkworm Bombyx mori, adult compound eyes and eggs contain a mixture of the ommochrome pigments such as ommin and xanthommatin. Here, we identified the gene involved in ommochrome biosynthesis by positional cloning of B. mori egg and eye color mutant pink-eyed white egg (pe). The recessive homozygote of pe has bright red eyes and white or pale pink eggs instead of a normal dark coloration due to the decrease of dark ommochrome pigments. By genetic linkage analysis, we narrowed down the pe-linked region to ~258 kb, containing 17 predicted genes. RNA sequencing analyses showed that the expression of one candidate gene, the ortholog of Drosophila haem peroxidase cardinal, coincided with egg pigmentation timing, similar to other ommochrome-related genes such as Bm-scarlet and Bm-re. In two pe strains, a common missense mutation was found within a conserved motif of B. mori cardinal homolog (Bm-cardinal). RNA interference-mediated knockdown and transcription activator-like effector nuclease (TALEN)-mediated knockout of the Bm-cardinal gene produced the same phenotype as pe in terms of egg, adult eye and larval epidermis coloration. A complementation test of the pe mutant with the TALEN-mediated Bm-cardinal-deficient strain showed that the mutant phenotype could not be rescued, indicating that Bm-cardinal is responsible for pe. Moreover, knockdown of the cardinal homolog in Tribolium castaneum also induced red compound eyes. Our results indicate that cardinal plays a major role in ommochrome synthesis of holometabolous insects. PMID:26328757
Chen, Tijun; Gao, Min; Tong, Yunqi
2018-01-01
To prepare a core-shell-structured Ti@compound particle (Ti@compoundp) reinforced Al matrix composite via powder thixoforming, the effects of alloying elements such as Si, Cu, Mg, and Zn on the reaction between Ti powders and Al melt, and on the microstructure of the resulting reinforcements, were investigated during heating of powder compacts at 993 K (720 °C). The state of the reinforcing particles in the corresponding semisolid compacts was also studied. Both thermodynamic analysis and experimental results indicate that Si participated in the reaction and promoted the formation of Al–Ti–Si ternary compounds, while Cu, Mg, and Zn did not take part in the reaction and facilitated the formation of the Al3Ti phase to different degrees. The first-formed Al–Ti–Si ternary compound was the τ1 phase, which gradually transformed into the (Al,Si)3Ti phase; the proportion and lifetime of the τ1 phase increased with the Si content. Mg had the largest effect on accelerating the reaction, Cu the least, and Si and Zn an equivalent intermediate effect. The thicker the reaction shell, the larger the stress generated in the shell, and thus the looser the shell microstructure. The stress generated in the (Al,Si)3Ti phase was larger than that in the τ1 phase but smaller than that in the Al3Ti phase, so the shells in the Al–Ti–Si system were more compact than those in the other systems, and Si was beneficial for obtaining thick and compact compound shells. Most of these results were consistent with those in the semisolid state, except for the product phase constituents in the Al–Ti–Mg system and the reaction rate in the Al–Ti–Zn system. More importantly, the desired core-shell-structured Ti@compoundp was achieved only in the semisolid Al–Ti–Si system. PMID:29342946
Yuan, Jin; Chen, Jia-qi; Zhou, Shi-you; Wang, Zhi-chong; Huang, Ting; Gu, Jian-jun; Shao, Ying-feng
2009-02-01
To explore the clinical value, and the management of complications, of transplantation of a titanium-skirt compound keratoprosthesis for eyes with severe corneal blindness. This was a retrospective case series study. Nine eyes of 9 male patients, aged 28 to 52 years, underwent permanent keratoprosthesis transplantation in Zhongshan Ophthalmic Center from March 2002 to June 2005. All patients had had corneal lesions in both eyes for 1.5 to 5.0 years. Among the 9 treated eyes, 6 had severe vascularization after alkali burns and 3 had explosive injuries. Light perception remained in all patients before surgery; however, 2 eyes had only questionable orientation to light perception. Surgical management was divided into two stages. In the first stage, transplantation of the titanium-skirt compound keratoprosthesis was performed, and the explant was reinforced with autologous auricular cartilage and Tenon's capsule. The second stage was performed 5 to 6 months later, in which the membrane in front of the keratoprosthesis was cut. After surgery, visual acuity, visual field, intraocular pressure, and the retina were examined; complications were noted and managed. All treated eyes were followed up for 1 to 3 years. After treatment, 7 eyes emerged from blindness with uncorrected visual acuity of 20/200 (0.1), and 2 of them achieved corrected visual acuity of 20/30 (0.6). The two eyes with questionable orientation to light perception before treatment gained uncorrected visual acuities of 4/200 (0.02) and 8/200 (0.04), respectively. Complications included 5 cases of recurrent frontal membrane of the keratoprosthesis, one back membrane of the keratoprosthesis, and one case of limited corneal melting. Complications were controlled by the corresponding treatments, such as membrane resection for the recurrent frontal membrane, curettage under the microscope for the back membrane, and reinforcement with acellular dermis for corneal melting. All keratoprostheses remained in situ, and no rejection or leakage of aqueous humor occurred. Transplantation of a keratoprosthesis is effective for eyes with severe corneal blindness. Combination with autologous auricular cartilage and Tenon's capsule reinforcement may reduce complications and improve the biocompatibility of the keratoprosthesis.
NASA Astrophysics Data System (ADS)
Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.
2018-02-01
Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US among patients with diabetes, so early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in primary care settings. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, thermal sensitivity, etc. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity <50 mK) as one unit: the right and left cameras record videos of the right and left foot, respectively. A compatible embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels at 8 frames per second. We compared the temperature measurements of feet obtained using the low-cost camera against the gold-standard, high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in temperature measurements between the cameras was calculated for each subject, and the results show that the difference between the two cameras' measurements (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for detecting early signs of DPN in under-served and rural clinics.
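A paired comparison of per-subject temperatures is the natural test for this design; a sketch with synthetic values (the study's real data are not reproduced here), assuming SciPy's paired t-test:

```python
import numpy as np
from scipy import stats

# Paired comparison of foot temperatures from the low-cost and reference
# cameras, one mean temperature per subject per camera. Values below are
# synthetic; the study reports a mean difference of 0.4 with p = 0.2
# over 12 subjects.
low_cost = np.array([30.1, 31.4, 29.8, 30.6, 32.0, 30.9,
                     29.5, 31.1, 30.3, 31.7, 30.0, 30.8])
reference = low_cost - np.random.default_rng(1).normal(0.4, 1.0, 12)
t, p = stats.ttest_rel(low_cost, reference)
print(t, p)   # p > 0.05 -> difference not statistically significant
```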
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which added moving parts, size, mass, and power, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a FOV of 92°, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
Design of integrated eye tracker-display device for head mounted systems
NASA Astrophysics Data System (ADS)
David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.
2009-08-01
We propose an eye tracker/display system based on a novel dual-function device termed the ETD, which allows the eye tracker and the display to share optical paths, with on-chip processing. The proposed ETD design is based on a CMOS chip combining Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near-infrared (NIR) Active Pixel Sensor imager. In eye-tracking operation, the device captures the NIR light back-reflected from the eye's retina; the retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and display drivers, from the photo-charges generated in the substrate. The use of the ETD in the HMD design enables a very compact system suitable for smart goggle applications. A preliminary optical, electronic, and digital design of the goggle and its associated ETD chip and digital control are presented.
Technical and instrumental prerequisites for single-port laparoscopic solo surgery: State of art
Kim, Say-June; Lee, Sang Chul
2015-01-01
With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of the human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provide fixed, stable operative images that are under the control of the operator. SPSS therefore primarily benefits from preserving the operator's eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS necessitates more actions than surgery with a human assistant, these difficulties seem to be easily overcome by the more stable operative images and the reduced need for lens cleaning and camera repositioning. When an operation is expected to be difficult and demanding, the SPSS process can be assisted by the addition of another instrument holder besides the camera holder. PMID:25914453
Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung
2016-08-31
Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth of field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers may also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers should be able to implement an optimal gaze tracking system more easily. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.
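Viewing angle and DOF trade off through ordinary lens geometry. A sketch using the standard hyperfocal-distance formulas with assumed lens parameters (all values hypothetical, not from the paper):

```python
# Depth of field for a candidate gaze-tracking camera lens (thin-lens
# approximation with the standard hyperfocal formulas).
def dof(f_mm, f_number, focus_mm, coc_mm=0.015):
    h = f_mm ** 2 / (f_number * coc_mm) + f_mm          # hyperfocal distance
    near = focus_mm * (h - f_mm) / (h + focus_mm - 2 * f_mm)
    far = focus_mm * (h - f_mm) / (h - focus_mm) if focus_mm < h else float("inf")
    return near, far

print(dof(f_mm=12.0, f_number=2.8, focus_mm=700.0))     # lens focused at 70 cm
```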
Geometrical theory to predict eccentric photorefraction intensity profiles in the human eye
NASA Astrophysics Data System (ADS)
Roorda, Austin; Campbell, Melanie C. W.; Bobier, W. R.
1995-08-01
In eccentric photorefraction, light returning from the retina of the eye is photographed by a camera focused on the eye's pupil. We use a geometrical model of eccentric photorefraction to generate intensity profiles across the pupil image. The intensity profiles for three different monochromatic aberration functions induced in a single eye are predicted and show good agreement with the measured eccentric photorefraction intensity profiles. A directional reflection from the retina is incorporated into the calculation. Intensity profiles for symmetric and asymmetric aberrations are generated and measured. The latter profile shows a dependency on the source position and the meridian. The magnitude of the effect of thresholding on measured pattern extents is predicted. Monochromatic aberrations in human eyes will cause deviations in the eccentric photorefraction measurements from traditional crescents caused by defocus and may cause misdiagnoses of ametropia or anisometropia. Our results suggest that measuring refraction along the vertical meridian is preferred for screening studies with the eccentric photorefractor.
ERIC Educational Resources Information Center
Lewis, Scott M.
1985-01-01
"Io," one of four satellites of Jupiter, orbits its mother planet in roughly the same plane as Earth orbits the sun. Guidelines for collecting data about Io using a reflecting telescope, 35mm camera, and adapter are presented. A computer program used in studying Io's maximum distance from Jupiter is available. (DH)
Application of a wide-field phantom eye for optical coherence tomography and reflectance imaging.
Corcoran, Anthony; Muyo, Gonzalo; van Hemert, Jano; Gorman, Alistair; Harvey, Andrew R
2015-12-15
Optical coherence tomography (OCT) and reflectance imaging are used in clinical practice to measure the thickness and transverse dimensions of retinal features. The recent trend towards increasing the field of view (FOV) of these devices has led to an increasing significance of the optical aberrations of both the human eye and the device. We report the design, manufacture, and application of the first phantom eye that reproduces the off-axis optical characteristics of the human eye and allows the performance assessment of wide-field ophthalmic devices. We base our design and manufacture on the wide-field schematic eye [Navarro, R., J. Opt. Soc. Am. A, 1985, 2] as an accurate proxy to the human eye and enable assessment of ophthalmic imaging performance for a [Formula: see text] external FOV. We used multi-material 3D-printed retinal targets to assess the imaging performance of the following ophthalmic instruments: the Optos 200Tx, Heidelberg Spectralis, Zeiss FF4 fundus camera, and Optos OCT SLO, and use the phantom to provide insight into some of the challenges of wide-field OCT.
A Web Browsing System by Eye-gaze Input
NASA Astrophysics Data System (ADS)
Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru
We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. The system utilizes a personal computer and a home video camera to detect eye gaze under natural light. It detects both vertical and horizontal eye gaze by simple image analysis and does not require special image-processing units or sensors. We also developed a platform for eye-gaze input based on this system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of the eye-gaze input platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by function and arranged hierarchically; users select the desired function by switching indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons, and edit boxes, and stores them; in other words, the mouse cursor skips to candidate input objects. This enables web browsing at a faster pace.
Custer, T.W.; Kannan, K.; Tao, L.; Yun, S.-H.; Trowbridge, A.
2010-01-01
Archived Great Blue Heron (Ardea herodias) eggs (N = 16) collected in 1993 from three colonies on the Mississippi River in Minnesota were analyzed in 2007 for perfluorinated compounds (PFCs) and polybrominated diphenyl ethers (PBDEs). One of the three colonies, Pig's Eye, was located near a presumed source of PFCs. Based on a multivariate analysis, the pattern of nine PFC concentrations differed significantly between Pig's Eye and the upriver (P = 0.002) and downriver (P = 0.02) colonies; but not between the upriver and downriver colonies (P = 0.25). Mean concentrations of perfluorooctane sulfonate (PFOS), a major PFC compound, were significantly higher at the Pig's Eye colony (geometric mean = 940 ng/g wet weight) than at upriver (60 ng/g wet weight) and downriver (131 ng/g wet weight) colonies. Perfluorooctane sulfonate concentrations from the Pig's Eye colony are among the highest reported in bird eggs. Concentrations of PFOS in Great Blue Heron eggs from Pig's Eye were well below the toxicity thresholds estimated for Bobwhite Quail (Colinus virginianus) and Mallards (Anas platyrhynchos), but within the toxicity threshold estimated for White Leghorn Chickens (Gallus domesticus). The pattern of six PBDE congener concentrations did not differ among the three colonies (P = 0.08). Total PBDE concentrations, however, were significantly greater (P = 0.03) at Pig's Eye (geometric mean = 142 ng/g wet weight) than the upriver colony (13 ng/g wet weight). Polybrominated diphenyl ether concentrations in two of six Great Blue Heron eggs from the Pig's Eye colony were within levels associated with altered reproductive behavior in American Kestrels (Falco sparverius).
Barnacle Bill in Super Resolution from Insurance Panorama
NASA Technical Reports Server (NTRS)
1998-01-01
Barnacle Bill is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help address questions about the texture of this rock and what it might tell us about its mode of origin.
This view of Barnacle Bill was produced by combining the 'Insurance Pan' frames taken while the IMP camera was still in its stowed position on sol 2. The composite color frames that make up this anaglyph were produced for both the right and left eyes of the IMP. The right-eye composite consists of five frames taken with different color filters; the left-eye composite consists of a single frame. The resultant image from each eye was enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than any individual frame would be. These panchromatic frames were then colorized with the red-, green- and blue-filtered images from the same sequence, and the color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left and right color composites, assigning the left-eye view to the red color plane and the right-eye view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on a computer monitor, or in color print form, by wearing red-blue 3-D glasses. Mars Pathfinder is the second mission in NASA's Discovery program of low-cost spacecraft with highly focused science goals.
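For readers who want to reproduce the enlargement, co-add and channel-assignment steps described above, here is a minimal NumPy/Pillow sketch using synthetic arrays in place of the IMP frames (which are not reproduced here). The upsampling factor and channel assignments follow the caption; the bilinear resampling filter is an assumption, since the caption specifies only that Adobe Photoshop was used.

```python
# Sketch of the two steps in the caption: (1) enlarge each frame and
# average them into a super-resolution panchromatic frame; (2) build an
# anaglyph by putting the left eye in the red plane and the right eye in
# the green and blue planes (cyan).
import numpy as np
from PIL import Image

def coadd_upsampled(frames, factor=5):
    """Enlarge each frame by `factor` (500% for factor=5) and average them."""
    enlarged = [
        np.asarray(
            Image.fromarray(f).resize(
                (f.shape[1] * factor, f.shape[0] * factor), Image.BILINEAR),
            dtype=np.float32)
        for f in frames
    ]
    return np.mean(enlarged, axis=0).astype(np.uint8)

def make_anaglyph(left_gray, right_gray):
    """Left eye -> red plane; right eye -> green and blue planes (cyan)."""
    return np.dstack([left_gray, right_gray, right_gray])

# Synthetic stand-ins: five right-eye frames, one left-eye frame
rng = np.random.default_rng(0)
right_frames = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(5)]
left_frames = [rng.integers(0, 256, (64, 64), dtype=np.uint8)]

anaglyph = make_anaglyph(coadd_upsampled(left_frames), coadd_upsampled(right_frames))
print(anaglyph.shape)  # (320, 320, 3)
```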
Galaxies Gather at Great Distances
NASA Technical Reports Server (NTRS)
2006-01-01
[Figures removed; panel titles: Distant Galaxy Cluster Infrared Survey Poster; Bird's Eye View Mosaic; Bird's Eye View Mosaic with Clusters; 9.1 Billion Light-Years; 8.7 Billion Light-Years; 8.6 Billion Light-Years]
Astronomers have discovered nearly 300 galaxy clusters and groups, including almost 100 located 8 to 10 billion light-years away, using the space-based Spitzer Space Telescope and the ground-based Mayall 4-meter telescope at Kitt Peak National Observatory in Tucson, Ariz. The new sample represents a six-fold increase in the number of known galaxy clusters and groups at such extreme distances, and will allow astronomers to systematically study massive galaxies two-thirds of the way back to the Big Bang. A mosaic portraying a bird's eye view of the field in which the distant clusters were found is shown at upper left. It spans a region of sky 40 times larger than that covered by the full moon as seen from Earth. Thousands of individual images from Spitzer's infrared array camera instrument were stitched together to create this mosaic. The distant clusters are marked with orange dots. Close-up images of three of the distant galaxy clusters are shown in the adjoining panels. The clusters appear as a concentration of red dots near the center of each image. These images reveal the galaxies as they were over 8 billion years ago, since that's how long their light took to reach Earth and Spitzer's infrared eyes. These pictures are false-color composites, combining ground-based optical images captured by the Mosaic-I camera on the Mayall 4-meter telescope at Kitt Peak with infrared pictures taken by Spitzer's infrared array camera. Blue and green represent visible light at wavelengths of 0.4 microns and 0.8 microns, respectively, while red indicates infrared light at 4.5 microns. Kitt Peak National Observatory is part of the National Optical Astronomy Observatory in Tucson, Ariz.
Breccia-Conglomerate Rocks on Lower Mount Sharp, Mars Stereo
2016-08-19
This stereo scene from the Mast Camera (Mastcam) on NASA's Curiosity Mars Rover shows boulders composed, in part, of pebble-size (0.2 to 2.6 inches, or 0.5 to 6.5 centimeters across) and larger rock fragments. The size and shape of the fragments provide clues to the origins of these boulders. This image is an anaglyph that appears three dimensional when viewed through red-blue glasses with the red lens on the left. The separate right-eye and left-eye views combined into the stereo version are Figure 1 and Figure 2. Mastcam's right-eye camera has a telephoto lens, with focal length of 100 millimeters. The left-eye camera provides a wider view, with a 34-millimeter lens. These images were taken on July 22, 2016, during the 1,408th Martian day, or sol, of Curiosity's work on Mars. For scale, the relatively flat rock at left is about 5 feet (1.5 meters) across. The rock in the foreground at right is informally named "Balombo." The group of boulders is at a site called "Bimbe." The Curiosity team chose to drive the rover to Bimbe to further understand patches of boulders first identified from orbit and seen occasionally on the rover's traverse. The boulders at Bimbe consist of multiple rock types. Some include pieces, or "clasts," of smaller, older rock cemented together, called breccias or conglomerates. The shapes of the inclusion clasts -- whether they are rounded or sharp-edged -- may indicate how far the clasts were transported, and by what processes. Breccias have more angular clasts, while conglomerates have more rounded clasts. As is clear by looking at these boulders, they contain both angular and rounded clasts, leading to some uncertainty about how they formed. Conglomerate rocks such as "Hottah" were inspected near Curiosity's landing site and interpreted as part of an ancient streambed. Breccias are generally formed by consolidation of fragments under pressure. On Mars such pressure might come from crater-forming impact, or by deep burial and exhumation. http://photojournal.jpl.nasa.gov/catalog/PIA20836
Research on the liquid crystal adaptive optics system for human retinal imaging
NASA Astrophysics Data System (ADS)
Zhang, Lei; Tong, Shoufeng; Song, Yansong; Zhao, Xin
2013-12-01
The retina is the only place where blood vessels in the human body can be observed directly. Many diseases whose early symptoms are not obvious can be diagnosed by observing changes in the distal microvasculature. To obtain high-resolution human retinal images, an adaptive optics system for correcting the aberrations of the human eye was designed using a Shack-Hartmann wavefront sensor and a liquid crystal spatial light modulator (LCSLM). For a subject eye with 8 m⁻¹ (8 D) of myopia, the wavefront error was reduced to 0.084 λ PV and 0.12 λ RMS after adaptive optics (AO) correction, reaching the diffraction limit. The results show that the LCSLM-based AO system can efficiently correct the aberrations of the human eye, allowing otherwise blurred photoreceptor cells to be imaged clearly on a CCD camera.
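The PV and RMS values quoted above are standard figures of merit for a residual wavefront map. A minimal sketch of how they are computed, with a synthetic map standing in for Shack-Hartmann measurements (not the authors' data):

```python
# Peak-to-valley (PV) and root-mean-square (RMS) wavefront error, in units
# of the wavelength, for a residual wavefront map. The map below is
# synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
wavefront_waves = rng.normal(0.0, 0.05, (32, 32))  # residual error in waves

pv = wavefront_waves.max() - wavefront_waves.min()
rms = np.sqrt(np.mean((wavefront_waves - wavefront_waves.mean()) ** 2))
print(f"PV = {pv:.3f} lambda, RMS = {rms:.3f} lambda")
```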
2014-08-04
Like a giant eye for the giant planet, Saturn's great vortex at its north pole appears to stare back at Cassini as NASA's Cassini spacecraft stares at it. Measurements have sized the "eye" at a staggering 1,240 miles (2,000 kilometers) across, with cloud speeds as fast as 330 miles per hour (150 meters per second). For color views of the eye and the surrounding region, see PIA14946 and PIA14944. The image was taken with the Cassini spacecraft's narrow-angle camera on April 2, 2014 using a combination of spectral filters that preferentially admit wavelengths of near-infrared light centered at 748 nanometers. The view was obtained at a distance of approximately 1.4 million miles (2.2 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 43 degrees. Image scale is 8 miles (13 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18273
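The quoted image scale can be sanity-checked against the quoted range: pixel scale is approximately range multiplied by the camera's per-pixel angular resolution (IFOV). The roughly 6 microradian-per-pixel IFOV used below is an assumed value for Cassini's narrow-angle camera, not taken from this caption.

```python
# Back-of-the-envelope check: pixel scale ~= range * IFOV.
KM_PER_MILE = 1.609344

range_km = 2.2e6   # distance from Saturn quoted in the caption
ifov_rad = 6.0e-6  # assumed narrow-angle camera IFOV, radians per pixel

scale_km = range_km * ifov_rad
print(f"{scale_km:.1f} km/pixel (~{scale_km / KM_PER_MILE:.1f} mi/pixel)")
# ~13.2 km/pixel, consistent with the caption's 8 miles (13 km) per pixel
```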
Lightweight helmet-mounted eye movement measurement system
NASA Technical Reports Server (NTRS)
Barnes, J. A.
1978-01-01
The helmet-mounted eye movement measuring system weighs 1,530 grams; the present aviators' helmet in standard form with the visor weighs 1,545 grams. The optical head is a standard NAC Eye-Mark, mounted on a magnesium yoke that is in turn attached to a slide cam mounted on the flight helmet. The slide cam allows the eye-to-optics distance to be adjusted easily and then secured so that the system remains in calibration. The yoke and slide cam are designed so that, in an emergency, the subject can move the optical head forward and upward to a stowed and locked position atop the helmet; this feature was necessary for flight safety. The television camera used in the system is a solid-state General Electric TN-2000 with a charge-injection device imager in place of a vidicon.