Science.gov

Sample records for active stereo vision

  1. Active stereo vision routines using PRISM-3

    NASA Astrophysics Data System (ADS)

    Antonisse, Hendrick J.

    1992-11-01

    This paper describes work in progress on a set of visual routines and supporting capabilities implemented on the PRISM-3 real-time vision system. The routines are used in an outdoor robot retrieval task. The task requires the robot to locate a donor agent -- a Hero2000 -- which holds the object to be retrieved, to navigate to the donor, to accept the object from the donor, and to return to its original location. The routines described here will form an integral part of the navigation and wide-area search tasks. Active perception is exploited to locate the donor using real-time stereo ranging directed by a pan/tilt/verge mechanism. A framework for orchestrating visual search has been implemented and is briefly described.

  2. Robust active stereo vision using Kullback-Leibler divergence.

    PubMed

    Wang, Yongchang; Liu, Kai; Hao, Qi; Wang, Xianwang; Lau, Daniel L; Hassebrook, Laurence G

    2012-03-01

    Active stereo vision is a method of 3D surface scanning involving the projecting and capturing of a series of light patterns where depth is derived from correspondences between the observed and projected patterns. In contrast, passive stereo vision reveals depth through correspondences between textured images from two or more cameras. By employing a projector, active stereo vision systems find correspondences between two or more cameras, without ambiguity, independent of object texture. In this paper, we present a hybrid 3D reconstruction framework that supplements projected pattern correspondence matching with texture information. The proposed scheme consists of using projected pattern data to derive initial correspondences across cameras and then using texture data to eliminate ambiguities. Pattern modulation data are then used to estimate error models from which Kullback-Leibler divergence refinement is applied to reduce misregistration errors. Using only a small number of patterns, the presented approach reduces measurement errors versus traditional structured light and phase matching methodologies while being insensitive to gamma distortion, projector flickering, and secondary reflections. Experimental results demonstrate these advantages in terms of enhanced 3D reconstruction performance in the presence of noise, deterministic distortions, and conditions of texture and depth contrast.
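
    A minimal sketch of the Kullback-Leibler refinement idea: with per-pixel Gaussian error models estimated from the pattern modulation data, the divergence between the observed error model and each candidate correspondence's model has a closed form and can be used to keep the most consistent match. The function names, the Gaussian assumption, and the numbers below are illustrative, not the authors' published implementation.

    ```python
    import numpy as np

    def gaussian_kl(mu_p, var_p, mu_q, var_q):
        """Closed-form KL(P || Q) for two 1-D Gaussian distributions."""
        return (np.log(np.sqrt(var_q / var_p))
                + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q) - 0.5)

    # Hypothetical error model observed at one pixel, plus the models
    # predicted by three candidate correspondences; keep the candidate
    # whose model diverges least from the observation.
    mu_obs, var_obs = 0.02, 1e-4
    candidates = [(0.00, 1e-4), (0.15, 4e-4), (0.03, 2e-4)]
    divs = [gaussian_kl(mu_obs, var_obs, m, v) for m, v in candidates]
    print("selected candidate:", int(np.argmin(divs)))
    ```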

  3. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.

  4. Panoramic stereo sphere vision

    NASA Astrophysics Data System (ADS)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision can "see" in all directions of the observation space, scene depth information is lost in the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a specially combined fish-eye lens module, capable of producing 3D coordinate information for the whole observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and one static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model, and parameter calibration method in this paper. Video surveillance, autonomous robot navigation, virtual reality, driving assistance, multiple-maneuvering-target tracking, automatic mapping of environments, and attitude estimation are some of the applications that will benefit from PSSV.

  5. Research on stereo vision odometry

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoling; Zhang, Baofeng; Tian, Xiuzhen

    2010-11-01

    A stereo visual odometer for a vision-based navigation system is proposed in this paper. The stereo visual odometer obtains motion data to implement position and attitude estimation for an ALV (Autonomous Land Vehicle). Two key technologies in the stereo visual odometer are discussed. The first is using SIFT (Scale Invariant Feature Transform) to extract suitable features, match point pairs between the two views, and track the features corresponding to the same object point across consecutive frames. The second is using this matching and tracking to obtain the 3-D coordinates of those object points in each frame and to compute the motion parameters by motion estimation. Experiments were conducted in an unknown outdoor environment. The results show that the stereo visual odometer is accurate and that the measurement error does not increase as the distance traveled increases. It can be used as an important supplement to a conventional odometer.
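
    The second step described above, recovering rotation and translation from matched 3-D feature points in consecutive frames, is commonly solved with an SVD-based least-squares fit (the Kabsch algorithm). The abstract does not name the paper's exact estimator, so the sketch below is illustrative:

    ```python
    import numpy as np

    def rigid_motion(P, Q):
        """Least-squares R, t such that Q ~ R @ P + t, for Nx3 matched points."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T          # D guards against a reflection solution
        return R, cq - R @ cp

    # Synthetic check: rotate and translate a cloud, then recover the motion.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(100, 3))
    a = np.deg2rad(10.0)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0,          0,         1]])
    t_true = np.array([0.5, -0.2, 1.0])
    R, t = rigid_motion(P, P @ R_true.T + t_true)
    print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
    ```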

  6. Neural architectures for stereo vision

    PubMed Central

    2016-01-01

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269604

  7. Neural architectures for stereo vision.

    PubMed

    Parker, Andrew J; Smith, Jackson E T; Krug, Kristine

    2016-06-19

    Stereoscopic vision delivers a sense of depth based on binocular information but additionally acts as a mechanism for achieving correspondence between patterns arriving at the left and right eyes. We analyse quantitatively the cortical architecture for stereoscopic vision in two areas of macaque visual cortex. For primary visual cortex V1, the result is consistent with a module that is isotropic in cortical space with a diameter of at least 3 mm in surface extent. This implies that the module for stereo is larger than the repeat distance between ocular dominance columns in V1. By contrast, in the extrastriate cortical area V5/MT, which has a specialized architecture for stereo depth, the module for representation of stereo is about 1 mm in surface extent, so the representation of stereo in V5/MT is more compressed than V1 in terms of neural wiring of the neocortex. The surface extent estimated for stereo in V5/MT is consistent with measurements of its specialized domains for binocular disparity. Within V1, we suggest that long-range horizontal, anatomical connections form functional modules that serve both binocular and monocular pattern recognition: this common function may explain the distortion and disruption of monocular pattern vision observed in amblyopia. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269604

  8. Stereo 3-D Vision in Teaching Physics

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2012-01-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The…

  9. Stereo 3-D Vision in Teaching Physics

    NASA Astrophysics Data System (ADS)

    Zabunov, Svetoslav

    2012-03-01

    Stereo 3-D vision is a technology used to present images on a flat surface (screen, paper, etc.) and at the same time to create the notion of three-dimensional spatial perception of the viewed scene. A great number of physical processes are much better understood when viewed in stereo 3-D vision compared to standard flat 2-D presentation. The current paper describes the modern stereo 3-D technologies that are applicable to various tasks in teaching physics in schools, colleges, and universities. Examples of stereo 3-D simulations developed by the author can be observed online.

  10. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors based on infrared rays and ultrasound, a robot can handle urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more capable perception. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed a recognition algorithm for the distance and gradient of the environment using a stereo matching process.

  11. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
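
    A toy version of the core pipeline here — bandpass filtering (a difference of Gaussians standing in for one Laplacian pyramid level) followed by windowed least-squares (SSD) correlation over a disparity range — can be written in a few lines of SciPy. The window size and disparity range are illustrative, not the patent's values, and image borders are left unhandled:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def bandpass(img, s1=1.0, s2=2.0):
        """Difference-of-Gaussians approximation of one pyramid level."""
        return gaussian_filter(img, s1) - gaussian_filter(img, s2)

    def ssd_disparity(left, right, max_d=16, win=7):
        """Per-pixel disparity minimizing the windowed sum of squared differences."""
        L, R = bandpass(left), bandpass(right)
        h, w = L.shape
        cost = np.zeros((max_d, h, w))
        for d in range(max_d):
            diff = np.zeros_like(L)
            diff[:, d:] = (L[:, d:] - R[:, :w - d]) ** 2
            cost[d] = uniform_filter(diff, size=win)   # windowed SSD
        return cost.argmin(axis=0)

    left = np.random.default_rng(0).random((64, 64))
    right = np.roll(left, -4, axis=1)                  # known 4-pixel shift
    print(np.median(ssd_disparity(left, right)))       # ~4
    ```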

  12. Near real-time stereo vision system

    NASA Astrophysics Data System (ADS)

    Matthies, Larry H.; Anderson, Charles H.

    1991-12-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.

  13. Cooperative and asynchronous stereo vision for dynamic vision sensors

    NASA Astrophysics Data System (ADS)

    Piatkowska, E.; Belbachir, A. N.; Gelautz, M.

    2014-05-01

    Dynamic vision sensors (DVSs) encode visual input as a stream of events generated upon relative light intensity changes in the scene. These sensors have the advantage of allowing simultaneously high temporal resolution (better than 10 µs) and wide dynamic range (>120 dB) with sparse data representation, which is not possible with clocked vision sensors. In this paper, we focus on the task of stereo reconstruction. The spatiotemporal and asynchronous nature of the data provided by the sensor imposes a different stereo reconstruction approach from the one applied for synchronous frame-based cameras. We propose to model the event-driven stereo matching by a cooperative network (Marr and Poggio 1976 Science 194 283-7). The history of recent activity in the scene is stored in the network, which serves as spatiotemporal context used in disparity calculation for each incoming event. The network constantly evolves in time as events are generated. In our work, not only is the spatiotemporal aspect of the data preserved but the matching is also performed asynchronously. The results of the experiments show that the proposed approach is well adapted to DVS data and can be successfully used for disparity calculation.
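
    The cooperative-network principle from Marr and Poggio — excitatory support among neighboring matches at the same disparity, inhibition among competing disparities along a line of sight — can be sketched as a synchronous toy iteration on a match volume. This illustrates only the principle; the paper's contribution is an asynchronous, event-driven version of it:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def cooperative_step(C, C0, excit=2.0, inhib=1.0, win=3):
        """One update of a (disparity, height, width) binary match volume."""
        E = uniform_filter(C, size=(1, win, win))      # same-disparity support
        I = C.sum(axis=0, keepdims=True) - C           # competing disparities
        return ((excit * E - inhib * I + C0) > 0.5).astype(float)

    # Toy volume: sparse noise everywhere, dense support at disparity 2.
    rng = np.random.default_rng(1)
    C0 = (rng.random((8, 32, 32)) > 0.9).astype(float)
    C0[2] = np.maximum(C0[2], (rng.random((32, 32)) > 0.3).astype(float))
    C = C0.copy()
    for _ in range(5):
        C = cooperative_step(C, C0)
    print(C.sum(axis=(1, 2)))   # support concentrates at disparity 2
    ```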

  14. Detecting personnel around UGVs using stereo vision

    NASA Astrophysics Data System (ADS)

    Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.

    2008-04-01

    Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.

  15. Forward Obstacle Detection System by Stereo Vision

    NASA Astrophysics Data System (ADS)

    Iwata, Hiroaki; Saneyoshi, Keiji

    Forward obstacle detection is needed to prevent car accidents. We have developed a forward obstacle detection system that achieves good detectability and distance accuracy using only stereo vision. The system runs in real time on a stereo processing system based on a Field-Programmable Gate Array (FPGA). Road surfaces are detected so that the space to drive in can be limited, and a smoothing filter is applied; owing to these, the accuracy of distance is improved. In experiments, the system detected forward obstacles 100 m away, its distance error up to 80 m was less than 1.5 m, and it immediately detected cutting-in objects.
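
    The reported accuracy is consistent with the standard stereo range-error relation Z = fB/d, so ΔZ ≈ Z²·Δd/(fB). A quick check with illustrative parameters (ours, chosen only to roughly reproduce the reported 1.5 m error at 80 m; the authors' focal length and baseline are not given in the abstract):

    ```python
    # Range from disparity: Z = f*B/d, so a disparity error delta_d maps to
    # a range error of roughly Z**2 * delta_d / (f*B).
    f_px = 1500.0     # focal length in pixels (assumed)
    B = 0.7           # baseline in metres (assumed)
    delta_d = 0.25    # sub-pixel disparity error (assumed)

    for Z in (20.0, 40.0, 80.0):
        dZ = Z ** 2 * delta_d / (f_px * B)
        print(f"Z = {Z:5.1f} m  ->  range error ~ {dZ:.2f} m")   # 0.10, 0.38, 1.52
    ```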

  16. Stereo Vision: The Haves and Have-Nots.

    PubMed

    Hess, Robert F; To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R

    2015-06-01

    Animals with front-facing eyes benefit from a substantial overlap in the visual fields of each eye, and devote specialized brain processes to using the horizontal spatial disparities produced by viewing the same object with two laterally placed eyes to derive depth or 3-D stereo information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background and improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case; 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear, but it is likely to be neural in origin and reversible. PMID:27433314

  17. Stereo vision enhances the learning of a catching skill.

    PubMed

    Mazyn, Liesbeth I N; Lenoir, Matthieu; Montagne, Gilles; Delaey, Christophe; Savelsbergh, Geert J P

    2007-06-01

    The aim of this study was to investigate the contribution of stereo vision to the acquisition of a natural interception task. Poor catchers with good (N = 8; Stereo+) and weak (N = 6; Stereo-) stereo vision participated in an intensive training program spread over 2 weeks, during which they caught over 1,400 tennis balls in a pre-post-retention design. While the Stereo+ group improved from a catching percentage of 18% to 59%, catchers in the Stereo- group did not significantly improve (from 10% to 31%), a progression indistinguishable from that of a control group (N = 9) that did not practice at all. These results indicate that the development and use of compensatory cues for depth perception in people with weak stereopsis is insufficient to successfully deal with interceptions under high temporal constraints, and that this disadvantage cannot be fully attenuated by specific and intensive training. PMID:17487478

  18. Stereo Vision: The Haves and Have-Nots

    PubMed Central

    To, Long; Zhou, Jiawei; Wang, Guangyu; Cooperstock, Jeremy R.

    2015-01-01

    Animals with front-facing eyes benefit from a substantial overlap in the visual fields of each eye, and devote specialized brain processes to using the horizontal spatial disparities produced by viewing the same object with two laterally placed eyes to derive depth or 3-D stereo information. This provides the advantage of breaking the camouflage of objects in front of a similarly textured background and improves hand-eye coordination for grasping objects close at hand. It is widely thought that about 5% of the population have a lazy eye and lack stereo vision, so it is often supposed that most of the population (95%) have good stereo abilities. We show that this is not the case; 68% have good to excellent stereo (the haves) and 32% have moderate to poor stereo (the have-nots). Why so many people lack good 3-D stereo vision is unclear, but it is likely to be neural in origin and reversible. PMID:27433314

  19. The study of stereo vision technique for the autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Li, Pei; Wang, Xi; Wang, Jiang-feng

    2015-08-01

    Stereo vision technology using two or more cameras can recover 3D information from the field of view. It can effectively help the autonomous navigation system of an unmanned vehicle judge pavement conditions within the field of view and measure obstacles on the road. In this paper, stereo vision technology for obstacle-avoidance measurement on an autonomous vehicle is studied, and the key techniques are analyzed and discussed. The system hardware is built and the software debugged, and the measurement performance is illustrated with measured data. Experiments show that stereo vision can effectively reconstruct the 3D structure within the field of view and provide the basis for pavement-condition judgment. Compared with the navigation radar used in unmanned-vehicle measurement systems, the stereo vision system has advantages such as low cost and useful measurement range, so it has a good application prospect.
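
    A sketch of such a pipeline with OpenCV: a rectified pair goes in, a disparity map and metric 3-D points come out. The matcher settings, the synthetic images, and the disparity-to-depth matrix Q below are illustrative; in practice Q would come from the system's own calibration:

    ```python
    import numpy as np
    import cv2

    # Synthetic rectified pair with a uniform 8-pixel disparity, standing in
    # for calibrated, rectified camera images.
    rng = np.random.default_rng(7)
    right = (rng.random((240, 320)) * 255).astype(np.uint8)
    left = np.roll(right, 8, axis=1)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                 blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5,
                                 uniquenessRatio=10)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

    # Q is the disparity-to-depth matrix normally given by cv2.stereoRectify;
    # assumed intrinsics here.
    f, B, cx, cy = 1200.0, 0.35, 160.0, 120.0
    Q = np.array([[1, 0, 0, -cx],
                  [0, 1, 0, -cy],
                  [0, 0, 0,  f],
                  [0, 0, -1.0 / B, 0]], dtype=np.float32)
    points3d = cv2.reprojectImageTo3D(disp, Q)      # HxWx3 metric coordinates
    print(float(np.median(disp[disp > 0])))         # ~8
    ```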

  1. Stereo vision for spacecraft formation flying relative navigation

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Han, Long

    2007-11-01

    This paper first describes the principles of stereo vision and their application in spacecraft formation flying, and derives the formulations for the observations. A Kalman-filter-enhanced vision system for spacecraft formation flying relative navigation is then discussed. Finally, simulated evaluations of the proposed measurement are presented, showing improved stability and precision.

  2. A stereo model based upon mechanisms of human binocular vision

    NASA Technical Reports Server (NTRS)

    Griswold, N. C.; Yeh, C. P.

    1986-01-01

    A model for stereo vision, which is based on the human-binocular vision system, is proposed. Data collected from studies of neurophysiology of the human binocular system are discussed. An algorithm for the implementation of this stereo vision model is derived. The algorithm is tested on computer-generated and real scene images. Examples of a computer-generated image and a grey-level image are presented. It is noted that the proposed method is computationally efficient for depth perception, and the results indicate accuracies that are noise tolerant.

  3. Static stereo vision depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.

  4. Passive Night Vision Sensor Comparison for Unmanned Ground Vehicle Stereo Vision Navigation

    NASA Technical Reports Server (NTRS)

    Owens, Ken; Matthies, Larry

    2000-01-01

    One goal of the "Demo III" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 micrometer cooled FLIR, 8-12 micrometer cooled FLIR, 8-12 micrometer uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.

  5. The advantages of stereo vision in a face recognition system

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2014-06-01

    Humans can recognize a face with binocular vision, while computers typically use a single face image. It is known that the performance of face recognition (by a computer) can be improved using the score fusion of multimodal images and multiple algorithms. A question is: Can we apply stereo vision to a face recognition system? We know that human binocular vision has many advantages such as stereopsis (3D vision), binocular summation, and singleness of vision including fusion of binocular images (cyclopean image). For face recognition, a 3D face or 3D facial features are typically computed from a pair of stereo images. In human visual processes, the binocular summation and singleness of vision are similar as image fusion processes. In this paper, we propose an advanced face recognition system with stereo imaging capability, which is comprised of two 2-in-1 multispectral (visible and thermal) cameras and three recognition algorithms (circular Gaussian filter, face pattern byte, and linear discriminant analysis [LDA]). Specifically, we present and compare stereo fusion at three levels (images, features, and scores) by using stereo images (from left camera and right camera). Image fusion is achieved with three methods (Laplacian pyramid, wavelet transform, average); feature fusion is done with three logical operations (AND, OR, XOR); and score fusion is implemented with four classifiers (LDA, k-nearest neighbor, support vector machine, binomial logical regression). The system performance is measured by probability of correct classification (PCC) rate (reported as accuracy rate in this paper) and false accept rate (FAR). The proposed approaches were validated with a multispectral stereo face dataset from 105 subjects. Experimental results show that any type of stereo fusion can improve the PCC, meanwhile reduce the FAR. It seems that stereo image/feature fusion is superior to stereo score fusion in terms of recognition performance. Further score fusion after image
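
    Score-level fusion of the left- and right-camera matchers can be as simple as normalizing the two score vectors to a common range and combining them. The min-max-plus-mean rule below is a toy stand-in; the paper itself uses trained classifiers (LDA, k-nearest neighbor, SVM, binomial logistic regression) for this step:

    ```python
    import numpy as np

    def minmax(s):
        """Map a score vector to [0, 1] so the two cameras are comparable."""
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    # Hypothetical match scores of one probe against a 5-subject gallery.
    scores_left = [12.0, 48.0, 20.0, 15.0, 9.0]
    scores_right = [10.0, 41.0, 30.0, 12.0, 8.0]

    fused = (minmax(scores_left) + minmax(scores_right)) / 2.0
    print("identified subject:", int(np.argmax(fused)))   # subject 1
    ```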

  6. The contribution of stereo vision to one-handed catching.

    PubMed

    Mazyn, Liesbeth I N; Lenoir, Matthieu; Montagne, Gilles; Savelsbergh, Geert J P

    2004-08-01

    Participants with normal (StereoN) and weak (StereoW) stereopsis caught tennis balls under monocular and binocular viewing at three different speed conditions. Monocular or binocular viewing did not affect catching performance in catchers with weak stereopsis, while the StereoN group caught more balls under binocular vision as compared with the monocular condition. These effects were more pronounced with increasing ball speed. Kinematic analysis of the catch partially corroborated these findings. These results indicate that StereoW catchers have not developed a compensatory strategy for information pick-up, and that negative effects of a lack of stereopsis grow larger as temporal constraints become more severe. These findings also support the notion that several monocular and/or binocular information sources can be used in the control of interceptive action. PMID:15221161

  7. Self-supervised learning in cooperative stereo vision correspondence.

    PubMed

    Decoux, B

    1997-02-01

    This paper presents a neural network model of stereoscopic vision in which a process of fusion seeks the correspondence between points of stereo inputs. Stereo fusion is obtained after a self-supervised learning phase, so called because the learning rule is a supervised-learning rule in which the supervisory information is autonomously extracted from the visual inputs by the model. This supervisory information arises from a global property of the potential matches between the points. The proposed neural network, which is of the cooperative type, and the learning procedure are tested with random-dot stereograms (RDS) and feature points extracted from real-world images. The feature points are extracted by a technique based on the use of sigma-pi units. The matching performance and the generalization ability of the model are quantified. The relationship between what has been learned by the network and the constraints used in previous cooperative models of stereo vision is discussed. PMID:9228582

  8. Stereo vision based hand-held laser scanning system design

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Xu, Jun; Wang, Jinming

    2011-11-01

    Although 3D scanning systems are used more and more broadly in many fields, such as computer animation, computer-aided design, and digital museums, a convenient scanning device is too expensive for most people to afford. On the other hand, imaging devices are becoming cheaper, and a stereo vision system with two video cameras costs little. In this paper, a hand-held laser scanning system is designed based on the stereo vision principle. The two video cameras are fixed together and calibrated in advance. The scanned object, with some coded markers attached, is placed in front of the stereo system, and its position and orientation can be changed freely as scanning requires. During scanning, the operator sweeps a line laser source, projecting it onto the object; at the same time, the stereo vision system captures the projected lines and reconstructs their 3D shapes. The coded markers are used to transform the coordinate systems of points scanned under different views. Two methods are used to obtain more accurate results. One is to use NURBS curves to interpolate the sections of the laser lines to obtain accurate central points, with a thin-plate spline approximating the central points; this yields an exact laser central line that guarantees accurate correspondence between the two cameras. The other is to incorporate the constraint of the laser swept plane on the reconstructed 3D curves by a PCA (Principal Component Analysis) algorithm, which gives more accurate results. Some examples are given to verify the system.
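
    The PCA-based constraint amounts to fitting a plane to the reconstructed 3-D curve (the plane normal is the direction of least variance) and projecting the points onto it; a NumPy sketch with synthetic points standing in for a reconstructed laser stripe:

    ```python
    import numpy as np

    def project_to_plane(pts):
        """Fit a plane to Nx3 points by PCA and project the points onto it."""
        c = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - c)
        n = Vt[2]                         # normal = least-variance direction
        return pts - np.outer((pts - c) @ n, n)

    # Noisy points near the plane z = 0.5x + 0.2y.
    rng = np.random.default_rng(2)
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + rng.normal(0, 0.01, 200)
    flat = project_to_plane(np.column_stack([xy, z]))
    print(np.linalg.svd(flat - flat.mean(axis=0))[1][-1])  # ~0: now coplanar
    ```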

  9. Problem-oriented stereo vision quality evaluation complex

    NASA Astrophysics Data System (ADS)

    Sidorchuk, D.; Gusamutdinova, N.; Konovalenko, I.; Ershov, E.

    2015-12-01

    We describe an original low-cost hardware setup for efficient testing of stereo vision algorithms. The method combines a special hardware setup with a mathematical model and is easy to construct and precise in the applications of interest. For a known scene we derive an analytical representation, called the virtual scene. Using a four-point correspondence between the scene and the virtual one, we compute the extrinsic camera parameters and project the virtual scene onto the image plane, which provides the ground truth for the depth map. Another result presented in this paper is a new depth-map quality metric. Its main purpose is to tune stereo algorithms for a particular problem, e.g., obstacle avoidance.

  10. Stereo vision for planetary rovers - Stochastic modeling to near real-time implementation

    NASA Technical Reports Server (NTRS)

    Matthies, Larry

    1991-01-01

    JPL has achieved the first autonomous cross-country robotic traverses to use stereo vision, with all computing onboard the vehicle. This paper describes the stereo vision system, including the underlying statistical model and the details of the implementation. It is argued that the overall approach provides a unifying paradigm for practical domain-independent stereo ranging.

  11. Classification of road sign type using mobile stereo vision

    NASA Astrophysics Data System (ADS)

    McLoughlin, Simon D.; Deegan, Catherine; Fitzgerald, Conor; Markham, Charles

    2005-06-01

    This paper presents a portable mobile stereo vision system designed for the assessment of road signage and delineation (lines and reflective pavement markers or "cat's eyes"). This novel system allows both geometric and photometric measurements to be made on objects in a scene. Global Positioning System technology provides important location data for any measurements made. Using the system it has been shown that road signs can be classified by the nature of their reflectivity. This is achieved by examining the changes in reflected light intensity with changes in range (facilitated by stereo vision). Signs assessed include those made from retro-reflective materials, those made from diffuse reflective materials, and those made from diffuse reflective materials with local illumination. Field-testing results demonstrate the system's ability to classify objects in the scene based on their reflective properties. The paper includes a discussion of a physical model that supports the experimental data.

  12. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in the pilots' preflight preparation, the aircrew can obtain more vivid information about the approach area, improving the aviator's confidence before the flight mission and, accordingly, flight safety. The system is also useful in validating visual flight procedure designs, and it thereby aids flight procedure design.

  13. Stereo vision based automated grasp planning

    SciTech Connect

    Wilhelmsen, K.; Huber, L.; Silva, D.; Grasz, E.; Cadapan, L.

    1995-02-01

    The Department of Energy has a need for treating existing nuclear waste. Hazardous waste stored in old warehouses needs to be sorted and treated to meet environmental regulations. Lawrence Livermore National Laboratory is currently experimenting with automated manipulation of unknown objects for sorting, treating, and detailed inspection. To accomplish these tasks, three existing technologies were expanded to meet the increasing requirements. First, a binocular vision range sensor was combined with a surface modeling system to make virtual images of unknown objects. Then, using the surface model information, stable grasps of the unknown-shaped objects were planned algorithmically utilizing a limited set of robotic grippers. This paper is an expansion of previous work and discusses the grasp planning algorithm.

  14. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereo vision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereo vision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereo vision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereo vision camera array. The algorithm employs a winner-take-all method to fuse disparities in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
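
    The winner-take-all fusion can be sketched by summing the per-pair matching costs (for instance from a left-right and a left-centre pair, an assumed layout) and taking the arg-min disparity; the random cost volumes below are stand-ins for real correlation costs:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    max_d, h, w = 16, 48, 64

    # Stand-in cost volumes (disparity, height, width); lower is better.
    cost_lr = rng.random((max_d, h, w))
    cost_lc = rng.random((max_d, h, w))
    cost_lr[5] -= 1.0      # plant a consistent winner at disparity 5
    cost_lc[5] -= 1.0

    # Winner-take-all on the fused cost: the third camera suppresses
    # ambiguous minima that a single pair might pick.
    disparity = (cost_lr + cost_lc).argmin(axis=0)
    print((disparity == 5).mean())   # 1.0
    ```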

  15. Three-dimensional motion estimation using genetic algorithms from image sequence in an active stereo vision system

    NASA Astrophysics Data System (ADS)

    Dipanda, Albert; Ajot, Jerome; Woo, Sanghyuk

    2003-06-01

    This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereo vision system composed of a camera and a light projector, which illuminates the objects to be analyzed with a pyramid-shaped laser beam. By associating the laser rays with the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface; estimating the motion between two images of the sequence therefore boils down to matching two B-spline surfaces. We cast the matching as an optimization problem and find the optimal solution using Genetic Algorithms. A chromosome is encoded by concatenating six binary-coded parameters: the three angles of rotation and the x-axis, y-axis, and z-axis translations. We define an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances, and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.
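
    The chromosome layout described here — six binary-coded parameters concatenated into one bit string — decodes as below; the bit width per parameter and the parameter ranges are our assumptions:

    ```python
    import numpy as np

    BITS = 10                                               # per parameter (assumed)
    RANGES = [(-np.pi, np.pi)] * 3 + [(-100.0, 100.0)] * 3  # 3 angles, 3 translations

    def decode(chromosome):
        """Map a 60-bit chromosome to (rx, ry, rz, tx, ty, tz)."""
        params = []
        for i, (lo, hi) in enumerate(RANGES):
            bits = chromosome[i * BITS:(i + 1) * BITS]
            value = int("".join(map(str, bits)), 2)         # binary -> integer
            params.append(lo + (hi - lo) * value / (2 ** BITS - 1))
        return params

    chromosome = np.random.default_rng(4).integers(0, 2, size=6 * BITS)
    print(decode(chromosome))
    ```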

  16. Stereo vision controlled bilateral telerobotic remote assembly station

    NASA Astrophysics Data System (ADS)

    Dewitt, Robert L.

    1992-05-01

    The objective of this project was to develop a bilateral six degree-of-freedom telerobotic component assembly station utilizing remote stereo vision assisted control. The component assembly station consists of two Unimation Puma 260 robot arms and their associated controls, two Panasonic miniature camera systems, and an air compressor. The operator controls the assembly station remotely via kinematically similar master controllers. A Zenith 386 personal computer acts as an interface and system control between the human operator's controls and the Val II computer controlling the arms. A series of tasks, ranging in complexity and difficulty, was utilized to assess and demonstrate the performance of the complete system.

  17. Vision-based stereo ranging as an optimal control problem

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Sridhar, B.; Chatterji, G. B.

    1992-01-01

    The recent interest in the use of machine vision for flight vehicle guidance is motivated by the need to automate the nap-of-the-earth flight regime of helicopters. The vision-based stereo ranging problem is cast as an optimal control problem in this paper. A quadratic performance index, consisting of the integral of the error between observed image irradiances and those predicted by a Pade approximation of the correspondence hypothesis, is used to define an optimization problem. The necessary conditions for optimality yield a set of linear two-point boundary-value problems. These two-point boundary-value problems are solved in feedback form using a version of the backward sweep method. Application of the ranging algorithm is illustrated using a laboratory image pair.

  18. Incorporating polarization in stereo vision-based 3D perception of non-Lambertian scenes

    NASA Astrophysics Data System (ADS)

    Berger, Kai; Voorhies, Randolph; Matthies, Larry

    2016-05-01

    Surfaces with specular, non-Lambertian reflectance are common in urban areas. Robot perception systems for applications in urban environments need to function effectively in the presence of such materials; however, both passive and active 3-D perception systems have difficulties with them. In this paper, we develop an approach using a stereo pair of polarization cameras to improve passive 3-D perception of specular surfaces. We use a commercial stereo camera pair with rotatable polarization filters in front of each lens to capture images with multiple orientations of the polarization filter. From these images, we estimate the degree of linear polarization (DOLP) and the angle of polarization (AOP) at each pixel in at least one camera. The AOP constrains the corresponding surface normal in the scene to lie in the plane of the observed angle of polarization. We embody this constraint in an energy functional for a regularization-based stereo vision algorithm. This paper describes the theory of polarization needed for this approach, describes the new stereo vision algorithm, and presents results on synthetic and real images to evaluate performance.
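
    Recovering DOLP and AOP from images taken through a linear polarizer follows from the first three Stokes parameters; with measurements at 0°, 45°, and 90° (our assumed orientations — the paper only states that multiple orientations were captured) the computation is:

    ```python
    import numpy as np

    def dolp_aop(i0, i45, i90):
        """Degree and angle of linear polarization from three polarizer angles."""
        s0 = i0 + i90                     # total intensity
        s1 = i0 - i90
        s2 = 2.0 * i45 - s0
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
        aop = 0.5 * np.arctan2(s2, s1)    # radians
        return dolp, aop

    # Synthetic check: fully polarized light at 30 degrees (Malus' law).
    theta = np.deg2rad(30.0)
    angles = np.deg2rad([0.0, 45.0, 90.0])
    i = 0.5 * (1.0 + np.cos(2.0 * (angles - theta)))
    dolp, aop = dolp_aop(*i)
    print(dolp, np.rad2deg(aop))          # ~1.0 and ~30.0
    ```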

  19. Trinocular stereo vision method based on mesh candidates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Xu, Gang; Li, Haibin

    2010-10-01

    One of the most interesting goals of machine vision is 3D structure recovery of scenes. Such recovery has many applications, including object recognition, reverse engineering, automatic cartography, and autonomous robot navigation. To meet the demand of measuring complex prototypes in reverse engineering, a trinocular stereo vision method based on mesh candidates was proposed. After calibration of the cameras, the joint field of view can be defined in the world coordinate system. A mesh grid is established along the coordinate axes, and the mesh nodes are considered potential depth data of the object surface. By measuring the similarity of the correspondence pairs projected from a given group of candidates, the depth data can be obtained readily. With mesh node optimization, the interval between neighboring nodes in the depth direction can be designed reasonably. Potential ambiguity in correspondence matching can be eliminated efficiently with the constraint of a third camera. The cameras are treated as two independent pairs, left-right and left-centre. Due to multiple peaks in the correlation values, the binocular method alone may not satisfy the measurement accuracy, so the second image pair is involved whenever the confidence coefficient is less than a preset threshold. The depth is determined by the highest sum of correlations over both camera pairs. The measurement system was simulated using 3DS MAX and Matlab software for reconstructing the surface of the object. The experimental results proved that the trinocular vision system has good performance in depth measurement.

  1. MARVEL: A system that recognizes world locations with stereo vision

    SciTech Connect

    Braunegg, D.J. (Artificial Intelligence Lab.)

    1993-06-01

    MARVEL is a system that supports autonomous navigation by building and maintaining its own models of world locations and using these models and stereo vision input to recognize its location in the world and its position and orientation within that location. The system emphasizes the use of simple, easily derivable features for recognition, whose aggregate identifies a location, instead of complex features that also require recognition. MARVEL is designed to be robust with respect to input errors and to respond to a gradually changing world by updating its world location models. In over 1,000 recognition tests using real-world data, MARVEL yielded a false negative rate under 10% with zero false positives.

  2. Visual tracking in stereo [by computer vision system]

    NASA Technical Reports Server (NTRS)

    Saund, E.

    1981-01-01

    A method is described for visual object tracking by a computer vision system using TV cameras and special low-level image processing hardware. The tracker maintains an internal model of the location, orientation, and velocity of the object in three-dimensional space. This model is used to predict where features of the object will lie on the two-dimensional images produced by stereo TV cameras. The differences in the locations of features in the two-dimensional images as predicted by the internal model and as actually seen create an error signal in the two-dimensional representation. This is multiplied by a generalized inverse Jacobian matrix to deduce the error in the internal model. The procedure repeats to update the internal model of the object's location, orientation and velocity continuously.

  3. Shape determination for large flexible satellites via stereo vision

    NASA Astrophysics Data System (ADS)

    Tse, D. N. C.; Heppler, G. R.

    1992-02-01

    The use of stereo vision to determine the deformed shape of an elastic plate is investigated. The quantization error associated with using discrete charge coupled device camera images for this purpose is examined. An upper bound on the error is derived in terms of the stationary configuration parameters. An expression for the average (root mean square) error is also developed. The issue of interpolating the shape of the plate through erroneous data is addressed. The vibratory mode shapes are used as interpolation functions and two cases are considered: the case when the number of interpolation points (targets) is the same as the number of modes used in the interpolation, and the case when the number of targets exceeds the number of the modes used. Error criteria are established for both cases and they provide a means of establishing the best fit to the measured data.

  4. A computer implementation of a theory of human stereo vision.

    PubMed

    Grimson, W E

    1981-05-12

    Recently, Marr & Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented, and consists of five steps. (i) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by ∇²G, the Laplacian of a Gaussian function. (ii) Zero crossings in the filtered images are found along horizontal scan lines. (iii) For each mask size, matching takes place between zero crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, it can be shown that false targets pose only a simple problem. (iv) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence. In this way, the matching process gradually moves from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (v) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-dimensional sketch. To support the adequacy of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. Also statistical assumptions made by Marr & Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
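
    Steps (i) and (ii) — ∇²G filtering and zero-crossing detection along horizontal scan lines — take a few lines with SciPy; the mask sizes here are illustrative, not the eccentricity-scaled ones of the implementation:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def zero_crossings(img, sigma):
        """Horizontal zero crossings of the Laplacian-of-Gaussian response."""
        r = gaussian_laplace(img.astype(float), sigma)
        return np.signbit(r[:, :-1]) != np.signbit(r[:, 1:])

    img = np.random.default_rng(5).random((64, 64))
    img[:, 32:] += 2.0                        # a vertical step edge
    for sigma in (1.0, 2.0, 4.0):             # coarse-to-fine mask sizes
        zc = zero_crossings(img, sigma)
        print(sigma, zc[:, 28:36].mean())     # crossing density in the edge band
    ```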

  5. A stereo vision-based obstacle detection system in vehicles

    NASA Astrophysics Data System (ADS)

    Huh, Kunsoo; Park, Jaehak; Hwang, Junyeon; Hong, Daegun

    2008-02-01

    Obstacle detection is a crucial issue for driver assistance systems as well as for autonomous vehicle guidance, and it has to be performed with high reliability to avoid any potential collision with the front vehicle. Vision-based obstacle detection systems are regarded as promising for this purpose because they require little infrastructure on a highway. However, the feasibility of these systems in passenger cars requires accurate and robust sensing performance. In this paper, an obstacle detection system using stereo vision sensors is developed. The system utilizes feature matching, the epipolar constraint, and feature aggregation in order to robustly detect the initial corresponding pairs. After the initial detection, the system executes a tracking algorithm for the obstacles. The proposed system can detect a front obstacle, a leading vehicle, and a vehicle cutting into the lane; the position parameters of the obstacles and leading vehicles can then be obtained. The proposed obstacle detection system is implemented on a passenger car and its performance is verified experimentally.

  6. Simple and inexpensive stereo vision system for 3D data acquisition

    NASA Astrophysics Data System (ADS)

    Mermall, Samuel E.; Lindner, John F.

    2014-10-01

    We describe a simple stereo-vision system for tracking motion in three dimensions using a single ordinary camera. A simple mirror system divides the camera's field of view into left and right stereo pairs. We calibrate the system by tracking a point on a spinning wheel and demonstrate its use by tracking the corner of a flapping flag.

  7. Stereo vision tracking of multiple objects in complex indoor environments.

    PubMed

    Marrón-Romera, Marta; García, Juan C; Sotelo, Miguel A; Pizarro, Daniel; Mazo, Manuel; Cañas, José M; Losada, Cristina; Marcos, Alvaro

    2010-01-01

    This paper presents a novel system capable of solving the problem of tracking multiple targets in a crowded, complex and dynamic indoor environment, like those typical of mobile robot applications. The proposed solution is based on a stereo vision set in the acquisition step and a probabilistic algorithm in the obstacle position estimation process. The system obtains 3D position and speed information related to each object in the robot's environment; then it achieves a classification between building elements (ceiling, walls, columns and so on) and the rest of the items in the robot's surroundings. All objects in the robot's surroundings, both dynamic and static, are considered to be obstacles, except for the structure of the environment itself. A combination of a Bayesian algorithm and a deterministic clustering process is used in order to obtain a multimodal representation of the speed and position of detected obstacles. Performance of the final system has been tested against state of the art proposals; test results validate the authors' proposal. The designed algorithms and procedures provide a solution to those applications where similar multimodal data structures are found.

  8. A stereo-vision hazard-detection algorithm to increase planetary lander autonomy

    NASA Astrophysics Data System (ADS)

    Woicke, Svenja; Mooij, Erwin

    2016-05-01

    For future landings on any celestial body, increasing the lander autonomy as well as decreasing risk are primary objectives. Both risk reduction and an increase in autonomy can be achieved by including hazard detection and avoidance in the guidance, navigation, and control loop. One of the main challenges in hazard detection and avoidance is the reconstruction of accurate elevation models, as well as slope and roughness maps. Multiple methods for acquiring the inputs for hazard maps are available. The main distinction can be made between active and passive methods. Passive methods (cameras) have budgetary advantages compared to active sensors (radar, light detection and ranging). However, it is necessary to prove that these methods deliver sufficiently good maps. Therefore, this paper discusses hazard detection using stereo vision. To facilitate a successful landing, no more than 1% wrong detections (hazards that are not identified) are allowed. Based on a sensitivity analysis, a stereo set-up with a baseline of ≤ 2 m was found to be feasible at altitudes of ≤ 200 m while keeping wrong detections below 1%. It was thus shown that stereo-based hazard detection is an effective means to decrease the landing risk and increase the lander autonomy. In conclusion, the proposed algorithm is a promising candidate for future landers.

  9. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a kind of short-focal-length (f = 6~16 mm) lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. Many studies show that a multiple-view geometry system built with fish-eye lenses yields a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision systems are not suitable for this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system set up with four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information from the whole global observation space and acquire a no-blind-area 360°×360° panoramic image simultaneously, using a single vision device with one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.

  10. Non-probabilistic cellular automata-enhanced stereo vision simultaneous localization and mapping

    NASA Astrophysics Data System (ADS)

    Nalpantidis, Lazaros; Sirakoulis, Georgios Ch; Gasteratos, Antonios

    2011-11-01

    In this paper, a visual non-probabilistic simultaneous localization and mapping (SLAM) algorithm suitable for area measurement applications is proposed. The algorithm uses stereo vision images as its only input and processes them by calculating the depth of the scenery, detecting occupied areas, and progressively building a map of the environment. The stereo vision-based SLAM algorithm embodies a stereo correspondence algorithm that is tolerant to illumination differences, the robust scale- and rotation-invariant speeded-up robust features (SURF) detection and matching method, a computationally effective v-disparity image calculation scheme, a novel map-merging module, as well as a sophisticated cellular automata-based enhancement stage. A moving robot equipped with a stereo camera has been used to gather image sequences, and the system has autonomously mapped and measured two different indoor areas.

  11. Application of Stereo Vision to the Reconnection Scaling Experiment

    SciTech Connect

    Klarenbeek, Johnny; Sears, Jason A.; Gao, Kevin W.; Intrator, Thomas P.; Weber, Thomas

    2012-08-14

    The measurement and simulation of the three-dimensional structure of magnetic reconnection in astrophysical and laboratory plasmas is a challenging problem. At Los Alamos National Laboratory we use the Reconnection Scaling Experiment (RSX) to model 3D magnetohydrodynamic (MHD) relaxation of plasma-filled tubes. These magnetic flux tubes are called flux ropes. In RSX, the 3D structure of the flux ropes is explored with insertable probes. Stereo triangulation can be used to compute the 3D position of a probe from point correspondences in images from two calibrated cameras. While common applications of stereo triangulation include 3D scene reconstruction and robot navigation, we investigate the novel application of stereo triangulation in plasma physics to aid reconstruction of 3D data for RSX plasmas. Several challenges will be explored and addressed, such as minimizing 3D reconstruction errors in stereo camera systems and dealing with point correspondence problems.
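
    The triangulation step itself is standard: given the two calibrated cameras' 3x4 projection matrices and one pixel correspondence, the probe position follows from linear (DLT) triangulation. A generic sketch of that computation (not the RSX code):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one point seen at pixel x1 in
            camera 1 (projection matrix P1) and pixel x2 in camera 2 (P2)."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
            X = vt[-1]
            return X[:3] / X[3]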

  12. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper, a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency-domain information.

  13. Research on the feature set construction method for spherical stereo vision

    NASA Astrophysics Data System (ADS)

    Zhu, Junchao; Wan, Li; Röning, Juha; Feng, Weijia

    2015-01-01

    Spherical stereo vision is a kind of stereo vision system built with fish-eye lenses, in which the stereo algorithms conform to the spherical model. Epipolar geometry is the theory describing the relationship between the two imaging planes of the cameras in a stereo vision system based on the perspective projection model. However, the epipolar line in an uncorrected fish-eye image is not a straight line but an arc that intersects at the poles: an epipolar curve. In this paper, the theory of nonlinear epipolar geometry is explored, and a method of nonlinear epipolar rectification is proposed to eliminate the vertical parallax between two fish-eye images. The Maximally Stable Extremal Region (MSER) detector uses grayscale as the independent variable and takes the local extrema of the area variation as detections. The literature demonstrates that MSER depends only on the gray-level variations of an image and is not related to local structural characteristics or image resolution. Here, MSER is combined with the nonlinear epipolar rectification method proposed in this paper. The intersection of the rectified epipolar curves and the corresponding MSER regions is taken as the feature set of spherical stereo vision. Experiments show that this study achieved the expected results.
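
    The region-extraction half of this pipeline is available directly in OpenCV; a minimal sketch (the nonlinear epipolar rectification itself is specific to the paper and not reproduced here):

        import cv2

        def stable_regions(gray):
            """Detect MSERs in a grayscale image; each region is an array of
            pixel coordinates that could be intersected with a rectified
            epipolar curve to form the paper's feature set."""
            mser = cv2.MSER_create()              # default delta/area parameters
            regions, bboxes = mser.detectRegions(gray)
            return regions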

  14. [Stereoscopic vision with alternating presentation of stereo pairs (author's transl)].

    PubMed

    Herzau, V

    1976-07-26

    The threshold values for stereopsis with alternating monocular presentation of the corresponding members of a stereo pair were examined in relation to stimulus duration and interocular delay. The maximum interocular delay for short stimulus durations was approximately 190 msec. The maximum stimulus duration was between 400 and 500 msec; at this stimulus duration the interocular delay must approach zero if stereopsis is to be maintained. In the threshold region the stereo effect diminished; this phenomenon can possibly be explained as partial sensorial fusion. For normal appreciation of stereopsis, the critical frequency was about 2 Hz above the threshold values found for a given stimulus duration.

  15. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: Analysis and comparison

    NASA Astrophysics Data System (ADS)

    Kazmi, Wajahat; Foix, Sergi; Alenyà, Guillem; Andersen, Hans Jørgen

    2014-02-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close-range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver accurate, high-frame-rate depth data under suitable conditions. We introduce metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaves under indoor (room) and outdoor (shadow and sunlight) conditions by varying the exposure times of the sensors. The performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). The PMD CamCube has the best cancellation of sunlight, followed by the CamBoard, while the SwissRanger SR4000 performs poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high-resolution depth data, but is constrained by the texture of the object and by computational efficiency. The graph-cuts-based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive than local correlation. Finally, we propose a method to increase the dynamic range of ToF cameras for a scene involving both shadow and sunlight exposures at the same time, by taking advantage of camera flags (PMD) or the confidence matrix (SwissRanger).

  16. Measurement error analysis of three dimensional coordinates of tomatoes acquired using the binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Xiang, Rong

    2014-09-01

    This study analyzes the measurement errors of the three-dimensional coordinates obtained by binocular stereo vision for tomatoes, based on three stereo matching methods (centroid-based matching, area-based matching, and combination matching), in order to improve the localization accuracy of the binocular stereo vision system of tomato harvesting robots. Centroid-based matching was realized by matching the feature points given by the centroids of tomato regions. Area-based matching was realized based on the gray-level similarity between the neighborhoods of two pixels to be matched in the stereo images. Combination matching was realized by using the rough disparity acquired through centroid-based matching as the center of the dynamic disparity range used in area-based matching. After stereo matching, the three-dimensional coordinates of tomatoes were acquired using the triangle range finding principle. Test results, based on 225 stereo images of 3 tomatoes captured at distances from 300 to 1000 mm, showed that the measurement errors of x coordinates were small and can meet the needs of harvesting robots. However, the measurement biases of y coordinates and depth values were large, and the measurement variation of depth values was also large. Therefore, the measurement biases of y coordinates and depth values, and the measurement variation of depth values, should be corrected in future research.
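
    The combination matching described above, a coarse disparity from centroid matching that centers a narrow search range for area-based matching, can be sketched as follows (window size and search range are assumptions):

        import numpy as np

        def refine_disparity(left, right, x, y, d0, half_range=5, w=7):
            """Area-based refinement around a coarse disparity d0 (from centroid
            matching), using SAD over a w x w window; returns the best disparity.
            Assumes (x, y) is far enough from the image borders."""
            h = w // 2
            patch = left[y-h:y+h+1, x-h:x+h+1].astype(np.float32)
            best, best_d = np.inf, d0
            for d in range(d0 - half_range, d0 + half_range + 1):
                cand = right[y-h:y+h+1, x-d-h:x-d+h+1].astype(np.float32)
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            return best_d
        # Depth then follows from the triangulation principle z = f * b / d.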

  17. The Effects of Avatars, Stereo Vision and Display Size on Reaching and Motion Reproduction.

    PubMed

    Camporesi, Carlo; Kallmann, Marcelo

    2016-05-01

    Thanks to recent advances in motion capture devices and stereoscopic consumer displays, animated virtual characters can now realistically interact with users in a variety of applications. We investigate in this paper the effect of avatars, stereo vision and display size on task execution in immersive virtual environments. We report results obtained with three experiments in varied configurations that are commonly used in rehabilitation applications. The first experiment analyzes the accuracy of reaching tasks under different system configurations: with and without an avatar, with and without stereo vision, and employing a 2D desktop monitor versus a large multi-tile visualization display. The second experiment analyzes the use of avatars and user-perspective stereo vision on the ability to perceive and subsequently reproduce motions demonstrated by an autonomous virtual character. The third experiment evaluates the overall user experience with a complete immersive user interface for motion modeling by direct demonstration. Our experiments expose and quantify the benefits of using stereo vision and avatars, and show that the use of avatars improves the quality of produced motions and the resemblance of replicated motions; however, direct interaction in user-perspective leads to tasks executed in less time and to targets more accurately reached. These and additional tradeoffs are important for the effective design of avatar-based training systems. PMID:27045914

  18. Stereo vision and CMM-integrated intelligent inspection system in reverse engineering

    NASA Astrophysics Data System (ADS)

    Fang, Yong; Chen, Kangning; Lin, Zhihang

    1998-10-01

    3D coordinate acquisition and 3D model generation for existing parts or prototypes are the critical techniques in reverse engineering. This paper presents an integrated intelligent inspection system combining stereo vision and a coordinate measuring machine which is fast, flexible and accurate for reverse engineering. It also discusses in detail the principle, structure and key techniques of the system.

  19. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are investigated further in order to select the ones that belong to vertical objects only and not to the road or background. These 3-D vertical features are then used as a starting point for preceding-vehicle detection: by using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed using a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  20. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy corner extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
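
    With OpenCV, the per-camera and stereo calibration steps read roughly as below. The board layout of 8 x 6 inner corners (48 corners, matching the abstract) and the unit square size are assumptions, and image_pairs is a hypothetical list of grayscale image pairs:

        import cv2
        import numpy as np

        def calibrate_stereo(image_pairs, pattern=(8, 6)):
            """Calibrate two cameras from grayscale checkerboard image pairs.
            pattern = inner corners; 8 x 6 gives the 48 corners mentioned.
            The returned distortion vectors include radial and decentering terms."""
            objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

            obj_pts, left_pts, right_pts = [], [], []
            for l_img, r_img in image_pairs:
                okl, cl = cv2.findChessboardCorners(l_img, pattern)
                okr, cr = cv2.findChessboardCorners(r_img, pattern)
                if okl and okr:
                    obj_pts.append(objp)
                    left_pts.append(cl)
                    right_pts.append(cr)

            size = image_pairs[0][0].shape[::-1]
            _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
            _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
            # Keep the intrinsics fixed and solve only for the inter-camera R, T:
            err, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
                obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
                flags=cv2.CALIB_FIX_INTRINSIC)
            return K1, d1, K2, d2, R, T, err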

  1. Stereo-vision-based perception capabilities developed during the Robotics Collaborative Technology Alliances program

    NASA Astrophysics Data System (ADS)

    Rankin, Arturo; Bajracharya, Max; Huertas, Andres; Howard, Andrew; Moghaddam, Baback; Brennan, Shane; Ansar, Adnan; Tang, Benyang; Turmon, Michael; Matthies, Larry

    2010-04-01

    The Robotics Collaborative Technology Alliances (RCTA) program, which ran from 2001 to 2009, was funded by the U.S. Army Research Laboratory and managed by General Dynamics Robotic Systems. The alliance brought together a team of government, industrial, and academic institutions to address the research and development required to enable the deployment of future military unmanned ground vehicle systems ranging in size from man-portables to ground combat vehicles. Under RCTA, three technology areas critical to the development of future autonomous unmanned systems were addressed: advanced perception, intelligent control architectures and tactical behaviors, and human-robot interaction. The Jet Propulsion Laboratory (JPL) participated as a member for the entire program, working on four tasks in the advanced perception technology area: stereo improvements, terrain classification, pedestrian detection in dynamic environments, and long range terrain classification. Under the stereo task, significant improvements were made to the quality of stereo range data used as a front end to the other three tasks. Under the terrain classification task, a multi-cue water detector was developed that fuses cues from color, texture, and stereo range data, and three standalone water detectors were developed based on sky reflections, object reflections (such as trees), and color variation. In addition, a multi-sensor mud detector was developed that fuses cues from color stereo and polarization sensors. Under the long range terrain classification task, a classifier was implemented that uses unsupervised and self-supervised learning of traversability to extend the classification of terrain over which the vehicle drives to the far field. Under the pedestrian detection task, stereo vision was used to identify regions of interest in an image, classify those regions based on shape, and track detected pedestrians in three-dimensional world coordinates. To improve the detectability of partially occluded

  2. SAD-Based Stereo Vision Machine on a System-on-Programmable-Chip (SoPC)

    PubMed Central

    Zhang, Xiang; Chen, Zhangwei

    2013-01-01

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM and Flash, by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a 90 MHz FPGA clock, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity levels. PMID:23459385
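
    The SAD disparity computation that the FPGA parallelizes is, in serial reference form, the following (a NumPy sketch using the paper's 5 x 5 window and 64-level search):

        import numpy as np

        def sad_disparity(left, right, max_d=64, w=5):
            """Dense winner-takes-all SAD block matching: the serial reference
            version of what the SoPC implements in parallel."""
            h = w // 2
            L = left.astype(np.float32)
            R = right.astype(np.float32)
            rows, cols = L.shape
            disp = np.zeros_like(L)
            for y in range(h, rows - h):
                for x in range(h + max_d, cols - h):
                    patch = L[y-h:y+h+1, x-h:x+h+1]
                    costs = [np.abs(patch - R[y-h:y+h+1, x-d-h:x-d+h+1]).sum()
                             for d in range(max_d)]
                    disp[y, x] = int(np.argmin(costs))
            return disp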

  3. SAD-based stereo vision machine on a System-on-Programmable-Chip (SoPC).

    PubMed

    Zhang, Xiang; Chen, Zhangwei

    2013-03-04

    This paper proposes a novel solution for a stereo vision machine based on the System-on-Programmable-Chip (SoPC) architecture. The SoPC technology provides great convenience for accessing many hardware devices, such as DDRII, SSRAM and Flash, by IP reuse. The system hardware is implemented in a single FPGA chip involving a 32-bit Nios II microprocessor, a configurable soft IP core in charge of managing the image buffer and users' configuration data. The Sum of Absolute Differences (SAD) algorithm is used for dense disparity map computation. The circuits of the algorithmic module are modeled with the Matlab-based DSP Builder. With a set of configuration interfaces, the machine can process stereo pair images of many different sizes, up to a maximum of 512 K pixels. The machine is designed for real-time stereo vision applications and offers good performance and high efficiency. With a 90 MHz FPGA clock, 23 frames of 640 × 480 disparity maps can be obtained per second with a 5 × 5 matching window and a maximum of 64 disparity levels.

  4. Lightweight camera head for robotic-based binocular stereo vision: an integrated engineering approach

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Parker, Graham A.

    1992-03-01

    This paper presents the design and development of a real-time eye-in-hand stereo vision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-controlled vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom, relying on the robot end-effector for all remaining movement. This provides the robot with exploratory sensing abilities, allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach to an extremely complex problem. The head is low cost and low weight, employs state-of-the-art motor technology, is highly controllable and occupies a small size envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation, and component inspection for the manufacturing industry.

  5. Development and modeling of a stereo vision focusing system for a field programmable gate array robot

    NASA Astrophysics Data System (ADS)

    Tickle, Andrew J.; Buckle, James; Grindley, Josef E.; Smith, Jeremy S.

    2010-10-01

    Stereo vision is an arrangement in which an imaging system has two or more cameras, making it more robust by mimicking the human vision system. By using two inputs, knowledge of their relative geometry can be exploited to derive depth information from the two views they receive. The 3D coordinates of an object in an observed scene can be computed from the intersection of the two sets of rays. Presented here is the development of a stereo vision system that focuses on an object at the centre of a baseline between two cameras at varying distances. It has been developed primarily for use on a Field Programmable Gate Array (FPGA), but an adaptation of the methodology is also presented for a PUMA 560 robotic manipulator with a single camera attachment. The two main vision configurations considered here are a fixed baseline with an object moving at varying distances from that baseline, and a fixed distance with a varying baseline. These two situations provide enough data that the coefficients determining the system's operation can be calibrated automatically, with only the baseline value needing to be entered; the system performs all the required calculations for the user for a baseline of any length. The limits of the system with regard to focusing accuracy are also presented, along with how the PUMA 560 controls its joints and moves from one position to another to achieve stereo vision, compared with the two-camera system on the FPGA. The benefits of such a system for range finding in mobile robotics are discussed, along with why this approach is advantageous compared with laser range finders or echolocation using ultrasonics.

  6. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features, each with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is declared true when this probability is maximal. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate the probability density function (PDF) used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint, and also in different environments where other features and attributes are more suitable. PMID:18238122
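
    The core estimator is a classical Parzen-window (kernel) density estimate over attribute-difference vectors; a minimal Gaussian-kernel sketch (the bandwidth and the synthetic training data are assumptions):

        import numpy as np

        def parzen_pdf(x, samples, h=0.5):
            """Parzen-window estimate of the PDF at x, given training difference
            vectors (one per row) and a Gaussian kernel of bandwidth h."""
            d = samples.shape[1]
            u = (samples - x) / h
            k = np.exp(-0.5 * (u ** 2).sum(axis=1)) / ((2 * np.pi) ** (d / 2) * h ** d)
            return k.mean()

        # Matching probability of a candidate pair = PDF of its 4-attribute
        # difference vector under the "true match" training set:
        rng = np.random.default_rng(0)
        true_diffs = rng.normal(0.0, 0.3, size=(200, 4))   # synthetic training data
        print(parzen_pdf(np.zeros(4), true_diffs))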

  7. Design of a Trinocular-Stereo-Vision VLSI Processor Based on Optimal Scheduling

    NASA Astrophysics Data System (ADS)

    Hariyama, Masanori; Yokoyama, Naoto; Kameyama, Michitaka

    This paper presents a processor architecture for high-speed and reliable trinocular stereo matching based on adaptive window-size control of SAD (Sum of Absolute Differences) computation. To reduce the computational complexity, SADs are computed using images divided into non-overlapping regions, and the matching result is iteratively refined by reducing the window size. A window-parallel and pixel-parallel architecture is also proposed to fully exploit the potential parallelism of the algorithm. The architecture also reduces the complexity of the interconnection network between memory and functional units, based on the regularity of reference pixels. The stereo matching processor is designed in a 0.18 μm CMOS technology. The processing time is 83.2 μs at 100 MHz. By using optimal scheduling, the increases in area and processing time are only 5% and 3%, respectively, compared to binocular stereo vision, although the computational load is doubled.

  8. Artificial-vision stereo system as a source of visual information for preventing the collision of vehicles

    SciTech Connect

    Machtovoi, I.A.

    1994-10-01

    This paper explains the principle of automatically determining the position of extended and point objects in 3-D space and of recognizing them by means of an artificial-vision stereo system from the measured coordinates of conjugate points in stereo pairs; it also analyzes methods of identifying these points.

  9. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'.

  10. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    PubMed

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269607

  11. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    Kinoshita, Robert Arthur; Anderson, Matthew Oley; Mckay, Mark D; Willis, Walter David

    1999-04-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the "feel" of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  12. The VirtualwindoW: A Reconfigurable, Modular, Stereo Vision System

    SciTech Connect

    M. D. McKay; M. O. Anderson; R. A. Kinoshita; W. D. Willis

    1999-02-01

    An important need while using unmanned vehicles is the ability for the remote operator or observer to easily and accurately perceive the operating environment. A classic problem in providing a complete representation of the remote work area is sensory overload or excessive complexity in the human-machine interface. In addition, remote operations often benefit from depth perception capability while viewing or manipulating objects. Thus, there is an ongoing effort within the remote and teleoperated robotic field to develop better human-machine interfaces. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has been researching methods to simplify the human-machine interface using atypical operator techniques. Initial telepresence research conducted at the INEEL developed and implemented a concept called the VirtualwindoW. This system minimized the complexity of remote stereo viewing controls and provided the operator the ''feel'' of viewing the environment, including depth perception, in a natural setting. The VirtualwindoW has shown that the human-machine interface can be simplified while increasing operator performance. This paper deals with the continuing research and development of the VirtualwindoW to provide a reconfigurable, modular system that easily utilizes commercially available off the shelf components. This adaptability is well suited to several aspects of unmanned vehicle applications, most notably environmental perception and vehicle control.

  13. Single camera stereo vision coordinate measurement in parts pose recognization on CMM

    NASA Astrophysics Data System (ADS)

    Wang, Chun-mei; Huang, Feng-shan; Wang, Xue-sha; Chen, Li

    2014-11-01

    In order to recognize parts' pose on a Coordinate Measuring Machine (CMM) correctly and quickly, a single-camera stereo vision measurement method for the 3D coordinates of feature points on the measured parts is proposed, based on the translation of the CMM. Following the principle of two-camera stereo vision, an image of the part to be measured is captured by a CCD camera, driven by the CMM along its X or Y axis, at each of two different positions. The part's single-camera stereo vision measurement is thus realized with the proposed image matching method, based on the centroid offset of image edges, applied to the two images of the same feature point, and each feature point's 3D coordinate in the camera coordinate system can be obtained. The measuring system was set up and experiments were conducted. The measuring time per feature point is 1.818 s, and the difference between feature points' 3D coordinates calculated from the experimental results and those measured by the CMM in the machine coordinate system is less than 0.3 mm. This result meets the requirement of real-time pose recognition of parts on an intelligent CMM, and shows that the proposed method is feasible.

  14. Plant phenotyping using multi-view stereo vision with structured lights

    NASA Astrophysics Data System (ADS)

    Nguyen, Thuy Tuong; Slaughter, David C.; Maloof, Julin N.; Sinha, Neelima

    2016-05-01

    A multi-view stereo vision system for true 3D reconstruction, modeling and phenotyping of plants was created that successfully resolves many of the shortcomings of traditional camera-based 3D plant phenotyping systems. This novel system incorporates several features: computer algorithms for camera calibration, excess-green-based plant segmentation, semi-global stereo block matching, disparity bilateral filtering, 3D point cloud processing, and 3D feature extraction; and hardware consisting of a hemispherical superstructure designed to hold five stereo pairs of cameras and a custom-designed structured-light pattern illumination system. The system is nondestructive and can extract 3D features of whole plants modeled from multiple pairs of stereo images taken at different view angles. The study characterizes the system's phenotyping performance for three 3D plant features: plant height, total leaf area, and total leaf shading area. For plants with the specified leaf spacing and size, the algorithms used in our system yielded satisfactory experimental results and demonstrated the ability to study plant development, with the same plants repeatedly imaged and phenotyped over time.
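
    The excess-green segmentation step is a standard vegetation index, ExG = 2g - r - b on chromaticity-normalized channels; a brief sketch (the fixed threshold is an assumption; the paper may use a different rule):

        import numpy as np

        def excess_green_mask(rgb, thresh=0.1):
            """Segment plant pixels with the excess-green index ExG = 2g - r - b,
            computed on chromaticity-normalized channels of an HxWx3 RGB image."""
            img = rgb.astype(np.float32)
            s = img.sum(axis=2, keepdims=True) + 1e-6   # avoid division by zero
            norm = img / s
            exg = 2 * norm[..., 1] - norm[..., 0] - norm[..., 2]
            return exg > thresh                          # boolean plant mask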

  15. Laser cutting of irregular shape object based on stereo vision laser galvanometric scanning system

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Yixin; Wang, Shun; Tang, Zhiqiang; Yang, Huan; Zhang, Xuping

    2015-05-01

    Irregularly shaped objects with different three-dimensional (3D) appearances are difficult to shape into a customized uniform pattern using current laser machining approaches. A laser galvanometric scanning system (LGS) is a potential candidate, since it can easily achieve path-adjustable laser shaping. However, without knowing the actual 3D topography of the object, the processing result may still suffer from 3D shape distortion. It is desirable to have a versatile auxiliary tool capable of generating a 3D-adjusted laser processing path by measuring the 3D geometry of such irregularly shaped objects. This paper proposes the stereo vision laser galvanometric scanning system (SLGS), which takes advantage of both the stereo vision solution and the conventional LGS system. The 3D geometry of the object obtained by the stereo cameras is used to guide the scanning galvanometers for 3D-shape-adjusted laser processing. In order to achieve precise visually servoed laser fabrication, these two independent components are integrated through a system calibration method using a plastic thin-film target. The flexibility of SLGS has been experimentally demonstrated by cutting duck feathers for badminton shuttle manufacture.

  16. Calibration for stereo vision system based on phase matching and bundle adjustment algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Dong, Chao

    2015-05-01

    Calibration of a stereo vision system plays an important role in machine vision applications. Existing accurate calibration methods are usually carried out by capturing a high-accuracy calibration target of the same size as the measurement view. In in-situ 3D measurement and large-field-of-view measurement, the extrinsic parameters of the system usually need to be calibrated in real time; furthermore, manufacturing a large high-accuracy calibration target for the field is a big challenge. Therefore, an accurate and rapid calibration method for in-situ measurement is needed. In this paper, a novel calibration method for stereo vision systems is proposed, based on a phase-based matching method and the bundle adjustment algorithm. As the cameras are usually mechanically locked once adjusted appropriately after being calibrated in the lab, the intrinsic parameters are usually stable; we therefore emphasize calibration of the extrinsic parameters in the measurement field. First, a matching method based on the heterodyne multi-frequency phase-shifting technique is applied to find thousands of pairs of corresponding points between the images of the two cameras. The large number of corresponding point pairs helps improve the accuracy of the calibration. Then the bundle adjustment method from photogrammetry is used to optimize the extrinsic parameters and the 3D coordinates of the measured objects. Finally, metric traceability is carried out to transform the optimized extrinsic parameters from the 3D metric coordinate system into the Euclidean coordinate system to obtain the final optimal extrinsic parameters. Experimental results show that the calibration procedure takes less than 3 s, and that, based on a stereo vision system calibrated by the proposed method, the measurement RMS (root mean square) error can reach 0.025 mm when measuring a calibrated gauge with a nominal length of 999.576 mm.
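
    In outline, the bundle adjustment reduces to nonlinear least squares over reprojection error. The sketch below refines only the extrinsics of the second camera for brevity (the paper also optimizes the 3D points), using SciPy and an assumed distortion-free pinhole projection:

        import numpy as np
        from scipy.optimize import least_squares

        def project(K, rvec, tvec, X):
            """Pinhole projection of Nx3 points, rotation as a Rodrigues vector."""
            theta = np.linalg.norm(rvec)
            if theta < 1e-12:
                R = np.eye(3)
            else:
                k = rvec / theta
                Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
                R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * Kx @ Kx
            x = (X @ R.T + tvec) @ K.T
            return x[:, :2] / x[:, 2:3]

        def residuals(params, K1, K2, pts1, pts2, X):
            # params: rotation vector + translation of camera 2 (camera 1 = reference)
            rvec, tvec = params[:3], params[3:6]
            r1 = project(K1, np.zeros(3), np.zeros(3), X) - pts1
            r2 = project(K2, rvec, tvec, X) - pts2
            return np.concatenate([r1.ravel(), r2.ravel()])

        # x0 would come from the phase-matching initialization:
        # result = least_squares(residuals, x0, args=(K1, K2, pts1, pts2, X))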

  17. Development of a stereo vision measurement system for a 3D three-axial pneumatic parallel mechanism robot arm.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun

    2011-01-01

    In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector, a circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search for the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.

  18. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    PubMed

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    Microscopic vision systems with a stereo light microscope (SLM) have been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain both clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for an SLM vision system. First, a solution to capture and align the image sequence is designed, which outputs aligned stereo images. Second, we decompose the stereo image sequence by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions. Fused stereo images are then output based on the high- and low-frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2) and ΔZ of stereo images in a sequence have a linear relationship; hence, a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method outputs clear fused stereo images with better performance, which is suitable for an SLM vision system and very helpful for avoiding image blur caused by the large vertical size of small objects.
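
    A common instance of such coefficient fusion rules is to average the low-frequency (approximation) band and take the maximum-magnitude high-frequency (detail) coefficients. A sketch with PyWavelets, where the wavelet choice and rules are assumptions (the paper defines its own rules):

        import numpy as np
        import pywt

        def fuse_pair(a, b, wavelet="db2", level=3):
            """Fuse two grayscale images of the same view focused at different
            depths: average the approximation, keep max-abs detail coefficients."""
            ca = pywt.wavedec2(a.astype(np.float32), wavelet, level=level)
            cb = pywt.wavedec2(b.astype(np.float32), wavelet, level=level)
            pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
            fused = [(ca[0] + cb[0]) / 2]                  # low frequency: average
            for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
                fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
            return pywt.waverec2(fused, wavelet)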

  19. Study on clear stereo image pair acquisition method for small objects with big vertical size in SLM vision system.

    PubMed

    Wang, Yuezong; Jin, Yan; Wang, Lika; Geng, Benliang

    2016-05-01

    Microscopic vision systems with a stereo light microscope (SLM) have been applied to surface profile measurement. If the vertical size of a small object exceeds the depth of field, its images will contain both clear and fuzzy regions. Hence, in order to obtain clear stereo images, we propose a microscopic sequence image fusion method suitable for an SLM vision system. First, a solution to capture and align the image sequence is designed, which outputs aligned stereo images. Second, we decompose the stereo image sequence by wavelet analysis and obtain a series of high- and low-frequency coefficients at different resolutions. Fused stereo images are then output based on the high- and low-frequency coefficient fusion rules proposed in this article. The results show that Δw1 (Δw2) and ΔZ of stereo images in a sequence have a linear relationship; hence, a procedure for image alignment is necessary before image fusion. In contrast with other image fusion methods, our method outputs clear fused stereo images with better performance, which is suitable for an SLM vision system and very helpful for avoiding image blur caused by the large vertical size of small objects. PMID:26970109

  20. Spatial light modulation for improved microscope stereo vision and 3D tracking

    NASA Astrophysics Data System (ADS)

    Lee, Michael P.; Gibson, Graham; Tassieri, Manlio; Phillips, Dave; Bernet, Stefan; Ritsch-Marte, Monika; Padgett, Miles J.

    2013-09-01

    We present a new type of stereo microscopy which can be used for tracking in 3D over an extended depth. The use of Spatial Light Modulators (SLMs) in the Fourier plane of a microscope is a common technique in Holographic Optical Tweezers (HOT). This setup is readily transferable from a tweezer system to an imaging system, where the tweezing laser is replaced with a camera. Just as a HOT system can diffract many traps of different types, many different imaging modes can be diffracted with the SLM in the imaging system. The type of imaging we have developed is stereo imaging combined with lens correction. This approach has similarities with human vision, where each eye has a lens, and it also extends the depth over which we can accurately track particles.

  1. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstruction from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework, PRoViP, establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are: digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g., MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, and simple measurements of the outcrop and sedimentary features can be made.

  2. Stereo and regioselectivity in "Activated" tritium reactions

    SciTech Connect

    Ehrenkaufer, R.L.E.; Hembree, W.C.; Wolf, A.P.

    1988-01-01

    To investigate the stereo and positional selectivity of the microwave discharge activation (MDA) method, the tritium labeling of several amino acids was undertaken. The labeling of L-valine and the diastereomeric pair L-isoleucine and L-alloisoleucine showed less-than-statistical labeling at the α-amino C-H position, mostly with retention of configuration. Labeling predominated at the single β C-H tertiary (methine) position. The labeling of L-valine and L-proline with and without positive charge on the α-amino group resulted in large increases in specific activity (greater than 10-fold) when the positive charge was removed by labeling them as their sodium carboxylate salts. Tritium NMR of L-proline labeled both as its zwitterion and as its sodium salt also showed large differences in the tritium distribution within the molecule. The distribution preferences in each of the charge states are suggestive of labeling by an electrophilic tritium species. 16 refs., 5 tabs.

  3. Characterization of Stereo Vision Performance for Roving at the Lunar Poles

    NASA Technical Reports Server (NTRS)

    Wong, Uland; Nefian, Ara; Edwards, Larry; Furlong, Michael; Bouyssounouse, Xavier; To, Vinh; Deans, Matthew; Cannon, Howard; Fong, Terry

    2016-01-01

    Surface rover operations in the polar regions of airless bodies, particularly the Moon, are of special interest to future NASA science missions such as Resource Prospector (RP). Polar optical conditions present challenges to conventional imaging techniques, with repercussions for driving, safeguarding and science. High dynamic range, long cast shadows, opposition and whiteout conditions are all significant factors in appearance. RP is currently undertaking an effort to characterize stereo vision performance in polar conditions through physical laboratory experimentation with regolith simulants, obstacle distributions and oblique lighting.

  4. On-site calibration method for outdoor binocular stereo vision sensors

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Yin, Yang; Wu, Qun; Li, Xiaojing; Zhang, Guangjun

    2016-11-01

    With existing calibration methods for binocular stereo vision sensors (BSVS), it is very difficult to extract target characteristic points in outdoor environments under complex light conditions. To solve this problem, an on-site calibration method for BSVS based on a double parallel cylindrical target and a line laser projector is proposed in this paper. The intrinsic parameters of the two cameras are calibrated offline. The laser strips on the double parallel cylindrical target are used to calibrate the configuration parameters of the BSVS. The proposed method only requires images of the laser strips on the target and is suitable for the calibration of BSVS in outdoor environments. The effectiveness of the proposed method is validated through physical experiments.

  5. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
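
    The orientation cue can be extracted with a small Gabor filter bank; a sketch with OpenCV (the kernel parameters are assumptions, and the paper applies the filters to event streams rather than to frames):

        import cv2
        import numpy as np

        # Bank of 4 Gabor kernels: ksize=(21, 21), sigma=4.0, theta (varied),
        # lambda=10.0, gamma=0.5; all parameter values are illustrative.
        thetas = np.arange(0, np.pi, np.pi / 4)
        bank = [cv2.getGaborKernel((21, 21), 4.0, t, 10.0, 0.5) for t in thetas]

        def edge_orientation(patch):
            """Index of the Gabor orientation responding most strongly to an
            image patch, usable as an extra constraint in event matching."""
            responses = [np.abs(cv2.filter2D(patch, cv2.CV_32F, k)).sum()
                         for k in bank]
            return int(np.argmax(responses))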

  6. Computed tomography as ground truth for stereo vision measurements of skin.

    PubMed

    Vanberlo, Amy M; Campbell, Aaron R; Ellis, Randy E

    2011-01-01

    Although dysesthesia is a common surgical complication, there is no accepted method for quantitatively tracking its progression. To address this, two types of computer vision technologies were tested in a total of four configurations. Surface regions on plastic models of limbs were delineated with colored tape, imaged, and compared with computed tomography scans. The most accurate system used visually projected texture captured by a binocular stereo camera, capable of measuring areas to within 3.4% of the ground-truth areas. This simple, inexpensive technology shows promise for postoperative monitoring of dysesthesia surrounding surgical scars.

  7. Relative stereo 3-D vision sensor and its application for nursery plant transplanting

    NASA Astrophysics Data System (ADS)

    Hata, Seiji; Hayashi, Junichiro; Takahashi, Satoru; Hojo, Hirotaka

    2007-10-01

    Clone nursery plant production is one of the important applications of biotechnology. Most bio-production processes are highly automated, but the transplanting of small nursery plants cannot be automated because the shapes of small nursery plants are not consistent. In this research, a transplanting robot system for clone nursery plant production is under development. A 3-D vision system using the relative stereo method detects the shapes and positions of small nursery plants through transparent vessels. A force-controlled robot picks up the plants and transplants them into vessels with artificial soil.

  8. Autonomous Hovering and Landing of a Quad-rotor Micro Aerial Vehicle by Means of on Ground Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Pebrianti, Dwi; Kendoul, Farid; Azrad, Syaril; Wang, Wei; Nonami, Kenzo

    An on-ground stereo vision system is used for autonomous hovering and landing of a quad-rotor Micro Aerial Vehicle (MAV). This kind of system has an advantage over an embedded vision system for autonomous hovering and landing, since an embedded vision system occasionally gives inaccurate distance calculations due to vibration or the unknown geometry of the landing target. Color-based object tracking using the Continuously Adaptive Mean Shift (CAMSHIFT) algorithm was examined. A nonlinear model of the quad-rotor MAV and a PID controller were used for autonomous hovering and landing. The results show that the CAMSHIFT-based object tracking algorithm performs well. Additionally, a comparison between stereo vision-based and GPS-based autonomous hovering of the quad-rotor MAV shows that the stereo vision system has better performance: its accuracy is about 1 meter in the longitudinal and lateral directions when the quad-rotor flies at an altitude of 6 meters, while under the same experimental conditions the GPS-based system accuracy is about 3 meters. Experiments on autonomous landing also give reliable results.
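
    CAMSHIFT tracking of a colored target is available directly in OpenCV; a minimal hue-histogram tracker looks like this (the window initialization and bin count are assumptions):

        import cv2
        import numpy as np

        def track(frames, init_window):
            """Track a colored target (e.g., a marker on the MAV) with CamShift;
            yields one rotated bounding box per frame after the first."""
            x, y, w, h = init_window
            hsv = cv2.cvtColor(frames[0], cv2.COLOR_BGR2HSV)
            roi = hsv[y:y+h, x:x+w]
            hist = cv2.calcHist([roi], [0], None, [16], [0, 180])  # hue histogram
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
            window = init_window
            for frame in frames[1:]:
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
                box, window = cv2.CamShift(prob, window, crit)
                yield box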

  9. Stereo vision-based pedestrian detection using multiple features for automotive application

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Hee; Kim, Dongyoung

    2015-12-01

    In this paper, we propose stereo vision-based pedestrian detection using multiple features for automotive applications. The disparity map from the stereo vision system and multiple features are utilized to enhance pedestrian detection performance. Because the disparity map offers 3D information, obstacles can be detected easily and the overall detection time can be reduced by removing unnecessary background. The road feature is extracted from the v-disparity map calculated from the disparity map, and serves as a decision criterion to determine the presence or absence of obstacles on the road. Obstacle detection is performed by comparing the road feature with all columns in the disparity map. The result of obstacle detection is segmented by bird's-eye-view mapping, which separates an obstacle area containing multiple objects into single-obstacle areas, and histogram-based clustering is performed in the bird's-eye-view map. Each segmented result is verified by a classifier with the trained model. To enhance pedestrian recognition performance, multiple features such as HOG, CSS and symmetry features are utilized. In particular, the symmetry feature is well suited to representing a standing or walking pedestrian. The block-based symmetry feature is utilized to minimize the influence of the image type, and the best among the three symmetry features of the H-S-V image is selected as the symmetry feature at each pixel. The ETH database is utilized to verify our pedestrian detection algorithm.
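
    The v-disparity map used for road extraction is simply a row-wise histogram of the disparity image, in which a flat road appears as a slanted line; a compact sketch of the generic technique:

        import numpy as np

        def v_disparity(disp, max_d=64):
            """Row-wise disparity histogram: vd[v, d] counts pixels in image
            row v having disparity d. The road surface maps to a straight line."""
            rows = disp.shape[0]
            vd = np.zeros((rows, max_d), np.int32)
            d = np.clip(disp.astype(np.int32), 0, max_d - 1)
            for v in range(rows):
                vd[v] = np.bincount(d[v], minlength=max_d)
            return vd
        # Fitting a line to the dominant ridge of vd (e.g., with a Hough
        # transform) yields the road profile; pixels whose disparity puts them
        # well above that line belong to obstacles.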

  10. Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads

    NASA Technical Reports Server (NTRS)

    DiPaolo, Daniel

    2003-01-01

    The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads: the TracLabs Biclops pan-tilt-verge head and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionality offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas such as stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many more advantages over the Zebra, such as lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.

  11. Extrinsic parameter calibration of stereo vision sensors using spot laser projector.

    PubMed

    Liu, Zhen; Yin, Yang; Liu, Shaopeng; Chen, Xu

    2016-09-01

    The on-site calibration of stereo vision sensors plays an important role in the measurement field. Extracting the image coordinates of the feature points of existing targets is difficult under complex light conditions in outdoor environments, such as strong light and backlight. This paper proposes an on-site calibration method for stereo vision sensors based on a spot laser projector to solve this problem. The laser spots projected on parallel planes are used to calibrate the coordinate transformation matrix between the two cameras, and the optimal solution of the coordinate transformation matrix is then obtained by nonlinear optimization. Simulation and physical experiments are conducted to validate the performance of the proposed method. With a field of view of approximately 400 mm × 300 mm, the proposed method reaches a calibration accuracy of 0.02 mm, comparable to that of a method using a planar target.

  12. A novel method of robot location using RFID and stereo vision

    NASA Astrophysics Data System (ADS)

    Chen, Diansheng; Zhang, Guanxin; Li, Zhen

    2012-04-01

    This paper proposes a new global localization method for mobile robots based on RFID (Radio Frequency Identification Devices) and stereo vision, which allows a robot to obtain global coordinates with good accuracy when quickly adapting to an unfamiliar, new environment. The method uses RFID tags as artificial landmarks; the 3D coordinates of the tags in the global coordinate system are written in the IC memory, and the robot reads them through an RFID reader. Meanwhile, using stereo vision, the 3D coordinates of the tags in the robot coordinate system are measured. Combined with the robot's attitude transformation matrix from the pose measuring system, the translation of the robot coordinate system to the global coordinate system is obtained, which is also the coordinate of the robot's current location in the global coordinate system. The average error of our method is 0.11 m in experiments conducted in a 7 m × 7 m lobby, a result much more accurate than that of other localization methods.
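
    The single-tag position computation described above reduces to one line of linear algebra; a sketch under assumed frame conventions (rotation-only attitude matrix, column position vectors):

        import numpy as np

        def robot_global_position(tag_global, tag_robot, R_attitude):
            """Global robot position from one RFID tag: the tag's global
            coordinate (read from the tag memory), its coordinate measured in
            the robot frame by stereo vision, and the robot's attitude rotation.
            Derivation: tag_global = R_attitude @ tag_robot + t, solved for t."""
            return tag_global - R_attitude @ tag_robot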

  13. Extrinsic parameter calibration of stereo vision sensors using spot laser projector.

    PubMed

    Liu, Zhen; Yin, Yang; Liu, Shaopeng; Chen, Xu

    2016-09-01

    The on-site calibration of stereo vision sensors plays an important role in the measurement field. Extracting the image coordinates of the feature points of existing targets is difficult under complex light conditions in outdoor environments, such as strong light and backlight. This paper proposes an on-site calibration method for stereo vision sensors based on a spot laser projector to solve this problem. The laser spots projected on parallel planes are used to calibrate the coordinate transformation matrix between the two cameras, and the optimal solution of the coordinate transformation matrix is then obtained by nonlinear optimization. Simulation and physical experiments are conducted to validate the performance of the proposed method. With a field of view of approximately 400 mm × 300 mm, the proposed method reaches a calibration accuracy of 0.02 mm, comparable to that of a method using a planar target. PMID:27607287

  14. Stereo Vision Based Terrain Mapping for Off-Road Autonomous Navigation

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-01-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as no-go regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single-frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.

  15. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field tests provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance. PMID:22319323
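
    The dominant quantization effect follows from the triangulation relation Z = fB/d: a one-level disparity error Δd produces a depth error of roughly Z²Δd/(fB), so the error grows quadratically with depth. A hedged numerical sketch (the parameter values below are ours, not the paper's):

      import numpy as np

      def depth_quantization_error(Z, f_px, baseline_m, d_step=1.0):
          # dZ ≈ Z**2 / (f * B) * Δd, derived from Z = f * B / d
          return Z ** 2 / (f_px * baseline_m) * d_step

      # e.g. f = 800 px, B = 0.3 m: ~0.10 m at 5 m but ~1.67 m at 20 m
      print(depth_quantization_error(np.array([5.0, 10.0, 20.0]), 800.0, 0.3))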

  16. Stereo vision-based obstacle avoidance for micro air vehicles using an egocylindrical image space representation

    NASA Astrophysics Data System (ADS)

    Brockers, R.; Fragoso, A.; Matthies, L.

    2016-05-01

    Micro air vehicles which operate autonomously at low altitude in cluttered environments require a method for onboard obstacle avoidance for safe operation. Previous methods deploy either purely reactive approaches, mapping low-level visual features directly to actuator inputs to maneuver the vehicle around the obstacle, or deliberative methods that use on-board 3-D sensors to create a 3-D, voxel-based world model, which is then used to generate collision-free 3-D trajectories. In this paper, we use forward-looking stereo vision with a large horizontal and vertical field of view and project range from stereo into a novel robot-centered, cylindrical, inverse range map we call an egocylinder. With this implementation we reduce the complexity of our world representation from a 3D map to a 2.5D image-space representation, which supports very efficient motion planning and collision checking, and allows configuration space expansion to be implemented as an image-processing function directly on the egocylinder. Deploying a fast reactive motion planner directly on the configuration-space-expanded egocylinder image, we demonstrate the effectiveness of this new approach experimentally in an indoor environment.
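
    The egocylinder idea can be sketched as projecting robot-frame 3D points into an azimuth-elevation image that stores inverse range. The fragment below is a simplified illustration of that projection only; axis conventions and image resolution are our assumptions, not taken from the paper:

      import numpy as np

      def egocylinder(points, width=360, height=180):
          rng = np.linalg.norm(points, axis=1)
          points, rng = points[rng > 0], rng[rng > 0]   # drop degenerate points
          x, y, z = points[:, 0], points[:, 1], points[:, 2]
          az = np.arctan2(x, z)                    # azimuth around the vehicle
          el = np.arctan2(y, np.hypot(x, z))       # elevation (y assumed up)
          col = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
          row = ((el + np.pi / 2) / np.pi * (height - 1)).astype(int)
          img = np.zeros((height, width))          # 0 means free / no return
          np.maximum.at(img, (row, col), 1.0 / rng)  # nearest obstacle wins
          return img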

  17. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    PubMed

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters on the stereo quantization errors is analyzed in detail, providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real-time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field tests provided encouraging results and proved the validity of the proposed sensor for use in the automotive sector in applications such as autonomous pedestrian collision avoidance.

  18. Three-dimensional location of tomato based on binocular stereo vision for tomato harvesting robot

    NASA Astrophysics Data System (ADS)

    Xiang, Rong; Ying, Yibin; Jiang, Huanyu; Peng, Yongshi

    2010-10-01

    Accurate harvesting depends on accurate 3D localization of the fruit by the harvesting robot. For methods based on binocular stereo vision, localization precision degrades when the distance between fruit and camera exceeds 0.8 m, which is a significant problem. In order to improve the precision of depth measurement for ripe tomatoes, two stereo matching methods, centroid-based matching and area-based matching, were analyzed comparatively, and their performance in depth measurement was compared. Experiments showed a linear relationship between distance and measurement. Models of unitary linear regression (ULR) were then used to improve the depth measurements. After correction by these models, the depth errors ranged from -28 mm to 25 mm for the centroid-based matching method and from -8 mm to 15 mm for the area-based matching method at distances of 0.6 m to 1.15 m. It can be concluded that computation costs can be decreased while preserving good precision when the centroid parallax acquired by the centroid-based matching method is used to set the parallax search range for the area-based matching method.
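
    The ULR correction amounts to fitting true depth as a linear function of measured depth and applying that line to new measurements. A minimal sketch with made-up calibration pairs (the data values are illustrative only):

      import numpy as np

      # Illustrative calibration pairs: measured stereo depth vs. ground truth
      measured = np.array([0.62, 0.75, 0.90, 1.02, 1.14])   # metres
      true_d   = np.array([0.60, 0.74, 0.88, 1.00, 1.15])   # metres
      a, b = np.polyfit(measured, true_d, 1)    # depth' = a * depth + b

      def correct_depth(d):
          return a * d + b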

  19. Man-machine stereo-TV computer vision system for noncontact measurement

    NASA Astrophysics Data System (ADS)

    Petuchov, Sergey V.; Vasiliev, Vadim F.; Ivaniugin, Victor M.

    1998-07-01

    The structural description of a scene and/or the geometrical characteristics of scene objects are insufficient for many robot control tasks. The complexity of natural scenes, as well as the great variety of tasks, makes the human operator indispensable for image interpretation. The operator's responsibility lies in indicating the regions (objects) of interest and in helping to establish a good hypothesis about the location of the object in difficult identification situations. The man-machine computer vision stereo-measurement system (CVMS) allows navigation and control systems for mobile and manipulating teleoperated robots to be built in a new fashion, making them more adaptive to changes in external conditions. This paper describes the CVMS for non-contact measurement. Three-dimensional coordinates of object points are computed after the human operator indicates the points with a mouse. Because the measuring points are indicated in a monocular image, special glasses are not required for stereoscopic viewing. The system baseline may be larger than the distance between human eyes, so measurement accuracy may also be increased. The CVMS consists of one or two TV cameras and a personal computer equipped with an image input/output board. The system breadboard was tested on a remotely controlled transport robot.

  20. Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision

    SciTech Connect

    Ren Zhiguo; Liao Jiarui; Cai Lilong

    2010-04-01

    We present an effective method for the accurate three-dimensional (3D) measurement of small industrial parts against a complicated, noisy background, based on stereo vision. To effectively extract the nonlinear features of the desired curves of the measured parts in the images, a coarse-to-fine extraction strategy is employed, based on a virtual motion control system. By using the multiscale decomposition of gray images and virtual beam chains, the nonlinear features can be accurately extracted. By analyzing the generation of geometric errors, the refined feature points of the desired curves are extracted. The 3D structure of the measured parts can then be accurately reconstructed and measured by least-squares fitting. Experimental results show that the presented method can accurately measure industrial parts that are represented by various line segments and curves.

  1. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions of the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronization and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in the measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction. PMID:27607253
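
    Each planar pyramid-mirror face creates a virtual camera by reflection, which is easy to sketch: reflect the real camera centre across the mirror plane and apply the corresponding Householder matrix to viewing directions. This is illustrative only; the sensor's actual parametrization is given in the paper.

      import numpy as np

      def reflect_point(p, n, d):
          # Mirror plane n·x + d = 0 with unit normal n; applies to the
          # camera centre (virtual camera position)
          return p - 2.0 * (p @ n + d) * n

      def reflect_matrix(n):
          # Householder matrix: maps real viewing directions to virtual ones
          return np.eye(3) - 2.0 * np.outer(n, n)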

  2. Error analysis and compensation of binocular-stereo-vision measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Guo, Junjie

    2008-09-01

    Measurement errors in binocular stereo vision are analyzed. It is shown that multi-stage calibration can efficiently reduce systematic errors due to depth of field. Because multi-stage calibration is difficult to carry out in practice, error compensation methods are presented in this paper. First, system calibration is completed using a standard planar template. Then, the cameras are moved to different depths, multiple views are taken, and the 3D coordinates of specific points on the template are calculated. Finally, an error compensation model in depth is established by least-squares fitting. Experiments based on a coordinate measuring machine (CMM) indicate that the relative measurement error is reduced by 5.1% with the proposed method. This is of practical value for expanding the measurement range in depth and improving measurement accuracy.

  3. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-09-14

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments.

  4. Occupancy Grid Mapping in Urban Environments from a Moving On-Board Stereo-Vision System

    PubMed Central

    Li, You; Ruichek, Yassine

    2014-01-01

    The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots and intelligent vehicles. Its applications date back to the 1980s, when researchers used sonar or LiDAR to represent environments with occupancy grids. In the literature, however, research on vision-based occupancy grid mapping is scant. Furthermore, when moving in a real dynamic world, occupancy grid mapping must not only detect occupied areas but also understand dynamic environments. The paper addresses this issue by presenting a stereo-vision-based framework to create a dynamic occupancy grid map, applied to an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping provides the motion information of the grid cells. The proposed framework consists of two components. The first is motion estimation for the moving vehicle itself and for independent moving objects. The second is dynamic occupancy grid mapping, based on the estimated motion information and the dense disparity map. The main benefit of the proposed framework is its ability to map occupied areas and moving objects at the same time, which is very practical in real applications. The proposed method is evaluated using real data acquired by our intelligent vehicle platform “SeTCar” in urban environments. PMID:24932866
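
    The mapping component can be sketched as a log-odds occupancy update driven by stereo-derived obstacle points. The fragment below is a simplified stand-in for the paper's pipeline: free-space ray casting and the motion labeling are omitted, and all names and tuning values are ours.

      import numpy as np

      L_OCC = 0.85   # log-odds increment per supporting stereo point (tuning)

      def update_grid(logodds, points_xy, origin, cell=0.2):
          # logodds: (H, W) grid; points_xy: (N, 2) ground-plane points, metres
          cols = ((points_xy[:, 0] - origin[0]) / cell).astype(int)
          rows = ((points_xy[:, 1] - origin[1]) / cell).astype(int)
          ok = (rows >= 0) & (rows < logodds.shape[0]) & \
               (cols >= 0) & (cols < logodds.shape[1])
          np.add.at(logodds, (rows[ok], cols[ok]), L_OCC)
          return logodds

      def occupancy_prob(logodds):
          return 1.0 - 1.0 / (1.0 + np.exp(logodds))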

  5. Occupancy grid mapping in urban environments from a moving on-board stereo-vision system.

    PubMed

    Li, You; Ruichek, Yassine

    2014-01-01

    The occupancy grid map is a popular tool for representing the surrounding environment of mobile robots and intelligent vehicles. Its applications date back to the 1980s, when researchers used sonar or LiDAR to represent environments with occupancy grids. In the literature, however, research on vision-based occupancy grid mapping is scant. Furthermore, when moving in a real dynamic world, occupancy grid mapping must not only detect occupied areas but also understand dynamic environments. The paper addresses this issue by presenting a stereo-vision-based framework to create a dynamic occupancy grid map, applied to an intelligent vehicle driving in an urban scenario. Besides representing the surroundings as occupancy grids, dynamic occupancy grid mapping provides the motion information of the grid cells. The proposed framework consists of two components. The first is motion estimation for the moving vehicle itself and for independent moving objects. The second is dynamic occupancy grid mapping, based on the estimated motion information and the dense disparity map. The main benefit of the proposed framework is its ability to map occupied areas and moving objects at the same time, which is very practical in real applications. The proposed method is evaluated using real data acquired by our intelligent vehicle platform "SeTCar" in urban environments.

  6. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications

    PubMed Central

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  7. Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications.

    PubMed

    Musleh, Basam; Martín, David; Armingol, José María; de la Escalera, Arturo

    2016-01-01

    Nowadays, intelligent systems applied to vehicles have grown very rapidly; their goal is not only the improvement of safety, but also making autonomous driving possible. Many of these intelligent systems are based on making use of computer vision in order to know the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution regarding the state of the art on the subject is the estimation of the pitch angle without being affected by the roll angle. The validation of the self-calibration method is accomplished by comparing it with relevant methods of camera pose estimation, where a synthetic sequence is used in order to measure the continuous error with a ground truth. This validation is enriched by the experimental results of the method in real traffic environments. PMID:27649178

  8. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the position of a target vehicle that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate the target vehicle position from only two image points of the target vehicle using stereo vision. For this, we use the rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
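
    A minimal back-propagation network of the general kind described, mapping the two rear-LED image points to a range estimate, can be sketched as follows. The architecture, sizes, initialization, and learning rate are all our assumptions; the paper's exact network is not specified here.

      import numpy as np

      # Toy BP network: (u1, v1, u2, v2) -> range estimate
      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(0, 0.1, (8, 4)), np.zeros(8)
      W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

      def forward(x):
          h = np.tanh(W1 @ x + b1)
          return (W2 @ h + b2)[0], h

      def train_step(x, y, lr=1e-3):
          global W1, b1, W2, b2
          pred, h = forward(x)
          err = pred - y                    # d(loss)/d(pred) for 0.5*err**2
          gh = err * W2[0] * (1 - h ** 2)   # back-prop through tanh
          W2 -= lr * err * h[None, :]; b2 -= lr * err
          W1 -= lr * np.outer(gh, x);  b1 -= lr * gh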

  9. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  10. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, a method of image distortion correction is proposed. The image data it requires come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed from grid points in the stereo images; linear and polynomial fitting methods are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed, a method of disparity distortion correction is proposed, and polynomial fitting is applied to correct disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: the initial vision model and the residual compensation model. The initial vision model is derived by analyzing the direct mapping relationship between object and image points; the residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates, whereas the pinhole camera model has lower precision for Y and Z coordinates. The proposed method is very helpful for micro-gripping systems based on SLM microscopic vision.

  11. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel, high-precision microscopic vision modeling method that can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. The method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, a method of image distortion correction is proposed. The image data it requires come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed from grid points in the stereo images; linear and polynomial fitting methods are applied to correct them. Second, the shape deformation features of the disparity distribution are discussed, a method of disparity distortion correction is proposed, and polynomial fitting is applied to correct disparity distortion. Third, a microscopic vision model is derived, consisting of two parts: the initial vision model and the residual compensation model. The initial vision model is derived by analyzing the direct mapping relationship between object and image points; the residual compensation model is derived from the residual analysis of the initial vision model. The results show that, with maximum reconstruction distances of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates, whereas the pinhole camera model has lower precision for Y and Z coordinates. The proposed method is very helpful for micro-gripping systems based on SLM microscopic vision. PMID:26924646

  12. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    NASA Technical Reports Server (NTRS)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of points representing dominant features in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented along the vertical, horizontal, and two diagonal directions; it incorrectly detected points on edges not aligned with these four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on edges to exclude simple edges and leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV) operator, which calculates the variance of the gradient angle in a window around the point under consideration, is then applied to the interesting points to exclude redundant ones and keep the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. Matching starts with dominant points in the left image and does a local search for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. The correlation is used as
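
    The GAV operator itself is straightforward to sketch: compute the gradient angle over a small window around a candidate point and take its variance. The fragment below is our illustration (window size is an assumption, and angle wrap-around is deliberately ignored for brevity):

      import numpy as np

      def gradient_angle_variance(img, y, x, win=5):
          # Window around (y, x); assumes the point is away from the border
          h = win // 2
          patch = img[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
          gy, gx = np.gradient(patch)
          angles = np.arctan2(gy, gx)[1:-1, 1:-1]  # drop the gradient border
          return angles.var()    # circular statistics omitted for brevity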

  13. Error analysis and system implementation for structured light stereo vision 3D geometric detection in large scale condition

    NASA Astrophysics Data System (ADS)

    Qi, Li; Zhang, Xuping; Wang, Jiaqi; Zhang, Yixin; Wang, Shun; Zhu, Fan

    2012-11-01

    Stereo-vision-based 3D metrology is an effective approach for 3D geometric measurement of relatively large-scale objects. In this paper, we present a dedicated image capture system that uses CMOS sensors with embedded LVDS interfaces and a CAN bus to ensure synchronous triggering and exposure. We performed an error analysis for structured-light stereo vision measurement under large-scale conditions, based on which we built and tested the system prototype both indoors and in the field. The results show that the system is well suited for large-scale metrology applications.

  14. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes sequentially performing data acquisition, its quantitative evaluation, and comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules. Thus, the goal is to evaluate independently the error for each step of the stereo-vision-based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we particularly analyze the segmentation error due to localization errors of extracted edge points assumed to belong to the lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  15. Comparative morphometry of facial surface models obtained from a stereo vision system in a healthy population

    NASA Astrophysics Data System (ADS)

    López, Leticia; Gastélum, Alfonso; Chan, Yuk Hin; Delmas, Patrice; Escorcia, Lilia; Márquez, Jorge

    2014-11-01

    Our goal is to obtain three-dimensional measurements of craniofacial morphology in a healthy population, using standard landmarks established by a physical-anthropology specialist and picked from computer reconstructions of the face of each subject. To do this, we designed a multi-stereo vision system that will be used to create a database of human face surfaces from a healthy population, for eventual applications in medicine, forensic sciences and anthropology. The acquisition process consists of obtaining depth map information from three points of view; each depth map is obtained from a calibrated pair of cameras. The depth maps are used to build a complete, frontal, triangular-surface representation of the subject's face. The triangular surface is used to locate the landmarks, and the measurements are analyzed with a MATLAB script. The classification of the subjects was done with the aid of a specialist anthropologist who defined specific subject indices, according to the lengths, areas, ratios, etc., of the different structures and the relationships among facial features. We studied a healthy population, and the indices from this population will be used to obtain representative averages that later help with the study and classification of possible pathologies.

  16. Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target.

    PubMed

    Wei, Zhenzhong; Zhao, Kai

    2016-01-01

    Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee for high-precision measurements. We propose a method to calibrate the structural parameters of BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane whose normal vector under the two camera-coordinate-frames is obtained by means of an intermediate parallel plane calculated by the image points of sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide theoretical analysis on the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations are conducted to test the performance of the proposed method with respect to the image noise level, target placement times and the depth-scale factor. Experimental results on real data show that the accuracy of measurement is higher than 0.9‰, with a distance of 800 mm and a view field of 250 × 200 mm². PMID:27420063
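
    Once corresponding plane normals are known in both camera frames, solving for R reduces to an orthogonal Procrustes problem with a closed-form SVD solution. The sketch below is the generic Kabsch solution under that assumption, not necessarily the authors' exact formulation:

      import numpy as np

      def rotation_from_normals(n_left, n_right):
          # n_left, n_right: (N, 3) matched unit plane normals in each frame
          H = n_left.T @ n_right
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          return Vt.T @ D @ U.T    # R such that n_right ≈ R @ n_left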

  17. Stereo vision-based vehicle detection using a road feature and disparity histogram

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Hee; Lim, Young-Chul; Kwon, Soon; Lee, Jong-Hun

    2011-02-01

    This paper presents a stereo-vision-based vehicle detection approach using a road feature and a disparity histogram. It is not easy to detect vehicles robustly on the road in various traffic situations, for example, on a non-flat road or in a multiple-obstacle situation. This paper focuses on improving vehicle detection performance in various real traffic situations. The approach consists of three steps, namely obstacle localization, obstacle segmentation, and vehicle verification. First, we extract a road feature from v-disparity maps binarized using the most frequent values in each row and column, and adopt the extracted road feature as an obstacle criterion in column detection. However, many obstacles may still coexist in each localized obstacle area. Thus, we divide the localized obstacle area into multiple obstacles using a disparity histogram and re-merge the divided obstacles using four criterion parameters, namely the obstacle size, the distance and angle between the divided obstacles, and the difference of disparity values. Finally, we verify the vehicles using a depth map and gray image to improve performance. We verify the performance of our proposed method by conducting experiments in various real traffic situations. The average recall rate of vehicle detection is 95.5%.
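
    A v-disparity map is simply a per-row histogram of disparities; on such a map the road surface appears as a dominant slanted line. A minimal sketch of its construction (the disparity range is an assumption, and the binarization step is reduced to a per-row argmax):

      import numpy as np

      def v_disparity(disp, max_d=128):
          rows = disp.shape[0]
          vmap = np.zeros((rows, max_d), dtype=int)
          for v in range(rows):
              d = disp[v]
              d = d[(d >= 0) & (d < max_d)].astype(int)
              np.add.at(vmap[v], d, 1)       # disparity histogram of row v
          return vmap

      # Most frequent disparity per row approximates the road profile:
      # road_d = v_disparity(disp).argmax(axis=1)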

  18. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers

    PubMed Central

    El-Haddad, Mohamed T.; Tao, Yuankai K.

    2015-01-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm2 field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery. Further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instrumentations. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology. PMID:26309764

  19. Field calibration of binocular stereo vision based on fast reconstruction of 3D control field

    NASA Astrophysics Data System (ADS)

    Zhang, Haijun; Liu, Changjie; Fu, Luhua; Guo, Yin

    2015-08-01

    Construction of high-speed railways in China has entered a period of rapid growth. Accurately and quickly obtaining the dynamic envelope curve of a high-speed vehicle is an important guarantee of safe operation. The measuring system is based on binocular stereo vision. Considering the difficulties of field calibration, such as environmental changes and time limits, we developed a field calibration method based on fast reconstruction of a three-dimensional control field. After rapid assembly of the pre-calibrated three-dimensional control field, whose coordinate accuracy is guaranteed by manufacturing accuracy and verified by V-STARS, the two cameras photograph it simultaneously. The field calibration parameters are then solved by a method combining a linear solution with nonlinear optimization. Experimental results showed that the measurement accuracy can reach ±0.5 mm and, more importantly, that while guaranteeing accuracy, the speed of calibration and the portability of the devices are improved considerably.

  20. Automated stereo vision instrument tracking for intraoperative OCT guided anterior segment ophthalmic surgical maneuvers.

    PubMed

    El-Haddad, Mohamed T; Tao, Yuankai K

    2015-08-01

    Microscope-integrated intraoperative OCT (iOCT) enables imaging of tissue cross-sections concurrent with ophthalmic surgical maneuvers. However, limited acquisition rates and complex three-dimensional visualization methods preclude real-time surgical guidance using iOCT. We present an automated stereo vision surgical instrument tracking system integrated with a prototype iOCT system. We demonstrate, for the first time, automatically tracked video-rate cross-sectional iOCT imaging of instrument-tissue interactions during ophthalmic surgical maneuvers. The iOCT scan-field is automatically centered on the surgical instrument tip, ensuring continuous visualization of instrument positions relative to the underlying tissue over a 2500 mm(2) field with sub-millimeter positional resolution and <1° angular resolution. Automated instrument tracking has the added advantage of providing feedback on surgical dynamics during precision tissue manipulations because it makes it possible to use only two cross-sectional iOCT images, aligned parallel and perpendicular to the surgical instrument, which also reduces both system complexity and data throughput requirements. Our current implementation is suitable for anterior segment surgery. Further system modifications are proposed for applications in posterior segment surgery. Finally, the instrument tracking system described is modular and system agnostic, making it compatible with different commercial and research OCT and surgical microscopy systems and surgical instrumentations. These advances address critical barriers to the development of iOCT-guided surgical maneuvers and may also be translatable to applications in microsurgery outside of ophthalmology.

  1. Stereo-vision framework for autonomous vehicle guidance and collision avoidance

    NASA Astrophysics Data System (ADS)

    Scott, Douglas A.

    2003-08-01

    During a pre-programmed course to a particular destination, an autonomous vehicle may potentially encounter environments that are unknown at the time of operation. Some regions may contain objects or vehicles that were not anticipated during the mission-planning phase. Often user intervention is not possible or desirable under these circumstances. Thus the onboard navigation system must automatically make short-term adjustments to the flight plan and apply the necessary course corrections. A suitable path is visually navigated through the environment to reliably avoid obstacles without significant deviations from the original course. This paper describes a general low-cost stereo-vision sensor framework for passively estimating the range map between a forward-looking autonomous vehicle and its environment. Typical vehicles may be either unmanned ground or airborne vehicles. The range-map image describes the relative distance from the vehicle to the observed environment and contains information that could be used to compute a navigable flight plan, as well as visual and geometric detail about the environment for other onboard processes or future missions. Aspects relating to information flow through the framework are discussed, along with issues such as robustness, implementation and other advantages and disadvantages of the framework. An outline of the physical structure of the system is presented, and an overview of the algorithms and applications of the framework is given.

  2. Structural Parameters Calibration for Binocular Stereo Vision Sensors Using a Double-Sphere Target

    PubMed Central

    Wei, Zhenzhong; Zhao, Kai

    2016-01-01

    Structural parameter calibration for the binocular stereo vision sensor (BSVS) is an important guarantee for high-precision measurements. We propose a method to calibrate the structural parameters of BSVS based on a double-sphere target. The target, consisting of two identical spheres with a known fixed distance, is freely placed in different positions and orientations. Any three non-collinear sphere centres determine a spatial plane whose normal vector under the two camera-coordinate-frames is obtained by means of an intermediate parallel plane calculated by the image points of sphere centres and the depth-scale factors. Hence, the rotation matrix R is solved. The translation vector T is determined using a linear method derived from the epipolar geometry. Furthermore, R and T are refined by nonlinear optimization. We also provide theoretical analysis on the error propagation related to the positional deviation of the sphere image and an approach to mitigate its effect. Computer simulations are conducted to test the performance of the proposed method with respect to the image noise level, target placement times and the depth-scale factor. Experimental results on real data show that the accuracy of measurement is higher than 0.9‰, with a distance of 800 mm and a view field of 250 × 200 mm2. PMID:27420063

  3. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior: it is able to aim at insects resting on the foliage above water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.
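
    The refraction correction rests on Snell's law applied to each viewing ray at an interface before triangulation. A minimal vector form of one refraction step (our illustration; the paper chains such steps through the air-glass-water stack):

      import numpy as np

      def refract(d, n, n1=1.0, n2=1.33):
          # d: unit ray direction; n: unit surface normal opposing the ray
          cos_i = -float(d @ n)
          r = n1 / n2
          k = 1.0 - r ** 2 * (1.0 - cos_i ** 2)
          if k < 0:
              return None            # total internal reflection
          return r * d + (r * cos_i - np.sqrt(k)) * n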

  4. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior: it is able to aim at insects resting on the foliage above water level, shoot them down with a squirt of water, and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving, behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure the full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level. PMID:19698749

  5. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision

    NASA Astrophysics Data System (ADS)

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-01

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
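
    The pixel-by-pixel matching between laser patterns can be illustrated with a classic scanline dynamic-programming matcher. The sketch below uses a plain absolute-difference data cost and a linear smoothness penalty, which is a simplification of the paper's fusion-guided cost; all names are ours.

      import numpy as np

      def dp_scanline_disparity(L, R, max_d=32, smooth=2.0):
          # Data cost: absolute intensity difference between L[i] and R[i-d]
          def data(i, d):
              j = i - d
              return abs(float(L[i]) - float(R[j])) if j >= 0 else np.inf

          n = len(L)
          cost = np.full((n, max_d), np.inf)
          back = np.zeros((n, max_d), dtype=int)
          for d in range(max_d):
              cost[0, d] = data(0, d)
          for i in range(1, n):
              for d in range(max_d):
                  # Linear penalty on disparity jumps between neighbours
                  prev = cost[i - 1] + smooth * np.abs(np.arange(max_d) - d)
                  back[i, d] = int(np.argmin(prev))
                  cost[i, d] = data(i, d) + prev[back[i, d]]
          disp = np.zeros(n, dtype=int)
          disp[-1] = int(np.argmin(cost[-1]))
          for i in range(n - 1, 0, -1):      # backtrack the optimal path
              disp[i - 1] = back[i, disp[i]]
          return disp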

  6. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.

    PubMed

    Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck

    2008-04-10

    One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with variations of projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps the active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.

  7. Stereo-vision system for finger tracking in breast self-examination

    NASA Astrophysics Data System (ADS)

    Zeng, Jianchao; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    minor changes in illumination. Neighbor search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for extracted features to avoid any possible false features. After detecting the features in the images, 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 × 120 on an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of palpation and to document it. With real-time visual feedback, it can be used to train both patients and new physicians to improve their performance of palpation and thus improve the rate of breast tumor detection.

  8. Stereo and motion parallax cues in human 3D vision: can they vanish without a trace?

    PubMed

    Rauschecker, Andreas M; Solomon, Samuel G; Glennerster, Andrew

    2006-01-01

    In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues. PMID:17209749

  9. Stereo vision-based depth of field rendering on a mobile device

    NASA Astrophysics Data System (ADS)

    Wang, Qiaosong; Yu, Zhan; Rasmussen, Christopher; Yu, Jingyi

    2014-03-01

    The depth of field (DoF) effect is a useful tool in photography and cinematography because of its aesthetic value. However, capturing and displaying dynamic DoF effects were until recently a quality unique to expensive and bulky movie cameras. A computational approach to generate realistic DoF effects for mobile devices such as tablets is proposed. We first calibrate the rear-facing stereo cameras and rectify the stereo image pairs through the FCam API, then generate a low-resolution disparity map using graph-cuts stereo matching and subsequently upsample it via joint bilateral upsampling. Next, we generate a synthetic light field by warping the raw color image to nearby viewpoints, according to the corresponding values in the upsampled high-resolution disparity map. Finally, we render the dynamic DoF effect on the tablet screen with light field rendering. The user can easily capture and generate desired DoF effects with arbitrary aperture sizes or focal depths using the tablet only, with no additional hardware or software required. The system has been examined in a variety of environments with satisfactory results, according to the subjective evaluation tests.
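
    A common way to approximate the rendering stage is layered defocus: slice the scene by disparity and blur each layer in proportion to its distance from the focal disparity. The fragment below is such a simplified stand-in for the light-field rendering described; the layer count and aperture scaling are our assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def render_dof(image, disparity, focus_d, aperture=0.5, n_layers=8):
          # image: (H, W, 3) float array; disparity: (H, W) from stereo
          out = np.zeros_like(image, dtype=float)
          edges = np.linspace(disparity.min(), disparity.max(), n_layers + 1)
          for lo, hi in zip(edges[:-1], edges[1:]):
              mask = (disparity >= lo) & (disparity <= hi)
              coc = aperture * abs(0.5 * (lo + hi) - focus_d)  # blur radius
              blurred = gaussian_filter(image, sigma=(coc, coc, 0))
              out[mask] = blurred[mask]
          return out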

  10. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools

    PubMed Central

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-01-01

    Context: Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. Aims: The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. Materials and Methods: A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. ‘Unable to test’ was defined as an inappropriate response or uncooperative behavior despite the best efforts of the screener. Results: The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for the respective visual function tests was found to be lower among children in special education classes (P < 0.001), but not that of the Cambridge Crowding Cards, Lang Stereo test II and CVTME. Conclusion: Non-verbal or "matching" approaches were found to be superior in testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities. PMID:24008790

  11. A Skeleton-Based 3D Shape Reconstruction of Free-Form Objects with Stereo Vision

    NASA Astrophysics Data System (ADS)

    Saini, Deepika; Kumar, Sanjeev

    2015-12-01

    In this paper, an efficient approach is proposed for recovering the 3D shape of a free-form object from an arbitrary pair of its stereo images. In particular, the reconstruction problem is treated as the reconstruction of the skeleton and the external boundary of the object. The reconstructed skeleton is termed the line-like representation or curve-skeleton of the 3D object. The proposed solution for object reconstruction is based on this evolved curve-skeleton: it is used as a seed for recovering the shape of the 3D object, and the extracted boundary is used for terminating the growing process. A NURBS-skeleton is used to extract the skeleton in both views. The affine-invariant property of convex hulls is used to establish correspondence between the skeletons and boundaries in the stereo images. In the growing process, a distance field is defined for each skeleton point as the smallest distance from that point to the boundary of the object. A sphere centered at a skeleton point, with radius equal to the minimum distance to the boundary, is tangential to the boundary. Filling in the spheres centered at each skeleton point reconstructs the object. Several results are presented to check the applicability and validity of the proposed algorithm.
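
    The growing process itself is compact enough to sketch. The Python fragment below, with hypothetical point arrays, marks a voxel grid as the union of spheres centered on skeleton points, each sphere's radius being the smallest distance from its center to the object boundary; the NURBS-skeleton extraction and convex-hull correspondence steps are not shown.

        import numpy as np

        def fill_spheres(skeleton_pts, boundary_pts, voxels):
            """skeleton_pts: (S, 3); boundary_pts: (B, 3); voxels: (N, 3) centres."""
            occupied = np.zeros(len(voxels), dtype=bool)
            for s in skeleton_pts:
                # Distance field: nearest boundary distance gives the radius.
                r = np.min(np.linalg.norm(boundary_pts - s, axis=1))
                occupied |= np.linalg.norm(voxels - s, axis=1) <= r
            return occupied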

  12. Interactive stereo games to improve vision in children with amblyopia using dichoptic stimulation

    NASA Astrophysics Data System (ADS)

    Herbison, Nicola; Ash, Isabel M.; MacKeith, Daisy; Vivian, Anthony; Purdy, Jonathan H.; Fakis, Apostolos; Cobb, Sue V.; Hepburn, Trish; Eastgate, Richard M.; Gregson, Richard M.; Foss, Alexander J. E.

    2015-03-01

    Amblyopia is a common condition affecting 2% of all children, and traditional treatment consists of either wearing a patch or penalisation. We have developed a treatment using stereo technology, not to provide a 3D image but to allow dichoptic stimulation. This involves presenting an image with the same background to both eyes, but with the features of interest removed from the image presented to the normal eye, with the aim of preferentially stimulating visual development in the amblyopic, or lazy, eye. Our system, called I-BiT, can use either a game or a video (DVD) source as input. Pilot studies show that this treatment is effective with short treatment times, and it has proceeded to a randomised controlled clinical trial. The early indications are that the treatment has a high degree of acceptability and correspondingly good compliance.
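
    The dichoptic image preparation described above can be sketched as follows (Python; the feature mask and all names are hypothetical, and the I-BiT rendering pipeline itself is not reproduced): the amblyopic eye receives the full frame, while in the fellow eye's frame the masked features are replaced by background.

        import numpy as np

        def dichoptic_pair(frame, feature_mask, background):
            # frame, background: (h, w, 3) images; feature_mask: (h, w) boolean
            amblyopic_view = frame                                   # full content
            fellow_view = np.where(feature_mask[..., None], background, frame)
            return amblyopic_view, fellow_view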

  13. Finger tracking for hand-held device interface using profile-matching stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau

    2013-01-01

    Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. If this method of operation is used while driving, the probability of accidents and deaths increases substantially. With a non-contact control interface, people do not need to touch the screen. As a result, they do not need to pay as much attention to their phones and can thus drive more safely than they would otherwise. This interface can be achieved with real-time stereo vision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3-D information from a pair of stereo images in real time. While this algorithm does involve a trade-off between accuracy and processing speed, the results prove the accuracy is sufficient for the practical use of recognizing human poses and tracking finger movement. By choosing an interval of disparity, an object at a certain distance range can be segmented; in other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that detection of correspondences relies on the shape of the profile and not on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space at a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
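
    The distance-based segmentation step can be sketched in a few lines (Python; arrays and thresholds are illustrative, not the paper's implementation): pixels whose disparity falls inside the chosen interval are kept, and the centroid of the surviving region gives a simple track point.

        import numpy as np

        def segment_by_disparity(disparity, d_min, d_max):
            """Keep pixels whose disparity lies in [d_min, d_max]."""
            mask = (disparity >= d_min) & (disparity <= d_max)
            ys, xs = np.nonzero(mask)
            centroid = (xs.mean(), ys.mean()) if xs.size else None
            return mask, centroid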

  14. Quality inspection guided laser processing of irregular shape objects by stereo vision measurement: application in badminton shuttle manufacturing

    NASA Astrophysics Data System (ADS)

    Qi, Li; Wang, Shun; Zhang, Yixin; Sun, Yingying; Zhang, Xuping

    2015-11-01

    The quality inspection process is usually carried out after the first processing of the raw materials, such as cutting and milling, because the parts of the materials to be used are unidentified until they have been trimmed. If the quality of the material is assessed before the laser process, the energy and effort wasted on defective materials can be saved. We propose a new production scheme that achieves quantitative quality inspection prior to the first laser cutting by means of three-dimensional (3-D) vision measurement. First, the 3-D model of the object is reconstructed by the stereo cameras, from which the spatial cutting path is derived. Second, in collaboration with another rear camera, the 3-D cutting path is reprojected to both the frontal and rear views of the object, generating the regions of interest (ROIs) for surface defect analysis. Accurate vision-guided laser processing and reprojection-based ROI segmentation are enabled by a global-optimization-based trinocular calibration method. A prototype system was built and tested with the processing of raw duck feathers for high-quality badminton shuttle manufacture. Combined with a two-dimensional wavelet-decomposition-based defect analysis algorithm, both the geometrical and appearance features of the raw feathers are quantified before they are cut into small patches, resulting in fully automatic feather cutting and sorting.
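
    The reprojection-based ROI generation can be sketched with standard OpenCV calls; K, dist, rvec and tvec stand for assumed calibration outputs of one camera, and the padding value is illustrative.

        import cv2
        import numpy as np

        def path_to_roi(path_3d, K, dist, rvec, tvec, pad=10):
            """Project a 3-D cutting path into one view and bound it as an ROI."""
            pts, _ = cv2.projectPoints(path_3d.astype(np.float32), rvec, tvec, K, dist)
            x, y, w, h = cv2.boundingRect(pts.reshape(-1, 2).astype(np.float32))
            return x - pad, y - pad, w + 2 * pad, h + 2 * pad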

  15. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    NASA Astrophysics Data System (ADS)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

    We describe a monocular feature tracker (MFT), the first stage of a low-cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LACs) are used to reduce the data and processing bandwidths. The limited information given by LACs requires modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles provides sufficient information for local AGV navigation.

  16. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    NASA Astrophysics Data System (ADS)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection of the two interlaced images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
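
    One common way to interlace two views for a line-polarized screen is row interleaving, sketched below (Python; the rectified, colour-equalized views are assumed given, and this is not necessarily the paper's exact scheme).

        import numpy as np

        def interlace_rows(left, right):
            """Even rows from the left view, odd rows from the right view."""
            out = left.copy()
            out[1::2] = right[1::2]
            return out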

  17. Vision Loss With Sexual Activity.

    PubMed

    Lee, Michele D; Odel, Jeffrey G; Rudich, Danielle S; Ritch, Robert

    2016-01-01

    A 51-year-old white man presented with multiple episodes of transient painless unilateral vision loss precipitated by sexual intercourse. Examination was significant for closed angles bilaterally. His visual symptoms completely resolved following treatment with laser peripheral iridotomies.

  18. Hardware Implementation of a Spline-Based Genetic Algorithm for Embedded Stereo Vision Sensor Providing Real-Time Visual Guidance to the Visually Impaired

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Anderson, Jonathan D.; Archibald, James K.

    2008-12-01

    Many image and signal processing techniques have been applied to medical and health care applications in recent years. In this paper, we present a robust signal processing approach that can be used to solve the correspondence problem for an embedded stereo vision sensor to provide real-time visual guidance to the visually impaired. This approach is based on our new one-dimensional (1D) spline-based genetic algorithm to match signals. The algorithm processes image data lines as 1D signals to generate a dense disparity map, from which 3D information can be extracted. With recent advances in electronics technology, this 1D signal matching technique can be implemented and executed in parallel in hardware such as field-programmable gate arrays (FPGAs) to provide real-time feedback about the environment to the user. In order to complement (not replace) traditional aids for the visually impaired such as canes and Seeing Eye dogs, vision systems that provide guidance to the visually impaired must be affordable, easy to use, compact, and free from attributes that are awkward or embarrassing to the user. "Seeing Eye Glasses," an embedded stereo vision system utilizing our new algorithm, meets all these requirements.
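
    For reference, the sketch below shows the plain 1D formulation the paper builds on: one image row matched against the other with a windowed sum-of-absolute-differences cost over candidate shifts. The paper's actual contribution, evolving spline-based warps with a genetic algorithm in hardware, is replaced here by this much simpler stand-in.

        import numpy as np

        def scanline_disparity(left_row, right_row, max_d=32, w=5):
            """Windowed 1D matching of a single scanline (SAD cost)."""
            n = len(left_row)
            disp = np.zeros(n, dtype=int)
            for x in range(w, n - w):
                patch = left_row[x - w:x + w + 1].astype(np.float32)
                d_lim = min(max_d, x - w)
                costs = [np.abs(patch - right_row[x - d - w:x - d + w + 1]).sum()
                         for d in range(d_lim + 1)]
                disp[x] = int(np.argmin(costs))
            return disp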

  19. Scene interpretation module for an active vision system

    NASA Astrophysics Data System (ADS)

    Remagnino, P.; Matas, J.; Illingworth, John; Kittler, Josef

    1993-08-01

    In this paper an implementation of a high level symbolic scene interpreter for an active vision system is considered. The scene interpretation module uses low level image processing and feature extraction results to achieve object recognition and to build up a 3D environment map. The module is structured to exploit spatio-temporal context provided by existing partial world interpretations and has spatial reasoning to direct gaze control and thereby achieve efficient and robust processing using spatial focus of attention. The system builds and maintains an awareness of an environment which is far larger than a single camera view. Experiments on image sequences have shown that the system can: establish its position and orientation in a partially known environment, track simple moving objects such as cups and boxes, temporally integrate recognition results to establish or forget object presence, and utilize spatial focus of attention to achieve efficient and robust object recognition. The system has been extensively tested using images from a single steerable camera viewing a simple table top scene containing box and cylinder-like objects. Work is currently progressing to further develop its competences and interface it with the Surrey active stereo vision head, GETAFIX.

  20. Development of a 3D parallel mechanism robot arm with three vertical-axial pneumatic actuators combined with a stereo vision system.

    PubMed

    Chiang, Mao-Hsiung; Lin, Hao-Ting

    2011-01-01

    This study aimed to develop a novel 3D parallel mechanism robot driven by three vertical-axial pneumatic actuators with a stereo vision system for path tracking control. The mechanical system and the control system are the primary novel parts of the developed robot. In the mechanical system, the 3D parallel mechanism robot contains three serial chains, a fixed base, a movable platform and a pneumatic servo system. The parallel mechanism is first designed and analyzed for realizing 3D motion of the robot's end-effector in the X-Y-Z coordinate system. The inverse and forward kinematics of the parallel mechanism robot are investigated using the Denavit-Hartenberg (D-H) notation coordinate system. The pneumatic actuators in the three vertical motion axes are modeled. In the control system, a Fourier series-based adaptive sliding-mode controller with H(∞) tracking performance is used to design the path tracking controllers of the three vertical servo pneumatic actuators, realizing 3D path tracking control of the end-effector. Three optical linear scales measure the positions of the three pneumatic actuators, from which the 3D position of the end-effector is calculated by means of the kinematics. However, the calculated 3D position cannot account for the manufacturing and assembly tolerances of the joints and the parallel mechanism, so errors exist between the actual and the calculated 3D positions of the end-effector. To improve this situation, sensor collaboration is developed in this paper: a stereo vision system collaborates with the three position sensors of the pneumatic actuators. The stereo vision system, combining two CCD cameras, serves to measure the actual 3D position of the end-effector and calibrate the error between the actual and the calculated 3D positions. Furthermore, to

  1. Coevolution of active vision and feature selection.

    PubMed

    Floreano, Dario; Kato, Toshifumi; Marocco, Davide; Sauser, Eric

    2004-03-01

    We show that complex visual tasks, such as position- and size-invariant shape recognition and navigation in the environment, can be tackled with simple architectures generated by a coevolutionary process of active vision and feature selection. Behavioral machines equipped with primitive vision systems and direct pathways between visual and motor neurons are evolved while they freely interact with their environments. We describe the application of this methodology in three sets of experiments, namely, shape discrimination, car driving, and robot navigation. We show that these systems develop sensitivity to a number of oriented, retinotopic visual features (edges, corners, height) and a behavioral repertoire to locate, bring, and keep these features in sensitive regions of the vision system, resembling strategies observed in simple insects. PMID:15052484

  3. Design of a high-performance telepresence system incorporating an active vision system for enhanced visual perception of remote environments

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Asbery, Richard

    1995-12-01

    This paper describes the design, development and implementation of a telepresence system for hazardous environment applications. Its primary feature is a high-performance active stereo vision system slaved to the motion of the operator's head. To simulate the presence of an operator in a remote, hazardous environment, it is necessary to provide sufficient visual information about the remote environment, and the operator must be able to interact with the environment to carry out manipulative tasks. To achieve an enhanced sense of visual perception, we have developed a tightly integrated pan-and-tilt stereo vision system with a head-mounted display. The motion of the operator's head is monitored by a six-DOF sensor which provides the demand signals to servocontrol the active vision system. The result is a compact yet high-performance system, employing mechatronic principles, that can be mounted on a small mobile platform. We have also developed an open-architecture controller for the dynamic, active vision system which exhibits the dynamic performance characteristics of the human head-eye system, forming a natural and intuitive interface. A series of tests has been conducted to establish the system latency and to explore the effectiveness of remote 3D human perception, particularly with regard to manipulation tasks and navigation. The results of these tests are presented.

  4. Robust active binocular vision through intrinsically motivated learning.

    PubMed

    Lonini, Luca; Forestier, Sébastien; Teulière, Céline; Zhao, Yu; Shi, Bertram E; Triesch, Jochen

    2013-01-01

    The efficient coding hypothesis posits that sensory systems of animals strive to encode sensory signals efficiently by taking into account the redundancies in them. This principle has been very successful in explaining response properties of visual sensory neurons as adaptations to the statistics of natural images. Recently, we have begun to extend the efficient coding hypothesis to active perception through a form of intrinsically motivated learning: a sensory model learns an efficient code for the sensory signals while a reinforcement learner generates movements of the sense organs to improve the encoding of the signals. To this end, it receives an intrinsically generated reinforcement signal indicating how well the sensory model encodes the data. This approach has been tested in the context of binocular vision, leading to the autonomous development of disparity tuning and vergence control. Here we systematically investigate the robustness of the new approach in the context of a binocular vision system implemented on a robot. Robustness is an important aspect that reflects the ability of the system to deal with unmodeled disturbances or events, such as insults to the system that displace the stereo cameras. To demonstrate the robustness of our method and its ability to self-calibrate, we introduce various perturbations and test if and how the system recovers from them. We find that (1) the system can fully recover from a perturbation that can be compensated through the system's motor degrees of freedom, (2) performance degrades gracefully if the system cannot use its motor degrees of freedom to compensate for the perturbation, and (3) recovery from a perturbation is improved if both the sensory encoding and the behavior policy can adapt to the perturbation. Overall, this work demonstrates that our intrinsically motivated learning approach for efficient coding in active perception gives rise to a self-calibrating perceptual system of high robustness. PMID:24223552

  6. Active vision in satellite scene analysis

    NASA Technical Reports Server (NTRS)

    Naillon, Martine

    1994-01-01

    In earth observation or planetary exploration it is necessary to have more and more autonomous systems, able to adapt to unpredictable situations. This imposes the use, in artificial systems, of new concepts in cognition, based on the fact that perception should not be separated from the recognition and decision-making levels. This means that low-level signal processing (the perception level) should interact with symbolic and high-level processing (the decision level). This paper describes the new concept of active vision, implemented in Distributed Artificial Intelligence by Dassault Aviation following a 'structuralist' principle. An application to spatial image interpretation is given, oriented toward flexible robotics.

  7. 3D Reconstruction of Cultural Tourism Attractions from Indoor to Outdoor Based on Portable Four-Camera Stereo Vision System

    NASA Astrophysics Data System (ADS)

    Shao, Z.; Li, C.; Zhong, S.; Liu, B.; Jiang, H.; Wen, X.

    2015-05-01

    Building fine 3D models from outdoor to indoor is becoming a necessity for protecting cultural tourism resources. However, existing 3D modelling technologies mainly focus on outdoor areas, whereas a 3D model should contain detailed descriptions of both appearance and internal structure, including architectural components. In this paper, a portable four-camera stereo photographic measurement system is developed, which provides a professional solution for fast 3D data acquisition, processing, integration, reconstruction and visualization. Given a specific scene or object, it can directly collect physical geometric information such as positions, sizes and shapes of an object or a scene, as well as physical property information such as materials and textures. On the basis of this information, a 3D model can be automatically constructed. The system has been applied to the indoor-outdoor seamless modelling of distinctive architecture in two typical cultural tourism zones, namely the Tibetan and Qiang ethnic minority villages in the Sichuan Jiuzhaigou Scenic Area and the Tujia ethnic minority villages in the Hubei Shennongjia Nature Reserve, providing a new method and platform for the protection of minority cultural characteristics, 3D reconstruction and cultural tourism.

  8. Deep Vision: An In-Trawl Stereo Camera Makes a Step Forward in Monitoring the Pelagic Community

    PubMed Central

    Underwood, Melanie J.; Rosen, Shale; Engås, Arill; Eriksen, Elena

    2014-01-01

    Ecosystem surveys are carried out annually in the Barents Sea by Russia and Norway to monitor the spatial distribution of ecosystem components and to study population dynamics. One component of the survey is mapping the upper pelagic zone using a trawl towed at several depths. However, the current technique with a single codend does not provide the fine-scale spatial data needed to directly study species overlaps. An in-trawl camera system, Deep Vision, was mounted in front of the codend in order to acquire continuous images of all organisms passing. It was possible to identify and quantify most young-of-the-year fish (e.g. Gadus morhua, Boreogadus saida and Reinhardtius hippoglossoides) and zooplankton, including Ctenophora, which are usually damaged in the codend. The system showed potential for measuring the length of small organisms and also recorded the vertical and horizontal positions where individuals were imaged. Young-of-the-year fish were difficult to identify when passing the camera at maximum range and to quantify during high densities. In addition, a large number of fish with damaged opercula were observed passing the Deep Vision camera during heaving, suggesting individuals had become entangled in meshes farther forward in the trawl. This indicates that unknown numbers of fish are probably lost in forward sections of the trawl and that the heaving procedure may influence the number of fish entering the codend, with implications for abundance indices and understanding population dynamics. This study suggests modifications to the Deep Vision and the trawl to increase our understanding of the population dynamics. PMID:25393121

  10. Leisure Activity Participation of Elderly Individuals with Low Vision.

    ERIC Educational Resources Information Center

    Heinemann, Allen W.

    1988-01-01

    Studied low vision elderly clinic patients (N=63) who reported participation in six categories of leisure activities currently and at onset of vision loss. Found subjects reported significant declines in five of six activity categories. Found prior activity participation was related to current participation only for active crafts, participatory…

  11. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold K. P.

    1994-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.
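
    Of the three cues mentioned, photometric stereo has the most compact classical formulation: given images taken under known light directions, per-pixel albedo and surface normals follow from a linear least-squares solve. A minimal sketch (illustrative arrays; the integration with shape from shading and binocular stereo is not shown):

        import numpy as np

        def photometric_stereo(images, lights):
            """images: (k, h, w) intensities; lights: (k, 3) unit light directions."""
            k, h, w = images.shape
            I = images.reshape(k, -1)
            # Solve I = L @ G per pixel, where G = albedo * normal.
            G, *_ = np.linalg.lstsq(lights, I, rcond=None)
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-8)
            return albedo.reshape(h, w), normals.reshape(3, h, w)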

  12. Stereo imaging with spaceborne radars

    NASA Technical Reports Server (NTRS)

    Leberl, F.; Kobrick, M.

    1983-01-01

    Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three-dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space achieved by presenting an overlapping image pair to an observer so that a three-dimensional model is formed in the brain. Some of the observer's function can be performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques for generating three-dimensional models of a surface from two sets of monocular image measurements are the topic of stereology.

  13. Sharpening coarse-to-fine stereo vision by perceptual learning: asymmetric transfer across the spatial frequency spectrum

    PubMed Central

    Tran, Truyet T.; Craven, Ashley P.; Leung, Tsz-Wing; Chat, Sandy W.; Levi, Dennis M.

    2016-01-01

    Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes ‘beyond-the-plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations. PMID:26909178

  14. Recent and episodic volcanic and glacial activity on Mars revealed by the High Resolution Stereo Camera.

    PubMed

    Neukum, G; Jaumann, R; Hoffmann, H; Hauber, E; Head, J W; Basilevsky, A T; Ivanov, B A; Werner, S C; van Gasselt, S; Murray, J B; McCord, T

    2004-12-23

    The large-area coverage at a resolution of 10-20 metres per pixel in colour and three dimensions with the High Resolution Stereo Camera Experiment on the European Space Agency Mars Express Mission has made it possible to study the time-stratigraphic relationships of volcanic and glacial structures in unprecedented detail and give insight into the geological evolution of Mars. Here we show that calderas on five major volcanoes on Mars have undergone repeated activation and resurfacing during the last 20 per cent of martian history, with phases of activity as young as two million years, suggesting that the volcanoes are potentially still active today. Glacial deposits at the base of the Olympus Mons escarpment show evidence for repeated phases of activity as recently as about four million years ago. Morphological evidence is found that snow and ice deposition on the Olympus construct at elevations of more than 7,000 metres led to episodes of glacial activity at this height. Even now, water ice protected by an insulating layer of dust may be present at high altitudes on Olympus Mons.

  15. ROVER: A prototype active vision system

    NASA Astrophysics Data System (ADS)

    Coombs, David J.; Marsh, Brian D.

    1987-08-01

    The Roving Eyes project is an experiment in active vision. We present the design and implementation of a prototype that tracks colored balls in images from an on-line charge coupled device (CCD) camera. Rover is designed to keep up with its rapidly changing environment by handling best and average case conditions and ignoring the worst case. This allows Rover's techniques to be less sophisticated and consequently faster. Each of Rover's major functional units is relatively isolated from the others, and an executive which knows all the functional units directs the computation by deciding which jobs would be most effective to run. This organization is realized with a priority queue of jobs and their arguments. Rover's structure not only allows it to adapt its strategy to the environment, but also makes the system extensible. A capability can be added to the system by adding a functional module with a well defined interface and by modifying the executive to make use of the new module. The current implementation is discussed in the appendices.
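
    The executive organization described above reduces to very little code. A sketch in Python (job functions and priority values are hypothetical; the original system naturally used different tooling):

        import heapq

        class Executive:
            """Always run the job currently judged most effective
            (lowest priority value wins; ties run first-in, first-out)."""
            def __init__(self):
                self._queue, self._count = [], 0

            def post(self, priority, job, *args):
                heapq.heappush(self._queue, (priority, self._count, job, args))
                self._count += 1

            def step(self):
                if self._queue:
                    _, _, job, args = heapq.heappop(self._queue)
                    job(*args)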

  16. Vision restoration after brain and retina damage: the "residual vision activation theory".

    PubMed

    Sabel, Bernhard A; Henrich-Noack, Petra; Fedorov, Anton; Gall, Carolin

    2011-01-01

    Vision loss after retinal or cerebral visual injury (CVI) was long considered to be irreversible. However, there is considerable potential for vision restoration and recovery even in adulthood. Here, we propose the "residual vision activation theory" of how visual functions can be reactivated and restored. CVI is usually not complete, but some structures are typically spared by the damage. They include (i) areas of partial damage at the visual field border, (ii) "islands" of surviving tissue inside the blind field, (iii) extrastriate pathways unaffected by the damage, and (iv) downstream, higher-level neuronal networks. However, residual structures have a triple handicap to be fully functional: (i) fewer neurons, (ii) lack of sufficient attentional resources because of the dominant intact hemisphere caused by excitation/inhibition dysbalance, and (iii) disturbance in their temporal processing. Because of this resulting activation loss, residual structures are unable to contribute much to everyday vision, and their "non-use" further impairs synaptic strength. However, residual structures can be reactivated by engaging them in repetitive stimulation by different means: (i) visual experience, (ii) visual training, or (iii) noninvasive electrical brain current stimulation. These methods lead to strengthening of synaptic transmission and synchronization of partially damaged structures (within-systems plasticity) and downstream neuronal networks (network plasticity). Just as in normal perceptual learning, synaptic plasticity can improve vision and lead to vision restoration. This can be induced at any time after the lesion, at all ages and in all types of visual field impairments after retinal or brain damage (stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). If and to what extent vision restoration can be achieved is a function of the amount of residual tissue and its activation state. However, sustained improvements require repetitive

  18. Field-sequential stereo television

    NASA Technical Reports Server (NTRS)

    Perry, W. E.

    1974-01-01

    System includes viewing devices that provide low interference to normal vision. It provides a stereo display observable from a broader area. Left and right video cameras are focused on the object, and their output signals are time-multiplexed, with alternate fields provided by each camera. The multiplexed signal, fed to a standard television monitor, displays the left and right images of the object.

  19. Stereo images from space

    NASA Astrophysics Data System (ADS)

    Sabbatini, Massimo; Collon, Maximilien J.; Visentin, Gianfranco

    2008-02-01

    The Erasmus Recording Binocular (ERB1) was the first fully digital stereo camera used on the International Space Station. One year after its first utilisation, the results and feedback collected with various audiences have convinced us to continue exploiting the outreach potential of such media, with its unique capability to bring space down to earth and to share the feeling of weightlessness and confinement with viewers on earth. The production of stereo is progressing quickly, but it still poses problems for the distribution of the media. The Erasmus Centre of the European Space Agency has experienced how difficult it is to master the full production and distribution chain of a stereo system. Efforts are also under way to standardize the satellite broadcasting part of the distribution. A new stereo camera is being built, ERB2, to be launched to the International Space Station (ISS) in September 2008: it shall have 720p resolution, it shall be able to transmit its images to the ground in real time allowing the production of live programs, and it could possibly be used outside the ISS as well, in support of Extra Vehicular Activities of the astronauts. These new features are quite challenging to achieve within the reduced power and mass budget available to space projects, and we hope to inspire more designers to come up with ingenious ideas to build cameras capable of operating in the harsh Low Earth Orbit environment: radiation, temperature, power consumption and thermal design are the challenges to be met. The intent of this paper is to share with the readers the experience collected so far in all aspects of the 3D video production chain and to increase awareness of the unique content that we are collecting: nice stereo images from space can be used by all actors in the stereo arena to gain consensus on this powerful medium. With respect to last year we shall present the progress made in the following areas: a) the satellite broadcasting live of stereo content to D

  20. STEREO Education and Public Outreach Efforts

    NASA Technical Reports Server (NTRS)

    Kucera, Therese

    2007-01-01

    STEREO has had a big year this year with its launch and the start of data collection. STEREO has mostly focused on informal educational venues, most notably with STEREO 3D images made available to museums through the NASA Museum Alliance. Other activities have involved making STEREO imagery available though the AMNH network and Viewspace, continued partnership with the Christa McAuliffe Planetarium, data sonification projects, preservice teacher training, and learning activity development.

  1. Flexible task-specific control using active vision

    NASA Astrophysics Data System (ADS)

    Firby, Robert J.; Swain, Michael J.

    1992-04-01

    This paper is about the interface between continuous and discrete robot control. We advocate encapsulating continuous actions and their related sensing strategies into behaviors called situation-specific activities, which can be constructed by a symbolic reactive planner. Task-specific, real-time perception is a fundamental part of these activities. While researchers have successfully used primitive touch and sonar sensors in such situations, it is more problematic to achieve reasonable performance with complex signals such as those from a video camera. Active vision routines are suggested as a means of incorporating visual data into real-time control and as one mechanism for designating aspects of the world in an indexical-functional manner. Active vision routines are a particularly flexible sensing methodology because different routines extract different functional attributes from the world using the same sensor. In fact, there will often be different active vision routines for extracting the same functional attribute using different processing techniques. This allows an agent substantial leeway to instantiate its activities in different ways under different circumstances using different active vision routines. We demonstrate the utility of this architecture with an object tracking example. A control system is presented that can be reconfigured by a reactive planner to achieve different tasks. We show how this system allows us to build interchangeable tracking activities that use either color-histogram or motion-based active vision routines.
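
    A colour-histogram routine of the kind referred to can be sketched with OpenCV's back-projection and mean-shift primitives (the ROI, bin count and ranges are illustrative; the paper's own routines are not reproduced here):

        import cv2

        def make_colour_tracker(frame, roi):
            """roi = (x, y, w, h) around the target in the first frame."""
            x, y, w, h = roi
            hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

            def track(next_frame, window):
                hsv = cv2.cvtColor(next_frame, cv2.COLOR_BGR2HSV)
                backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
                _, window = cv2.meanShift(backproj, window, term)
                return window  # updated (x, y, w, h)
            return track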

  2. The Sun in STEREO

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all, 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun from the two STEREO spacecraft, an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!

  3. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
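
    The resolution side of this trade-off can be illustrated with the standard stereo-ranging relation dZ ≈ Z^2 * dd / (f * b) for the parallel-camera approximation (the paper analyses converged cameras; all numbers below are illustrative):

        # Depth resolution dZ given disparity resolution dd, focal length f,
        # viewing distance Z and intercamera distance (baseline) b.
        f, Z, dd = 0.016, 1.4, 10e-6  # metres (one 10-micrometre pixel)
        for b in (0.06, 0.12, 0.24):
            dZ = Z ** 2 * dd / (f * b)
            print(f"baseline {b:.2f} m -> depth resolution {dZ * 100:.2f} cm")
        # Doubling the baseline halves dZ, i.e. gives finer depth resolution.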

  4. Fast stereo matching under varying illumination

    NASA Astrophysics Data System (ADS)

    Arunagiri, Sarala; Contreras, Adriana; Gallardo, Esthela; DattaGupta, Aritra; Teller, Patricia J.; Deroba, Joseph C.; Nguyen, Lam H.

    2012-06-01

    Stereo matching is the technique of finding the disparity map, or correspondence points, between two images acquired from different sensor positions; it is a core process in stereoscopy. Automatic stereo processing, which involves stereo matching, is important in many applications including vision-based obstacle avoidance for unmanned aerial vehicles (UAVs), extraction of weak targets in clutter, and automatic target detection. Due to its high computational complexity, stereo matching is one of the most heavily investigated topics in computer vision. Stereo image pairs captured under real conditions, in contrast to those captured under controlled conditions, are expected to differ from each other in aspects such as scale, rotation, radiometric differences, and noise. These factors increase the difficulty of efficient and accurate stereo matching. In this paper we evaluate the effectiveness of cost functions based on Normalized Cross Correlation (NCC) and Zero-mean Normalized Cross Correlation (ZNCC) on images containing speckle noise, differences in level of illumination, or both. This is achieved via experiments in which these cost functions are employed by a fast version of an existing modern algorithm, the graph-cut algorithm, to perform stereo matching on 24 image pairs. Stereo matching performance is evaluated in terms of execution time and the quality of the generated output, measured as two types of Root Mean Square (RMS) error of the generated disparity maps.
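
    For reference, the two window costs compared here differ only in mean subtraction, which is what makes ZNCC insensitive to additive illumination offsets. A sketch over flattened image windows:

        import numpy as np

        def ncc(a, b):
            """Normalized cross correlation of two equal-size windows."""
            return float((a * b).sum() /
                         (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def zncc(a, b):
            """Zero-mean NCC: subtract each window's mean first."""
            return ncc(a - a.mean(), b - b.mean())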

  5. Using perturbations to identify the brain circuits underlying active vision.

    PubMed

    Wurtz, Robert H

    2015-09-19

    The visual and oculomotor systems in the brain have been studied extensively in the primate. Together, they can be regarded as a single brain system that underlies active vision--the normal vision that begins with visual processing in the retina and extends through the brain to the generation of eye movement by the brainstem. The system is probably one of the most thoroughly studied brain systems in the primate, and it offers an ideal opportunity to evaluate the advantages and disadvantages of the series of perturbation techniques that have been used to study it. The perturbations have been critical in moving from correlations between neuronal activity and behaviour closer to a causal relation between neuronal activity and behaviour. The same perturbation techniques have also been used to tease out neuronal circuits that are related to active vision that in turn are driving behaviour. The evolution of perturbation techniques includes ablation of both cortical and subcortical targets, punctate chemical lesions, reversible inactivations, electrical stimulation, and finally the expanding optogenetic techniques. The evolution of perturbation techniques has supported progressively stronger conclusions about what neuronal circuits in the brain underlie active vision and how the circuits themselves might be organized.

  6. Robot Vision Library

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.

    2009-01-01

    The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry and unsurveyed camera calibration, and has unique support for very wide-angle lenses.

  7. Sequential digital elevation models of active lava flows from ground-based stereo time-lapse imagery

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.

    2014-11-01

    We describe a framework for deriving sequences of digital elevation models (DEMs) for the analysis of active lava flows using oblique stereo-pair time-lapse imagery. A photo-based technique was favoured over laser-based alternatives due to low equipment cost, high portability and capability for network expansion, with images of advancing flows captured by digital SLR cameras over durations of up to several hours. However, under typical field scale scenarios, relative camera orientations cannot be rigidly maintained (e.g. through the use of a stereo bar), preventing the use of standard stereo time-lapse processing software. Thus, we trial semi-automated DEM-sequence workflows capable of handling the small camera motions, variable image quality and restricted photogrammetric control that result from the practicalities of data collection at remote and hazardous sites. The image processing workflows implemented either link separate close-range photogrammetry and traditional stereo-matching software, or are integrated in a single software package based on structure-from-motion (SfM). We apply these techniques in contrasting case studies from Kilauea volcano, Hawaii and Mount Etna, Sicily, which differ in scale, duration and image texture. On Kilauea, the advance direction of thin fluid lava lobes was difficult to forecast, preventing good distribution of control. Consequently, volume changes calculated through the different workflows differed by ∼10% for DEMs (over ∼30 m²) that were captured once a minute for 37 min. On Mt. Etna, more predictable advance (∼3 m h⁻¹ for ∼3 h) of a thicker, more viscous lava allowed robust control to be deployed and volumetric change results were generally within 5% (over ∼500 m²). Overall, the integrated SfM software was more straightforward to use and, under favourable conditions, produced results comparable to those from the close-range photogrammetry pipeline. However, under conditions with limited options for photogrammetric

  8. #7 Comparing STEREO, Simulated Helioseismic Images

    NASA Video Gallery

    Farside direct observations from STEREO (left) and simultaneous helioseismic reconstructions (right). Medium to large size active regions clearly appear on the helioseismic images, however the smal...

  9. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system; then, according to the principle of binocular vision, we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation geometries of the eyes. The stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the photographed object.

  10. Computer vision

    NASA Technical Reports Server (NTRS)

    Gennery, D.; Cunningham, R.; Saund, E.; High, J.; Ruoff, C.

    1981-01-01

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of descriptions of two and three dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners are detailed. Recognition methods are described in which cross correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.

  12. Calibration of a 3D endoscopic system based on active stereo method for shape measurement of biological tissues and specimen.

    PubMed

    Furukawa, Ryo; Aoyama, Masahito; Hiura, Shinsaku; Aoki, Hirooki; Kominami, Yoko; Sanomura, Yoji; Yoshida, Shigeto; Tanaka, Shinji; Sagawa, Ryusuke; Kawasaki, Hiroshi

    2014-01-01

    For endoscopic medical treatment, measuring the size and shape of a lesion, such as a tumor, is important for improving diagnostic accuracy. We are developing a system to measure the shapes and sizes of living tissue by an active stereo method, using a normal endoscope to which a micro pattern projector is attached. In order to perform 3D reconstruction, the intrinsic and extrinsic parameters of the endoscopic camera and the pattern projector must be estimated; calibration of the pattern projector is particularly difficult. In this paper, we propose a method for simultaneous estimation of both the intrinsic and extrinsic parameters of the pattern projector, which simplifies the calibration procedure required in practical scenes. Furthermore, we have developed an efficient user interface to intuitively operate the calibration and reconstruction procedures. Using the developed system, we measured the shape of internal tissue of the soft palate of a human and a biological specimen.
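
    Once the projector is calibrated, each decoded pattern feature defines a known light plane, and reconstruction reduces to ray-plane intersection. A generic sketch (camera centre at the origin; the plane parameters are assumed to come from a calibration such as the one described):

        import numpy as np

        def triangulate(ray_dir, plane_n, plane_d):
            """Intersect the camera ray p(t) = t * ray_dir with the
            projector plane n . p + d = 0; returns the 3D surface point."""
            ray_dir = np.asarray(ray_dir, dtype=float)
            plane_n = np.asarray(plane_n, dtype=float)
            t = -plane_d / plane_n.dot(ray_dir)
            return t * ray_dir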

  13. Vector disparity sensor with vergence control for active vision systems.

    PubMed

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy in the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system.

  14. Vector Disparity Sensor with Vergence Control for Active Vision Systems

    PubMed Central

    Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P.; Ros, Eduardo

    2012-01-01

    This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy in the disparity computation around the fixation point and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance of the presented approaches in terms of frame rate, resource utilization, and accuracy is discussed. On the basis of these results, our study indicates that the gradient-based approach offers the best trade-off for integration with the active vision system. PMID:22438737

  15. Kiwi Forego Vision in the Guidance of Their Nocturnal Activities

    PubMed Central

    Martin, Graham R.; Wilson, Kerry-Jayne; Martin Wild, J.; Parsons, Stuart; Fabiana Kubke, M.; Corfield, Jeremy

    2007-01-01

    Background: In vision, there is a trade-off between sensitivity and resolution, and any eye which maximises information gain at low light levels needs to be large. This imposes exacting constraints upon vision in nocturnal flying birds. Eyes are essentially heavy, fluid-filled chambers, and in flying birds their increased size is countered by selection for both reduced body mass and the distribution of mass towards the body core. Freed from these mass constraints, it would be predicted that in flightless birds nocturnality should favour the evolution of large eyes and reliance upon visual cues for the guidance of activity. Methodology/Principal Findings: We show that in Kiwi (Apterygidae), flightlessness and nocturnality have, in fact, resulted in the opposite outcome. Kiwi show minimal reliance upon vision, indicated by eye structure, visual field topography, and brain structures, and increased reliance upon tactile and olfactory information. Conclusions/Significance: This lack of reliance upon vision and increased reliance upon tactile and olfactory information in Kiwi is markedly similar to the situation in nocturnal mammals that exploit the forest floor. That Kiwi and mammals evolved to exploit these habitats quite independently provides evidence for convergent evolution in their sensory capacities that are tuned to a common set of perceptual challenges found in forest floor habitats at night and which cannot be met by the vertebrate visual system. We propose that the Kiwi visual system has undergone adaptive regressive evolution driven by the trade-off between the relatively low rate of gain of visual information that is possible at low light levels, and the metabolic costs of extracting that information. PMID:17332846

  16. Active vision and receptive field development in evolutionary robots.

    PubMed

    Floreano, Dario; Suzuki, Mototaka; Mattiussi, Claudio

    2005-01-01

    In this paper, we describe the artificial evolution of adaptive neural controllers for an outdoor mobile robot equipped with a mobile camera. The robot can dynamically select the gazing direction by moving the body and/or the camera. The neural control system, which maps visual information to motor commands, is evolved online by means of a genetic algorithm, but the synaptic connections (receptive fields) from visual photoreceptors to internal neurons can also be modified by Hebbian plasticity while the robot moves in the environment. We show that robots evolved in physics-based simulations with Hebbian visual plasticity display more robust adaptive behavior when transferred to real outdoor environments as compared to robots evolved without visual plasticity. We also show that the formation of visual receptive fields is significantly and consistently affected by active vision as compared to receptive fields formed from grid-sampled images of the robot's environment. Finally, we show that the interplay between active vision and receptive field formation amounts to the selection and exploitation of a small and constant subset of visual features available to the robot.
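
    The Hebbian receptive-field adaptation described above can be sketched in a few lines; this is a generic Hebb rule with weight normalization, an assumption on our part rather than the paper's exact update:

      import numpy as np

      def hebbian_update(weights, photoreceptors, neuron_output, lr=0.01):
          """One Hebbian step: strengthen synapses whose presynaptic
          (photoreceptor) activity coincides with postsynaptic activity,
          then renormalize so the receptive field stays bounded."""
          weights = weights + lr * neuron_output * photoreceptors
          norm = np.linalg.norm(weights)
          return weights / norm if norm > 0 else weights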

  17. Stereo visualization in the ground segment tasks of the science space missions

    NASA Astrophysics Data System (ADS)

    Korneva, Natalia; Nazarov, Vladimir; Mogilevsky, Mikhail; Nazirov, Ravil

    The ground segment is one of the key components of any science space mission; its functionality substantially determines the scientific effectiveness of the experiment as a whole. Its distinguishing feature, in contrast to the other information systems of scientific space projects, is the interaction between the researcher and the project information system in order to interpret the data obtained during experiments. The ability to visualize the data being processed is therefore an essential prerequisite for ground segment software, and the use of modern technological solutions and approaches in this area will increase science return in general and provide a framework for the creation of new experiments. Visualization of processed data mostly relies on 2D and 3D graphics, reflecting the capabilities of traditional visualization tools. Stereo visualization methods are also used actively for some tasks, but their use is usually limited to areas such as virtual and augmented reality and remote sensing data processing. The low prevalence of stereo visualization methods in science ground segment tasks is primarily explained by the extremely high cost of the necessary hardware. Recently, however, low-cost hardware solutions for stereo visualization based on the page-flip method of view separation have appeared. It therefore seems promising to use stereo visualization as an instrument for investigating a wide range of problems, mainly the stereo visualization of complex physical processes as well as mathematical abstractions and models. This article describes an attempt to use this approach: the details and problems of using stereo visualization (page-flip method based on the NVIDIA 3D Vision Kit and a GeForce graphics processor) to display datasets of magnetospheric satellite onboard measurements, and in the development of software for manual stereo matching.

  18. Active Vision in Marmosets: A Model System for Visual Neuroscience

    PubMed Central

    Reynolds, John H.; Miller, Cory T.

    2014-01-01

    The common marmoset (Callithrix jacchus), a small-bodied New World primate, offers several advantages to complement vision research in larger primates. Studies in the anesthetized marmoset have detailed the anatomy and physiology of their visual system (Rosa et al., 2009) while studies of auditory and vocal processing have established their utility for awake and behaving neurophysiological investigations (Lu et al., 2001a,b; Eliades and Wang, 2008a,b; Osmanski and Wang, 2011; Remington et al., 2012). However, a critical unknown is whether marmosets can perform visual tasks under head restraint. This has been essential for studies in macaques, enabling both accurate eye tracking and head stabilization for neurophysiology. In one set of experiments we compared the free viewing behavior of head-fixed marmosets to that of macaques, and found that their saccadic behavior is comparable across a number of saccade metrics and that saccades target similar regions of interest including faces. In a second set of experiments we applied behavioral conditioning techniques to determine whether the marmoset could control fixation for liquid reward. Two marmosets could fixate a central point and ignore peripheral flashing stimuli, as needed for receptive field mapping. Both marmosets also performed an orientation discrimination task, exhibiting a saturating psychometric function with reliable performance and shorter reaction times for easier discriminations. These data suggest that the marmoset is a viable model for studies of active vision and its underlying neural mechanisms. PMID:24453311

  19. An active vision system for multitarget surveillance in dynamic environments.

    PubMed

    Bakhtari, Ardevan; Benhabib, Beno

    2007-02-01

    This paper presents a novel agent-based method for the dynamic, coordinated selection and positioning of active-vision cameras for the simultaneous surveillance of multiple objects of interest as they travel through a cluttered environment with a priori unknown trajectories. The proposed system dynamically adjusts not only the orientation but also the position of the cameras in order to maximize the system's performance by avoiding occlusions and acquiring images with preferred viewing angles. Sensor selection and positioning are accomplished through an agent-based approach. The proposed sensing-system reconfiguration strategy has been verified via simulations and implemented on an experimental prototype setup for automated facial recognition. Both simulations and experimental analyses have shown that the use of dynamic sensors along with an effective online dispatching strategy can tangibly improve the surveillance performance of a sensing system.

  20. A vision architecture for the extravehicular activity retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1992-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools, equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This report documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios will be discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  1. Range gated active night vision system for automobiles.

    PubMed

    David, Ofer; Kopeika, Norman S; Weizer, Boaz

    2006-10-01

    Night vision is an emerging safety feature being introduced for automobiles. We develop what we believe is an innovative new night vision system based on gated imaging principles. The concept of gated imaging is described, along with its basic advantages, including the backscatter-reduction mechanism that improves vision through fog, rain, and snow. Performance is evaluated by analyzing bar-pattern modulation and comparing against Johnson chart predictions.
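
    The timing at the heart of range gating is round-trip light-travel arithmetic; a small illustrative computation (parameter names are ours, not from the paper):

      C = 3.0e8  # speed of light, m/s

      def gate_timing(r_min, r_max):
          """Open the sensor gate only for light returning from the range
          slice [r_min, r_max]; backscatter from fog, rain, or snow closer
          than r_min arrives earlier and is never integrated."""
          delay = 2.0 * r_min / C            # seconds until the gate opens
          width = 2.0 * (r_max - r_min) / C  # how long the gate stays open
          return delay, width

      # e.g. a 50-200 m slice: the gate opens after ~333 ns for ~1 microsecond
      print(gate_timing(50.0, 200.0))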

  2. STEREO Mission Design

    NASA Technical Reports Server (NTRS)

    Dunham, David W.; Guzman, Jose J.; Sharer, Peter J.; Friessen, Henry D.

    2007-01-01

    STEREO (Solar-TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes program (STP) of the National Aeronautics and Space Administration (NASA). STEREO is the first mission to utilize phasing loops and multiple lunar flybys to alter the trajectories of more than one satellite. This paper describes the launch computation methodology, the launch constraints, and the resulting nine launch windows that were prepared for STEREO. More details are provided for the window in late October 2006 that was actually used.

  3. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it can correct much larger initial disparity errors than previous approaches, and it is more general, as it applies not only to the ground plane.
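
    For reference, the conventional parabola-fitting refinement that the paper identifies as the source of pixel-locking can be written as follows (a sketch of the standard estimator, not of the paper's proposed Lucas-Kanade-style fix):

      def subpixel_parabola(cost, d):
          """Fit a parabola through the aggregated matching costs at the
          integer winner d and its two neighbors, and return the disparity
          at the parabola's minimum. This estimator is biased toward
          integer disparities, producing the 'pixel-locking' artifact."""
          c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
          denom = c_m - 2.0 * c_0 + c_p
          if denom == 0:
              return float(d)
          return d + 0.5 * (c_m - c_p) / denom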

  4. Simple method for calibrating omnidirectional stereo with multiple cameras

    NASA Astrophysics Data System (ADS)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

    Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. They usually employ a mirror, which cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system consisting of eight cameras, where each pair of vertical cameras constitutes one stereo system. Camera calibration is the first step required to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so calibrating all eight cameras this way is tedious. In this paper, we present a simple calibration procedure using a cubic calibration structure that surrounds the omnidirectional stereo system. With it, all the cameras of the omnidirectional stereo system can be calibrated in just one shot.

  5. Acceleration of Stereo Correlation in Verilog

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos

    2006-01-01

    To speed up vision processing in low-speed, low-power devices, embedding FPGA hardware is becoming an effective way to add processing capability. FPGAs offer the ability to flexibly add parallel and/or deeply pipelined computation to embedded processors without adding significantly to the mass and power requirements of an embedded system. This paper discusses the JPL stereo vision system and describes how a portion of that system was accelerated by using custom FPGA hardware to process its computationally intensive portions. The architecture described takes full advantage of an FPGA's ability to use many small computation elements in parallel. This resulted in a 16-fold speedup in real hardware over a simple linear processor for computing image correlation and disparity.
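
    The correlation kernel being accelerated is, at its core, a sum-of-absolute-differences (SAD) search over candidate disparities; a scalar Python reference version is shown below (illustrative only, not the FPGA design, and JPL's actual correlator may differ):

      import numpy as np

      def sad_disparity(left, right, window=7, max_disp=64):
          """Brute-force SAD block matching: for each pixel, pick the
          disparity whose window-wise absolute difference is smallest.
          The FPGA speedup comes from evaluating many windows in parallel."""
          h, w = left.shape
          half = window // 2
          disp = np.zeros((h, w), dtype=np.int32)
          for y in range(half, h - half):
              for x in range(half + max_disp, w - half):
                  patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
                  costs = [
                      np.abs(patch_l - right[y - half:y + half + 1,
                                             x - d - half:x - d + half + 1]).sum()
                      for d in range(max_disp)
                  ]
                  disp[y, x] = int(np.argmin(costs))
          return disp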

  6. Solar Eclipse, STEREO Style

    NASA Technical Reports Server (NTRS)

    2007-01-01

    There was a transit of the Moon across the face of the Sun - but it could not be seen from Earth. This sight was visible only from the STEREO-B spacecraft in its orbit about the Sun, trailing behind the Earth. NASA's STEREO mission consists of two spacecraft launched in October 2006 to study solar storms. The transit started at 1:56 am EST and continued for 12 hours until 1:57 pm EST. STEREO-B was about 1 million miles from the Earth, 4.4 times farther from the Moon than we are on Earth. As a result, the Moon appeared 4.4 times smaller than what we are used to. This is still, however, much larger than, say, the planet Venus appeared when it transited the Sun as seen from Earth in 2004. This alignment of STEREO-B and the Moon was not just due to luck; it was arranged with a small tweak to STEREO-B's orbit the previous December. The transit is quite useful to STEREO scientists for measuring the focus and the amount of scattered light in the STEREO imagers and for determining the pointing of the STEREO coronagraphs. The Sun as it appears in these images and in each frame of the movie is a composite of nearly simultaneous images in four different wavelengths of extreme ultraviolet light that were separated into color channels and then recombined with some level of transparency for each.

  7. Improved stereo matching applied to digitization of greenhouse plants

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Xu, Lihong; Li, Dawei; Gu, Xiaomeng

    2015-03-01

    The digitization of greenhouse plants is an important aspect of digital agriculture. Its ultimate aim is to reconstruct a visible and interoperable virtual plant model on the computer by using state-of-the-art image processing and computer graphics technologies. The most prominent difficulties in digitizing greenhouse plants are acquiring the three-dimensional shape data of the plants and carrying out a realistic stereo reconstruction. To address these issues, this paper proposes an effective method for the digitization of greenhouse plants using a binocular stereo vision system. Stereo vision is a technique for inferring depth information from two or more cameras; it consists of four parts: calibration of the cameras, stereo rectification, stereo correspondence search, and triangulation. Through the final triangulation procedure, the 3D point cloud of the plant can be obtained. The proposed stereo vision system can facilitate further segmentation of plant organs such as stems and leaves; moreover, it can provide reliable digital samples for the visualization of greenhouse tomato plants.
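
    The four-part pipeline listed above maps closely onto standard OpenCV calls; a hedged outline, assuming the calibration parameters (K1, D1, K2, D2, R, T) have already been estimated, e.g. with cv2.stereoCalibrate, and that the exact matcher settings differ from the paper's:

      import cv2
      import numpy as np

      def reconstruct(img_l, img_r, K1, D1, K2, D2, R, T):
          """Binocular pipeline: rectify, search correspondences, triangulate.
          Step 1 (camera calibration) is assumed to have been done already."""
          size = (img_l.shape[1], img_l.shape[0])
          # 2) stereo rectification
          R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
          map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
          map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
          rect_l = cv2.remap(img_l, map_l[0], map_l[1], cv2.INTER_LINEAR)
          rect_r = cv2.remap(img_r, map_r[0], map_r[1], cv2.INTER_LINEAR)
          # 3) stereo correspondence search (semi-global block matching)
          sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                       blockSize=5)
          disp = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
          # 4) triangulation: reproject disparities to a 3D point cloud
          return cv2.reprojectImageTo3D(disp, Q)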

  8. Overview of NETL In-House Vision 21 Activities

    SciTech Connect

    Wildman, David J.

    2001-11-06

    The Office of Science and Technology at the National Energy Technology Laboratory conducts research in support of the Department of Energy's Fossil Energy Program. The research is funded through a variety of programs, each focusing on a particular aspect of fossil energy. Since the Vision 21 Concept is based on the Advanced Power System Programs (Integrated Gasification Combined Cycle, Pressurized Fluid Bed, HIPPS, Advanced Turbine Systems, and Fuel Cells), it is not surprising that much of the research supports the Vision 21 Concept. The research is classified and presented according to "enabling technologies" and "supporting technologies" as defined by the Vision 21 Program. Enabling technologies include fuel-flexible gasification, fuel-flexible combustion, hydrogen separation from fuel gas, advanced combustion systems, circulating fluid bed technology, and fuel cells. Supporting technologies include the development of advanced materials, computer simulations, computational fluid dynamics modeling, and advanced environmental control. An overview of Vision 21 related research is presented, emphasizing recent accomplishments and capabilities.

  9. JAVA Stereo Display Toolkit

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It supports both anaglyph and special stereo hardware through the same API (application-program interface), and it can simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes only the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.
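
    The color-anaglyph simulation described above, red band from the left image plus green/blue bands from the right, reduces to a channel swap on RGB arrays. A Python sketch of the idea (the toolkit itself is Java; names are ours):

      import numpy as np

      def color_anaglyph(left_rgb, right_rgb):
          """Simulate color stereo for red/blue glasses: take the red
          channel from the left eye's image and the green/blue channels
          from the right eye's image."""
          out = right_rgb.copy()
          out[..., 0] = left_rgb[..., 0]  # red from left, G/B from right
          return out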

  10. A computational theory of human stereo vision.

    PubMed

    Marr, D; Poggio, T

    1979-05-23

    An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: (1) Each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity; the equivalent filters are one or two octaves wide. (2) Zero-crossings in the filtered images, which roughly correspond to edges, are localized. Positions of the ends of lines and edges are also found. (3) For each mask orientation and size, matching takes place between pairs of zero-crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask's central region. (4) Wide masks can control vergence movements, thus causing small masks to come into correspondence. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2-D sketch. It is shown that this proposal provides a theoretical framework for most existing psychophysical and neurophysiological data about stereopsis. Several critical experimental predictions are also made, for instance about the size of Panum's area under various conditions. The results of such experiments would tell us whether, for example, cooperativity is necessary for the matching process.
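
    Steps (1) and (2), band-pass filtering followed by zero-crossing localization, are easy to sketch using a Laplacian-of-Gaussian filter as a stand-in for the oriented bar masks (an approximation: the theory's masks are oriented and scaled with eccentricity):

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def zero_crossings(image, sigma):
          """Filter with a Laplacian of Gaussian at scale sigma and mark
          pixels where the response changes sign - the matching primitives
          of steps (1) and (2)."""
          resp = gaussian_laplace(image.astype(float), sigma)
          sign = resp > 0
          zc = np.zeros_like(sign)
          zc[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign change
          zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign change
          return zc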

  11. STEREO Sun360 Teaser

    NASA Video Gallery

    For the past 4 years, the two STEREO spacecraft have been moving away from Earth and gaining a more complete picture of the sun. On Feb. 6, 2011, NASA will reveal the first ever images of the entir...

  12. Stereo Measurements from Satellites

    NASA Technical Reports Server (NTRS)

    Adler, R.

    1982-01-01

    The papers in this presentation include: 1) 'Stereographic Observations from Geosynchronous Satellites: An Important New Tool for the Atmospheric Sciences'; 2) 'Thunderstorm Cloud Top Ascent Rates Determined from Stereoscopic Satellite Observations'; 3) 'Artificial Stereo Presentation of Meteorological Data Fields'.

  13. Hybrid-Based Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Ting, H. W.; Jaw, J. J.

    2016-06-01

    Stereo matching that generates accurate and dense disparity maps is an indispensable technique for the 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation to its penalty parameters, a formal way of providing proper penalty estimates is proposed. To this end, the study employs shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are confirmed by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. In addition, an extra penalty parameter Pe is imposed on the energy function of the SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting the values derived from the SGM cost aggregation and from U-SURF matching, providing more reliable estimates in disparity discontinuity areas. Evaluations on the Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potential of the hybrid-based dense stereo matching method.
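
    For reference, the semi-global matching energy that the penalties enter has the standard form below (Hirschmueller's formulation; how the paper's extra edge-pixel penalty Pe is combined with P1 and P2 is our reading of the abstract, not a formula taken from the paper):

      E(D) = \sum_{p} \Big( C(p, D_p)
             + \sum_{q \in N_p} P_1 \, T[\,|D_p - D_q| = 1\,]
             + \sum_{q \in N_p} P_2 \, T[\,|D_p - D_q| > 1\,] \Big)

    where C(p, D_p) is the matching cost of assigning disparity D_p to pixel p, N_p is the neighborhood of p, and T[.] equals 1 when its argument is true and 0 otherwise.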

  14. Active vision task and postural control in healthy, young adults: Synergy and probably not duality.

    PubMed

    Bonnet, Cédrick T; Baudry, Stéphane

    2016-07-01

    In upright stance, individuals sway continuously, and the sway pattern in dual tasks (e.g., a cognitive task performed in upright stance) differs significantly from that observed during a control quiet-stance task. The cognitive approach has generated models (limited attentional resources, U-shaped nonlinear interaction) that explain such patterns in terms of competitive sharing of attentional resources. The objective of the current manuscript was to review these cognitive models in the specific context of visual tasks involving gaze shifts toward precise targets (here called active vision tasks). The selection excluded studies of the effects of early and late stages of life or disease, external perturbations, active vision tasks requiring head and body motions, and combinations of two tasks performed together (e.g., a visual task in addition to mental computation). The selection included studies of healthy, young adults performing control and active - difficult - vision tasks. Of the 174 studies found in the PubMed and Mendeley databases, nine were selected. In these studies, young adults exhibited significantly lower amplitude of body displacement (center of pressure and/or body marker) under active vision tasks than under the control task. Furthermore, the more difficult the active vision tasks were, the better the postural control was. This underscores that postural control during active vision tasks may rely on synergistic relations between the postural and visual systems rather than on competitive or dual relations. In contrast, in the control task, there would not be any synergistic or competitive relations.

  15. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320×240) resolution, or 8 fps at VGA (Video Graphics Array, 640×480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.

  16. Digital stereoscopic photography using StereoData Maker

    NASA Astrophysics Data System (ADS)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of wired USB connections between a pair of Canon cameras and StereoData Maker software for precise synchronization. StereoPhoto Maker software is then used to automatically combine and align the right and left image files to produce a stereo pair. Side-by-side images are saved as pairs and may be viewed using software that converts them into the preferred viewing format at display time. Stereo images may be shared on the internet, displayed on computer monitors or autostereo displays, viewed on high-definition 3D TVs, or projected for a group. Stereo photographers are free to control composition using point-and-shoot settings, or to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision, and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating, and viewing digital stereo photos are presented in a historical and developmental context in this paper.

  17. Multiview stereo and silhouette fusion via minimizing generalized reprojection error

    PubMed Central

    Li, Zhaoxin; Wang, Kuanquan; Jia, Wenyan; Chen, Hsin-Chen; Zuo, Wangmeng; Meng, Deyu; Sun, Mingui

    2014-01-01

    Accurate reconstruction of 3D geometrical shape from a set of calibrated 2D multiview images is an active yet challenging task in computer vision. The existing multiview stereo methods usually perform poorly in recovering deeply concave and thinly protruding structures, and suffer from several common problems like slow convergence, sensitivity to initial conditions, and high memory requirements. To address these issues, we propose a two-phase optimization method for generalized reprojection error minimization (TwGREM), where a generalized framework of reprojection error is proposed to integrate stereo and silhouette cues into a unified energy function. For the minimization of the function, we first introduce a convex relaxation on 3D volumetric grids which can be efficiently solved using variable splitting and Chambolle projection. Then, the resulting surface is parameterized as a triangle mesh and refined using surface evolution to obtain a high-quality 3D reconstruction. Our comparative experiments with several state-of-the-art methods show that the performance of TwGREM based 3D reconstruction is among the highest with respect to accuracy and efficiency, especially for data with smooth texture and sparsely sampled viewpoints. PMID:25558120

  18. A stereo-compound hybrid microscope for combined intracellular and optical recording of invertebrate neural network activity.

    PubMed

    Frost, William N; Wang, Jean; Brandon, Christopher J

    2007-05-15

    Optical recording studies of invertebrate neural networks with voltage-sensitive dyes seldom employ conventional intracellular electrodes. This may in part be due to the traditional reliance on compound microscopes for such work. While such microscopes have high light-gathering power, they do not provide depth of field, making working with sharp electrodes difficult. Here we describe a hybrid microscope design, with switchable compound and stereo objectives, that eases the use of conventional intracellular electrodes in optical recording experiments. We use it, in combination with a voltage-sensitive dye and photodiode array, to identify neurons participating in the swim motor program of the marine mollusk Tritonia. This microscope design should be applicable to optical recording studies in many preparations.

  19. Stereo Orthogonal Axonometric Perspective for the Teaching of Descriptive Geometry

    ERIC Educational Resources Information Center

    Méxas, José Geraldo Franco; Guedes, Karla Bastos; Tavares, Ronaldo da Silva

    2015-01-01

    Purpose: The purpose of this paper is to present the development of a software for stereo visualization of geometric solids, applied to the teaching/learning of Descriptive Geometry. Design/methodology/approach: The paper presents the traditional method commonly used in computer graphic stereoscopic vision (implemented in C language) and the…

  20. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  1. Exploring techniques for vision based human activity recognition: methods, systems, and evaluation.

    PubMed

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-25

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation towards the performance of human activity recognition.

  2. STEREO Mission Design Implementation

    NASA Technical Reports Server (NTRS)

    Guzman, Jose J.; Dunham, David W.; Sharer, Peter J.; Hunt, Jack W.; Ray, J. Courtney; Shapiro, Hongxing S.; Ossing, Daniel A.; Eichstedt, John E.

    2007-01-01

    STEREO (Solar-TErrestrial RElations Observatory) is the third mission in the Solar Terrestrial Probes program (STP) of the National Aeronautics and Space Administration (NASA) Science Mission Directorate Sun-Earth Connection theme. This paper describes the successful implementation (lunar swingby targeting) of the mission following the first phasing orbit to deployment into the heliocentric mission orbits following the two lunar swingbys. The STEREO Project had to make some interesting trajectory decisions in order to exploit opportunities to image a bright comet and an unusual lunar transit across the Sun.

  3. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical, and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve this, we use a new image feature called the 'continuity feature' instead of classical features. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  4. Applications of artificial intelligence 1993: Machine vision and robotics; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993

    SciTech Connect

    Boyer, K.L.; Stark, L.

    1993-01-01

    Various levels of machine vision and robotics are addressed, including object recognition, image feature extraction, active vision, stereo and matching, range image acquisition and analysis, sensor models, motion and path planning, and software environments. Papers are presented on integration of geometric and nongeometric attributes for fast object recognition, a four-degree-of-freedom robot head for active computer vision, shape reconstruction from shading with perspective projection, fast extraction of planar surfaces from range images, and real-time reconstruction and rendering of three-dimensional occupancy maps.

  5. Computer vision: automating DEM generation of active lava flows and domes from photos

    NASA Astrophysics Data System (ADS)

    James, M. R.; Varley, N. R.; Tuffen, H.

    2012-12-01

    Accurate digital elevation models (DEMs) form fundamental data for assessing many volcanic processes. We present a photo-based approach developed within the computer vision community to produce DEMs from a consumer-grade digital camera and freely available software. Two case studies, based on the Volcán de Colima lava dome and the Puyehue Cordón-Caulle obsidian flow, highlight the advantages of the technique in terms of the minimal expertise required, the speed of data acquisition, and the automated processing involved. The reconstruction procedure combines structure-from-motion and multi-view stereo algorithms (SfM-MVS) and can generate dense 3D point clouds (millions of points) from multiple photographs of a scene taken from different positions. Processing is carried out by automated software (e.g. http://blog.neonascent.net/archives/bundler-photogrammetry-package/). SfM-MVS reconstructions are initially un-scaled and un-oriented, so additional geo-referencing software has been developed. Although this step requires the presence of some control points, the SfM-MVS approach has significantly easier image acquisition and control requirements than traditional photogrammetry, facilitating its use in a broad range of difficult environments. At Colima, the lava dome surface was reconstructed from recent and archive images taken from light aircraft overflights (2007-2011). Scaling and geo-referencing were carried out using features identified in web-sourced ortho-imagery obtained as a basemap layer in ArcMap - no ground-based measurements were required. Average surface measurement densities are typically 10-40 points per m². Over mean viewing distances of ~500-2500 m (for different surveys), the RMS error on the control features is ~1.5 m. The derived DEMs (with 1-m grid resolution) are sufficient to quantify volumetric change, as well as to highlight the structural evolution of the upper surface of the dome following an explosion in June 2011. At Puyehue Cord

  6. Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing.

    PubMed

    Choi, Wonil; Henderson, John M

    2015-08-01

    Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, were also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network.

  7. Neural correlates of active vision: An fMRI comparison of natural reading and scene viewing.

    PubMed

    Choi, Wonil; Henderson, John M

    2015-08-01

    Theories of eye movement control during active vision tasks such as reading and scene viewing have primarily been developed and tested using data from eye tracking and computational modeling, and little is currently known about the neurocognition of active vision. The current fMRI study was conducted to examine the nature of the cortical networks that are associated with active vision. Subjects were asked to read passages for meaning and view photographs of scenes for a later memory test. The eye movement control network comprising frontal eye field (FEF), supplementary eye fields (SEF), and intraparietal sulcus (IPS), commonly activated during single-saccade eye movement tasks, were also involved in reading and scene viewing, suggesting that a common control network is engaged when eye movements are executed. However, the activated locus of the FEF varied across the two tasks, with medial FEF more activated in scene viewing relative to passage reading and lateral FEF more activated in reading than scene viewing. The results suggest that eye movements during active vision are associated with both domain-general and domain-specific components of the eye movement control network. PMID:26026255

  8. An Inquiry-Based Vision Science Activity for Graduate Students and Postdoctoral Research Scientists

    NASA Astrophysics Data System (ADS)

    Putnam, N. M.; Maness, H. L.; Rossi, E. A.; Hunter, J. J.

    2010-12-01

    The vision science activity was originally designed for the 2007 Center for Adaptive Optics (CfAO) Summer School. Participants were graduate students, postdoctoral researchers, and professionals studying the basics of adaptive optics. The majority were working in fields outside vision science, mainly astronomy and engineering. The primary goal of the activity was to give participants first-hand experience with the use of a wavefront sensor designed for clinical measurement of the aberrations of the human eye and to demonstrate how the resulting wavefront data generated from these measurements can be used to assess optical quality. A secondary goal was to examine the role wavefront measurements play in the investigation of vision-related scientific questions. In 2008, the activity was expanded to include a new section emphasizing defocus and astigmatism and vision testing/correction in a broad sense. As many of the participants were future post-secondary educators, a final goal of the activity was to highlight the inquiry-based approach as a distinct and effective alternative to traditional laboratory exercises. Participants worked in groups throughout the activity and formative assessment by a facilitator (instructor) was used to ensure that participants made progress toward the content goals. At the close of the activity, participants gave short presentations about their work to the whole group, the major points of which were referenced in a facilitator-led synthesis lecture. We discuss highlights and limitations of the vision science activity in its current format (2008 and 2009 summer schools) and make recommendations for its improvement and adaptation to different audiences.

  9. Northern Sinus Meridiani Stereo

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-341, 25 April 2003

    This is a stereo (3-d anaglyph) composite of Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle images of northern Sinus Meridiani near 2°N, 0°W. The light-toned materials at the south (bottom) end of the picture are considered to be thick (100-200 meters; 300-600 ft) exposures of sedimentary rock. Several ancient meteor impact craters are being exhumed from within these layered materials. To view in stereo, use '3-d' glasses with red over the left eye, and blue over the right. The picture covers an area approximately 113 km (70 mi) wide; north is up.

  10. Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor); Glasgow, Thomas K. (Inventor)

    1999-01-01

    A system and a method for measuring three-dimensional velocities at a plurality of points in a fluid, employing at least two cameras positioned approximately perpendicular to one another. The cameras are calibrated to accurately represent image coordinates in a world coordinate system. The two-dimensional views of the cameras are recorded for image processing and centroid coordinate determination. Any overlapping particle clusters are decomposed into constituent centroids. The tracer particles are tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time, so that velocities can be measured therefrom. The stereo imaging velocimetry technique of the present invention provides a full-field, quantitative, three-dimensional map of any optically transparent fluid which is seeded with tracer particles.
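
    Once matched centroids are available in the two calibrated views, the three-dimensional location follows from linear triangulation; a minimal direct-linear-transform (DLT) sketch with hypothetical inputs (not the patent's exact formulation):

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Linear (DLT) triangulation: given the 3x4 projection matrices
          of two roughly perpendicular cameras and one matched centroid
          (u, v) in each view, solve for the world point in a least-squares
          sense. Velocities then follow by differencing the triangulated
          positions of each tracked particle across frames."""
          A = np.stack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]  # dehomogenize to (x, y, z)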

  11. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit, improving both the accuracy and robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.

  12. High Resolution Stereo Camera (HRSC) on Mars Express - a decade of PR/EO activities at Freie Universität Berlin

    NASA Astrophysics Data System (ADS)

    Balthasar, Heike; Dumke, Alexander; van Gasselt, Stephan; Gross, Christoph; Michael, Gregory; Musiol, Stefanie; Neu, Dominik; Platz, Thomas; Rosenberg, Heike; Schreiner, Björn; Walter, Sebastian

    2014-05-01

    Since 2003, the High Resolution Stereo Camera (HRSC) experiment on the Mars Express mission has been in orbit around Mars. First images were sent to Earth on January 14th, 2004. The goal-oriented dissemination of HRSC data and the transparent presentation of the associated work and results are the main aspects that have contributed to the success of the experiment in the public perception. The Planetary Sciences and Remote Sensing Group at Freie Universität Berlin (FUB) offers both interactive web-based data access and browse/download options for HRSC press products [www.fu-berlin.de/planets]. Close collaboration with exhibitors as well as print and digital media representatives allows for regular and directed dissemination of, e.g., conventional imagery, orbital/synthetic surface epipolar images, video footage, and high-resolution displays. On a monthly basis we prepare press releases in close collaboration with the European Space Agency (ESA) and the German Aerospace Center (DLR) [http://www.geo.fu-berlin.de/en/geol/fachrichtungen/planet/press/index.html]. A release comprises panchromatic, colour, anaglyph, and perspective views of a scene taken from an HRSC image of the Martian surface. In addition, a context map and descriptive texts in English and German are provided. More sophisticated press releases include elaborate animations and simulated flights over the Martian surface, perspective views of stereo data combined with colour and high resolution, mosaics, and perspective views of data mosaics. Altogether, 970 high-quality PR products and 15 movies were created at FUB during the last decade and published via FUB/DLR/ESA platforms. We support educational outreach events, as well as permanent and special exhibitions. Examples are the yearly "Science Fair", where special programs for kids are offered, and the exhibition "Mars Mission and Vision", which is on tour until 2015 through 20 German towns, showing 3-D movies, surface models, and images of the HRSC

  13. Usability of car stereo.

    PubMed

    Razza, Bruno Montanari; Paschoarelli, Luis Carlos

    2012-01-01

    Automotive sound systems vary widely in terms of functions and manner of use between different brands and models, which can create difficulties and a lack of consistency for the user. This study aimed to analyze the usability of car stereos commonly found on the market. Four products were analyzed by task analysis and after-use reports, and the results indicate serious usability issues with respect to the form of operation, organization, clarity and quality of information, and visibility and readability, among others.

  14. Poor Vision, Functioning, and Depressive Symptoms: A Test of the Activity Restriction Model

    ERIC Educational Resources Information Center

    Bookwala, Jamila; Lawson, Brendan

    2011-01-01

    Purpose: This study tested the applicability of the activity restriction model of depressed affect to the context of poor vision in late life. This model hypothesizes that late-life stressors contribute to poorer mental health not only directly but also indirectly by restricting routine everyday functioning. Method: We used data from a national…

  15. The influence of active vision on the exoskeleton of intelligent agents

    NASA Astrophysics Data System (ADS)

    Smith, Patrice; Terry, Theodore B.

    2016-04-01

    Chameleonization occurs when a self-learning autonomous mobile system's (SLAMR) active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. An intelligent agent's ability to adapt to its environment and exhibit the key survivability characteristics of that environment would be due largely to the use of active vision. Active vision would allow the intelligent agent to scan its environment and adapt as needed in order to avoid detection. The SLAMR system would have an exoskeleton that changes based on the surface it is perched on; this is known as the "chameleon effect" - not in the common sense of the term, but in the techno-bio-inspired meaning addressed in our previous paper. Active vision, utilizing stereoscopic color-sensing functionality, would enable the intelligent agent to scan an object within close proximity, determine its color scheme, and match it, allowing the agent to blend with its environment. Through the use of its optical capabilities, the SLAMR system would be able to further determine its position, taking into account the spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects, using edge detection, image filtering, and feature extraction, are essential for an intelligent agent to gain additional knowledge about its environmental surroundings.

  16. Asynchronous event-based binocular stereo matching.

    PubMed

    Rogister, Paul; Benosman, Ryad; Ieng, Sio-Hoi; Lichtsteiner, Patrick; Delbruck, Tobi

    2012-02-01

    We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation in addressing the stereo-matching problem on moving objects. Using the high temporal resolution of the data stream acquired from the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events. PMID:24808513
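
    The core matching rule, pairing events whose timestamps nearly coincide and whose pixels satisfy the epipolar constraint, can be sketched as follows (data layout, thresholds, and the brute-force search are illustrative assumptions, not the paper's implementation):

      import numpy as np

      def match_events(ev_left, ev_right, F, dt_max=1e-3, ep_max=1.0):
          """Pair left/right retina events by (1) timestamp proximity and
          (2) distance to the epipolar line induced by the fundamental
          matrix F. Each event is a (t, x, y, polarity) tuple."""
          matches = []
          for t1, x1, y1, p1 in ev_left:
              for t2, x2, y2, p2 in ev_right:
                  if abs(t1 - t2) > dt_max or p1 != p2:
                      continue
                  l = F @ np.array([x1, y1, 1.0])  # epipolar line in right view
                  d = abs(l @ np.array([x2, y2, 1.0])) / np.hypot(l[0], l[1])
                  if d < ep_max:
                      matches.append(((x1, y1), (x2, y2)))
          return matches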

  17. Stereo Imaging Miniature Endoscope

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Manohara, Harish; White, Victor; Shcheglov, Kirill V.; Shahinian, Hrayr

    2011-01-01

    Stereo imaging requires two different perspectives of the same object and, traditionally, a pair of side-by-side cameras would be used but are not feasible for something as tiny as a less than 4-mm-diameter endoscope that could be used for minimally invasive surgeries or geoexploration through tiny fissures or bores. The proposed solution here is to employ a single lens, and a pair of conjugated, multiple-bandpass filters (CMBFs) to separate stereo images. When a CMBF is placed in front of each of the stereo channels, only one wavelength of the visible spectrum that falls within the passbands of the CMBF is transmitted through at a time when illuminated. Because the passbands are conjugated, only one of the two channels will see a particular wavelength. These time-multiplexed images are then mixed and reconstructed to display as stereo images. The basic principle of stereo imaging involves an object that is illuminated at specific wavelengths, and a range of illumination wavelengths is time multiplexed. The light reflected from the object selectively passes through one of the two CMBFs integrated with two pupils separated by a baseline distance, and is focused onto the imaging plane through an objective lens. The passband range of CMBFs and the illumination wavelengths are synchronized such that each of the CMBFs allows transmission of only the alternate illumination wavelength bands. And the transmission bandwidths of CMBFs are complementary to each other, so that when one transmits, the other one blocks. This can be clearly understood if the wavelength bands are divided broadly into red, green, and blue, then the illumination wavelengths contain two bands in red (R1, R2), two bands in green (G1, G2), and two bands in blue (B1, B2). Therefore, when the objective is illuminated by R1, the reflected light enters through only the left-CMBF as the R1 band corresponds to the transmission window of the left CMBF at the left pupil. This is blocked by the right CMBF. The

  18. Stereo Matching by Filtering-Based Disparity Propagation.

    PubMed

    Wang, Xingzheng; Tian, Yushi; Wang, Haoqian; Zhang, Yongbing

    2016-01-01

    Stereo matching is essential and fundamental in computer vision tasks. In this paper, a novel stereo matching algorithm based on disparity propagation using edge-aware filtering is proposed. By extracting disparity subsets for reliable points and customizing the cost volume, the initial disparity map is refined through filtering-based disparity propagation. Then, an edge-aware filter with low computational complexity is adopted to formulate the cost volume, which makes the proposed method independent of the local window size. Experimental results demonstrate the effectiveness of the proposed scheme. Bad pixels in our output disparity map are considerably decreased. The proposed method greatly outperforms the adaptive support-weight approach and other conventional window-based local stereo matching algorithms. PMID:27626800
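
    The abstract's pipeline (reliable seed points, then edge-aware propagation) can be illustrated with a much simpler filter than the paper's cost-volume formulation. A sketch in the same spirit, with Gaussian guide-image weights standing in for the edge-aware filter; all parameters are assumptions:

    ```python
    import numpy as np

    def propagate_disparity(guide, sparse_disp, valid, sigma_c=10.0, radius=7):
        """Fill a disparity map from reliable seed points.

        guide: H x W grayscale guide image supplying edge information.
        sparse_disp: H x W disparities, meaningful only where valid.
        valid: H x W boolean mask of reliable points.
        Unknown pixels take a weighted mean of nearby reliable disparities;
        weights decay with guide-image difference, so values do not leak
        across intensity edges.
        """
        H, W = guide.shape
        out = sparse_disp.astype(np.float64).copy()
        for y in range(H):
            for x in range(W):
                if valid[y, x]:
                    continue
                ys, ye = max(0, y - radius), min(H, y + radius + 1)
                xs, xe = max(0, x - radius), min(W, x + radius + 1)
                g = guide[ys:ye, xs:xe].astype(np.float64)
                m = valid[ys:ye, xs:xe]
                w = np.exp(-(g - float(guide[y, x])) ** 2 / (2 * sigma_c ** 2)) * m
                if w.sum() > 0:
                    out[y, x] = (w * sparse_disp[ys:ye, xs:xe]).sum() / w.sum()
        return out
    ```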

  20. STEREO - The Sun from Two Points of View

    NASA Technical Reports Server (NTRS)

    Kucera, Therese A.

    2010-01-01

    NASA's STEREO (Solar TErrestrial RElations Observatory) mission continues its investigations into the three-dimensional structure of the sun and heliosphere. With the recent increases in solar activity, STEREO is yielding new results obtained using the mission's full array of imaging and in-situ instrumentation, and in February 2011 the two spacecraft will be 180 degrees apart, allowing us to directly image the entire solar disk for the first time. We will discuss the latest results from STEREO and how they change our view of solar activity and its effects on our solar system.

  1. Opportunity's View, Sol 958 (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA01897

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA01897

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo view of the rover's surroundings on the 958th sol, or Martian day, of its surface mission (Oct. 4, 2006).

    This view is presented as a cylindrical-perspective projection with geometric seam correction. The image appears three-dimensional when viewed through red-green stereo glasses.

  2. Visions of the Future. Social Science Activities Text. Teacher's Edition.

    ERIC Educational Resources Information Center

    Melnick, Rob; Ronan, Bernard

    Intended to put both national and global issues into perspective and help students make decisions about their futures, this teacher's edition provides instructional objectives, ideas for discussion and inquiries, test blanks for each section, and answer keys for the 22 activities provided in the accompanying student text. Designed to provide high…

  3. Hearing symptoms personal stereos

    PubMed Central

    da Luz, Tiara Santos; Borja, Ana Lúcia Vieira de Freitas

    2012-01-01

    Introduction: Practical and portable, personal stereos have become almost indispensable everyday accessories. Studies reveal that portable music players can cause long-term hearing damage in people who listen to music at high volume for extended periods. Objective: To determine the prevalence of auditory symptoms in users of personal stereos and to document their habits of use. Method: A prospective, observational, cross-sectional study carried out in three educational institutions in the city of Salvador, BA, two from the public system and one private. A questionnaire was answered by 400 students of both sexes, aged between 14 and 30 years, who reported the habit of using personal stereos. Results: The most prevalent symptoms were hyperacusis (43.5%), aural fullness (30.5%) and tinnitus (27.5%), tinnitus being the most frequent symptom among the youngest participants. As for daily habits: 62.3% reported frequent use, 57% listened at high intensities, and 34% listened for prolonged periods. An inverse relation was found between exposure time and age band (p = 0.000), and a direct relation with the prevalence of tinnitus. Conclusion: Although they admit to knowing the damage that exposure to high-intensity sound can cause to hearing, the daily habits of these young people show inadequate use of portable stereos, characterized by long periods of exposure, high intensities, frequent use, and a preference for insert earphones. The high prevalence of symptoms after use suggests an increased risk to the hearing of these young people. PMID:25991931

  4. Stereo imaging in astronomy with ultralong baseline interferometry

    NASA Astrophysics Data System (ADS)

    Ray, Alak

    2015-08-01

    Astronomical images recorded on two-dimensional detectors do not give depth information, even for extended objects. Three-dimensional (3D) reconstruction of such objects, e.g. supernova remnants (SNRs), is based on Doppler velocity measurements across the image, assuming a position-velocity correspondence about the explosion center. Stereo imaging of astronomical objects, when possible, directly yields 3D structures independently of this assumption, which will advance our understanding of their evolution and origins and allow comparison with model simulations. The large distance to astronomical objects and the relatively small attainable stereo baselines mean that the two views of the scene (the stereo image pair) differ by a very small angle, requiring very high-resolution imaging. Interferometry in the radio, mm, and shorter wavelengths with interplanetary baselines will be required to match these requirements. Using the Earth's orbital diameter as the stereo base for images constructed six months apart, as in parallax measurements, with very high resolution telescope arrays may achieve these goals. Apart from the challenges of space-based interferometry and refractive variations of the intervening medium, issues of camera calibration, triangulation in the presence of realistic noise, and image texture recognition and enhancement that are commonly faced in the field of Computer Vision have to be successfully addressed for stereo imaging in astronomy.
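
    A back-of-the-envelope calculation (illustrative numbers of our own, not from the paper) makes the scale of the problem concrete: the stereo angle subtended by an Earth-orbit baseline at a nearby supernova remnant.

    ```python
    import math

    AU_M = 1.495978707e11          # astronomical unit, meters
    PC_M = 3.0856775815e16         # parsec, meters

    baseline = 2 * AU_M            # Earth's orbital diameter (images 6 months apart)
    distance = 2000 * PC_M         # an SNR at ~2 kpc (illustrative)

    angle_rad = baseline / distance              # small-angle stereo parallax
    angle_mas = math.degrees(angle_rad) * 3600e3
    print(f"stereo angle ~ {angle_mas:.2f} milliarcseconds")
    # ~1 mas: seeing depth structure *within* the remnant requires resolving
    # far finer detail still, hence the call for interplanetary baselines.
    ```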

  5. Vision guided automatic measuring in coordinate metrology

    NASA Astrophysics Data System (ADS)

    Qin, Yuhong; Wang, Lei; Xie, Lusheng; Huang, Yuanqing

    2008-12-01

    A novel automatic measurement-planning method for coordinate metrology based on computer vision is presented in this paper. An active stereo vision system is established by attaching a CCD camera to the mechanical probe of the coordinate measuring machine (CMM). Through the movement of the probe of the CMM, and hence of the camera, 3D edge features of the object can be acquired, which are used as clues for automatic coordinate measuring. A multi-baseline matching method is presented to overcome the ambiguity in stereo matching, and quadratic interpolation is used in sub-pixel matching to obtain a continuous depth image. The matching is done only on feature edges in the images, so it is much faster and more robust. Two methods of measurement path planning are put forward. In the first, a 2D characteristic edge image, whose edges often correspond to rapid changes in the depth or curvature of the object surface, is acquired by projecting the 3D edge features onto a scanning plane, and the sampling points of the mechanical probe are then selected from this edge image. In the second, surface patches are fitted to the 3D edges, and the sampling grid is determined by the type and area of each patch. Using these techniques, a highly automated, high-speed, high-precision 3D coordinate acquisition system based on multiple-sensor integration can be developed. It has potential applications in manufacturing problems such as metrology, inspection, and reverse engineering.
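
    The quadratic sub-pixel interpolation mentioned above is normally a parabola fit through the matching cost at the best integer disparity and its two neighbors; a minimal version of that standard step (the numbers are placeholders):

    ```python
    def subpixel_disparity(d, c_prev, c_best, c_next):
        """Refine integer disparity d from costs at d-1, d, d+1.

        Fits a parabola through the three costs and returns the abscissa
        of its minimum, d + offset with offset in (-0.5, 0.5).
        """
        denom = c_prev - 2.0 * c_best + c_next
        if denom == 0.0:
            return float(d)      # flat cost curve: no refinement possible
        return d + 0.5 * (c_prev - c_next) / denom

    # Example: costs 4.0, 1.0, 2.0 around d = 12 pull the minimum
    # toward the cheaper neighbor at d + 1.
    print(subpixel_disparity(12, 4.0, 1.0, 2.0))  # 12.25
    ```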

  6. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vertrone, A. V.; Lewis, B. H.; Martin, M. D.

    1982-01-01

    The extremely long missions of the two Viking Orbiter spacecraft produced a wealth of photos of surface features. Many of these photos can be used to form stereo images, allowing the student of Mars to examine a subject in three dimensions. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set.

  7. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform and describe the person tracking and gesture tracking systems

  8. Calculation of utmost parameters of active vision system based on nonscanning thermal imager

    NASA Astrophysics Data System (ADS)

    Sviridov, A. N.

    2003-09-01

    An active vision system (AVS) based on a non-scanning thermal imager (TI) and a CO2 quantum image amplifier is proposed. A mathematical model of the AVS is developed, within which the utmost signal-to-noise values and other system parameters are investigated as functions of the distance to the scene (the area of observation, AO), the illumination pulse energy (W), the amplification factor (K) of the quantum amplifier, the objective lens characteristics, the spectral bandwidth of the cooled filter of the thermal imager, and the object and scene characteristics. Calculations were carried out for the following possible operating modes of the discussed vision system: an active mode of the thermal imager with a cooled wideband filter; an active mode of the thermal imager with a cooled narrowband filter; and a passive mode (W = 0, K = 1) of the thermal imager with a cooled wideband filter. The research shows the feasibility and expediency of designing an AVS comprising a non-scanning thermal imager, a pulsed CO2 quantum image amplifier and a pulsed CO2 illumination laser. It is shown that an AVS has advantages over thermal imaging when observing objects whose temperature and reflection factors differ only slightly from those of the scene. Depending on the product of W and K, an AVS can detect, at distances of up to 3000-5000 m, practically any local changes of interest in the reflection factor. An AVS does not replace thermal imaging but provides additional information about the observed objects. The images obtained with an AVS are more natural and more easily identified than thermal images produced by the objects' own radiation. For quantitative determination of the utmost sensitivity of an AVS, a new parameter is proposed: NERD, the 'radiation noise equivalent reflection factor difference'. IR active vision systems, as well as human vision and vision systems in the near-IR range based on image intensifiers

  9. Vision based assistive technology for people with dementia performing activities of daily living (ADLs): an overview

    NASA Astrophysics Data System (ADS)

    As'ari, M. A.; Sheikh, U. U.

    2012-04-01

    The rapid development of intelligent assistive technology for replacing a human caregiver in assisting people with dementia performing activities of daily living (ADLs) promises a reduction in care cost, especially in training and hiring human caregivers. The main problem, however, is the variety of sensing agents used in such systems, which depends on the intent (types of ADLs) and the environment where the activity is performed. In this paper we present an overview of the potential of computer vision based sensing agents in assistive systems and how they can be generalized to be invariant to various kinds of ADLs and environments. We find that a gap exists between existing vision based human action recognition methods and the design of such systems, owing to the cognitive and physical impairments of people with dementia.

  10. FIRST THREE-DIMENSIONAL RECONSTRUCTIONS OF CORONAL LOOPS WITH THE STEREO A+B SPACECRAFT. III. INSTANT STEREOSCOPIC TOMOGRAPHY OF ACTIVE REGIONS

    SciTech Connect

    Aschwanden, Markus J.; Wuelser, Jean-Pierre; Nitta, Nariaki V.; Lemen, James R.; Sandman, Anne

    2009-04-10

    Here we develop a novel three-dimensional (3D) reconstruction method of the coronal plasma of an active region by combining stereoscopic triangulation of loops with density and temperature modeling of coronal loops with a filling factor equivalent to tomographic volume rendering. Because this method requires only a stereoscopic image pair in multiple temperature filters, which are sampled within ~1 minute with the recent STEREO/EUVI instrument, this method is about four orders of magnitude faster than conventional solar rotation-based tomography. We reconstruct the 3D density and temperature distribution of active region NOAA 10955 by stereoscopic triangulation of 70 loops, which are used as a skeleton for a 3D field interpolation of some 7000 loop components, leading to a 3D model that reproduces the observed fluxes in each stereoscopic image pair with an accuracy of a few percent (of the average flux) in each pixel. With the stereoscopic tomography we also infer a differential emission measure distribution over the entire temperature range of T ~ 10^4-10^7 K, with predictions for the transition region and hotter corona in soft X-rays. The tomographic 3D model also provides large statistics of physical parameters. We find that the extreme-ultraviolet loops with apex temperatures of T_m <~ 3.0 MK tend to be super-hydrostatic, while hotter loops with T_m ~ 4-7 MK are near-hydrostatic. The new 3D reconstruction model is fully independent of any magnetic field data and is promising for future tests of theoretical magnetic field models and coronal heating models.

  11. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12.800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  12. Stereo transparency and the disparity gradient limit

    NASA Technical Reports Server (NTRS)

    McKee, Suzanne P.; Verghese, Preeti

    2002-01-01

    Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
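
    The quantity at stake is simple to compute. A worked example for the paired-dot stimuli described above, using the abstract's +/-6 arcmin disparities (the vertical separations are illustrative):

    ```python
    # Disparity gradient = disparity difference / separation between points.
    disparity_difference = 6.0 - (-6.0)   # 12 arcmin between paired dots

    for separation in (24.0, 12.0, 4.0):  # vertical separation, arcmin
        gradient = disparity_difference / separation
        print(f"separation {separation:4.0f}'  ->  gradient {gradient:.1f}")
    # Gradients of 0.5, 1.0 and 3.0: fusion is said to fail above ~1,
    # yet transparency is reported across this whole range.
    ```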

  13. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design of an active vision system for intelligent robot application purposes. The system has the degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets, using an evaluation function representing human visual responses to outside stimuli, is suggested. We also characterize different visual tasks in the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
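
    A minimal sketch of phase-based disparity estimation in the spirit of the vergence method above (the paper works on binarized images; here a single row pair and classic phase correlation via the cross-power spectrum, which is an assumption about the specific phase technique):

    ```python
    import numpy as np

    def phase_disparity(row_left, row_right):
        """Estimate the horizontal shift between two image rows from the
        phase of their cross-power spectrum (phase correlation)."""
        L = np.fft.fft(row_left)
        R = np.fft.fft(row_right)
        cross = L * np.conj(R)
        cross /= np.abs(cross) + 1e-12       # keep phase only
        corr = np.real(np.fft.ifft(cross))   # peak lies at the shift
        shift = int(np.argmax(corr))
        n = len(row_left)
        return shift if shift <= n // 2 else shift - n

    # Synthetic test: a pattern shifted by 5 pixels.
    x = np.random.rand(256)
    print(phase_disparity(np.roll(x, 5), x))  # -> 5, the disparity to verge out
    ```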

  14. Versatile transformations of hydrocarbons in anaerobic bacteria: substrate ranges and regio- and stereo-chemistry of activation reactions†

    PubMed Central

    Jarling, René; Kühner, Simon; Basílio Janke, Eline; Gruner, Andrea; Drozdowska, Marta; Golding, Bernard T.; Rabus, Ralf; Wilkes, Heinz

    2015-01-01

    Anaerobic metabolism of hydrocarbons proceeds either via addition to fumarate or by hydroxylation in various microorganisms, e.g., sulfate-reducing or denitrifying bacteria, which are specialized in utilizing n-alkanes or alkylbenzenes as growth substrates. General pathways for carbon assimilation and energy gain have been elucidated for a limited number of possible substrates. In this work the metabolic activity of 11 bacterial strains during anaerobic growth with crude oil was investigated and compared with the metabolite patterns appearing during anaerobic growth with more than 40 different hydrocarbons supplied as binary mixtures. We show that the range of co-metabolically formed alkyl- and arylalkyl-succinates is much broader in n-alkane than in alkylbenzene utilizers. The structures and stereochemistry of these products are resolved. Furthermore, we demonstrate that anaerobic hydroxylation of alkylbenzenes does not only occur in denitrifiers but also in sulfate reducers. We propose that these processes play a role in detoxification under conditions of solvent stress. The thermophilic sulfate-reducing strain TD3 is shown to produce n-alkylsuccinates, which are suggested not to derive from terminal activation of n-alkanes, but rather to represent intermediates of a metabolic pathway short-cutting fumarate regeneration by reverse action of succinate synthase. The outcomes of this study provide a basis for geochemically tracing such processes in natural habitats and contribute to an improved understanding of microbial activity in hydrocarbon-rich anoxic environments. PMID:26441848

  15. Opportunity's View, Sol 959, (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA01893

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA01893

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo view of the rover's surroundings on sol (or Martian day) 959 of its surface mission.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  16. Recent STEREO Observations of Coronal Mass Ejections

    NASA Technical Reports Server (NTRS)

    SaintCyr, Chris Orville; Xie, Hong; Mays, Mona Leila; Davila, Joseph M.; Gilbert, Holly R.; Jones, Shaela I.; Pesnell, William Dean; Gopalswamy, Nat; Gurman, Joseph B.; Yashiro, Seiji; Wuelser, Jean-Pierre; Howard, Russell A.; Thompson, Barbara J.; Thompson, William T.

    2008-01-01

    Over 400 CMEs have been observed by STEREO SECCHI COR1 during the mission's three year duration (2006-2009). Many of the solar activity indicators have been at minimal values over this period, and the Carrington rotation-averaged CME rate has been comparable to that measured during the minima between Cycle 21-22 (SMM C/P) and Cycle 22-23 (SOHO LASCO). That rate is about 0.5 CMEs/day. During the current solar minimum (leading to Cycle 24), there have been entire Carrington rotations where no sunspots were detected and the daily values of the 2800 MHz solar flux remained below 70 sfu. CMEs continued to be detected during these exceptionally quiet periods, indicating that active regions are not necessary to the generation of at least a portion of the CME population. In the past, researchers were limited to a single view of the Sun and could conclude that activity on the unseen portion of the disk might be associated with CMEs. But as the STEREO mission has progressed we have been able to observe an increasing fraction of the Sun's corona with STEREO SECCHI EUVI and were able to eliminate this possibility. Here we report on the nature of CMEs detected during these exceptionally quiet periods, and we speculate on how the corona remains dynamic during such conditions.

  17. CONDOR Advanced Visionics System

    NASA Astrophysics Data System (ADS)

    Kanahele, David L.; Buckanin, Robert M.

    1996-06-01

    The Covert Night/Day Operations for Rotorcraft (CONDOR) program is a collaborative research and development program between the governments of the United States and the United Kingdom of Great Britain and Northern Ireland to develop and demonstrate an advanced visionics concept coupled with an advanced flight control system to improve rotorcraft mission effectiveness during day, night, and adverse weather conditions in the Nap-of-the-Earth environment. The Advanced Visionics System for CONDOR is the flight-ruggedized head-mounted display and computer graphics generator with the intended use of exploring, developing, and evaluating proposed visionic concepts for rotorcraft, including: the application of color displays, wide field-of-view, enhanced imagery, virtual displays, mission symbology, stereo imagery, and other graphical interfaces.

  18. A tactile vision substitution system for the study of active sensing.

    PubMed

    Hsu, Brian; Hsieh, Cheng-Han; Yu, Sung-Nien; Ahissar, Ehud; Arieli, Amos; Zilbershtain-Kra, Yael

    2013-01-01

    This paper presents a tactile vision substitution system (TVSS) for the study of active sensing. Two algorithms, namely image processing and trajectory tracking, were developed to enhance the capability of conventional TVSS. Image processing techniques were applied to reduce artifacts and extract important features from the active camera's images, and to convert this information effectively into tactile stimuli of much lower resolution. A fixed camera was used to record the movement of the active camera, and a trajectory tracking algorithm was developed to analyze the active sensing strategies that TVSS users apply to explore the environment. The image processing subsystem showed clear improvement in extracting object features for superior recognition. The trajectory tracking subsystem, on the other hand, made it possible to accurately locate the portion of the scene pointed at by the active camera, providing valuable information for the study of the active sensing strategies applied by TVSS users.
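
    A simplified sketch of the image-to-tactile conversion stage, assuming a grayscale frame and a coarse on/off tactile array; the finite-difference edge step merely stands in for the paper's (unspecified) feature extraction:

    ```python
    import numpy as np

    def image_to_tactile(frame, rows=10, cols=10, threshold=30.0):
        """Convert a grayscale frame into a low-resolution tactile pattern.

        Edges are extracted with finite differences, pooled into a coarse
        rows x cols grid, and thresholded into on/off stimuli.
        """
        f = frame.astype(np.float64)
        gx = np.abs(np.diff(f, axis=1, prepend=f[:, :1]))
        gy = np.abs(np.diff(f, axis=0, prepend=f[:1, :]))
        edges = gx + gy
        H, W = edges.shape
        tactile = np.zeros((rows, cols), dtype=bool)
        for i in range(rows):
            for j in range(cols):
                cell = edges[i * H // rows:(i + 1) * H // rows,
                             j * W // cols:(j + 1) * W // cols]
                tactile[i, j] = cell.mean() > threshold
        return tactile

    # Stand-in scene: a bright square; its outline maps to active pins.
    frame = np.zeros((120, 160))
    frame[40:80, 60:100] = 255.0
    print(image_to_tactile(frame).astype(int))
    ```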

  19. What is stereoscopic vision good for?

    NASA Astrophysics Data System (ADS)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  20. Retinal Structures and Visual Cortex Activity are Impaired Prior to Clinical Vision Loss in Glaucoma

    PubMed Central

    Murphy, Matthew C.; Conner, Ian P.; Teng, Cindy Y.; Lawrence, Jesse D.; Safiullah, Zaid; Wang, Bo; Bilonick, Richard A.; Kim, Seong-Gi; Wollstein, Gadi; Schuman, Joel S.; Chan, Kevin C.

    2016-01-01

    Glaucoma is the second leading cause of blindness worldwide and its pathogenesis remains unclear. In this study, we measured the structure, metabolism and function of the visual system by optical coherence tomography and multi-modal magnetic resonance imaging in healthy subjects and glaucoma patients with different degrees of vision loss. We found that inner retinal layer thinning, optic nerve cupping and reduced visual cortex activity occurred before patients showed visual field impairment. The primary visual cortex also exhibited more severe functional deficits than higher-order visual brain areas in glaucoma. Within the visual cortex, choline metabolism was perturbed along with increasing disease severity in the eye, optic radiation and visual field. In summary, this study showed evidence that glaucoma deterioration is already present in the eye and the brain before substantial vision loss can be detected clinically using current testing methods. In addition, cortical cholinergic abnormalities are involved during trans-neuronal degeneration and can be detected non-invasively in glaucoma. The current results may have an impact on identifying early glaucoma mechanisms, detecting and monitoring pathophysiological events and eye-brain-behavior relationships, and guiding vision preservation strategies in the visual system, which may help reduce the burden of this irreversible but preventable neurodegenerative disease. PMID:27510406

  2. #3 STEREO - Approaching 360 Degrees

    NASA Video Gallery

    As the STEREO spacecraft have moved out on either side of Earth they have imaged more and more of the Sun's surface. This video shows how our coverage of the Sun has increased. The Sun is shown as ...

  3. Stereoscopic depth perception for robot vision: algorithms and architectures

    SciTech Connect

    Safranek, R.J.; Kak, A.C.

    1983-01-01

    The implementation of depth perception algorithms for computer vision is considered. In automated manufacturing, depth information is vital for tasks such as path planning and 3-d scene analysis. The presentation begins with a survey of computer algorithms for stereoscopic depth perception. The emphasis is on the Marr-Poggio paradigm of human stereo vision and its computer implementation. In addition, a stereo matching algorithm based on the relaxation labelling technique is examined. A computer architecture designed to efficiently implement stereo matching algorithms, an MIMD array interfaced to a global memory, is presented. 9 references.
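
    For reference, the simplest area-based baseline that such algorithms are compared against is plain SSD block matching on rectified images; the sketch below is neither the Marr-Poggio cooperative scheme nor the relaxation-labelling algorithm surveyed above, just the common starting point.

    ```python
    import numpy as np

    def block_match(left, right, max_disp=16, win=2):
        """Dense disparity via sum-of-squared-differences block matching.

        For each left-image window, the best horizontal offset (0..max_disp)
        into the right image is chosen; depth then follows from Z = f*B/d.
        """
        H, W = left.shape
        L, R = left.astype(np.float64), right.astype(np.float64)
        disp = np.zeros((H, W), dtype=np.int32)
        for y in range(win, H - win):
            for x in range(win + max_disp, W - win):
                patch = L[y - win:y + win + 1, x - win:x + win + 1]
                costs = [np.sum((patch - R[y - win:y + win + 1,
                                           x - d - win:x - d + win + 1]) ** 2)
                         for d in range(max_disp + 1)]
                disp[y, x] = int(np.argmin(costs))
        return disp
    ```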

  4. The effect of gender and level of vision on the physical activity level of children and adolescents with visual impairment.

    PubMed

    Aslan, Ummuhan Bas; Calik, Bilge Basakcı; Kitiş, Ali

    2012-01-01

    This study was planned in order to determine the physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between 8 and 16 years participated in the study. The physical activity level of cases was evaluated with a physical activity diary (PAD) and the one-mile run/walk test (OMR-WT). No difference was found between the PAD and the OMR-WT results of low vision and blind children and adolescents. The visually impaired children and adolescents were found not to participate in vigorous physical activity. A difference was found in favor of low vision boys in terms of mild and moderate activities and OMR-WT durations. However, no difference was found between the physical activity levels of blind girls and boys. The results of our study suggested that the physical activity level of visually impaired children and adolescents was low, and that gender affected physical activity in low vision children and adolescents.

  5. The research on binocular stereo video imaging and display system based on low-light CMOS

    NASA Astrophysics Data System (ADS)

    Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong

    2015-10-01

    Low-light night-vision helmets commonly equip a binocular viewer with image intensifiers. Such equipment provides not only night-vision capability but also a sense of stereo vision, achieving better perception and understanding of the visual field. However, since the image intensifier is a direct-view device, it is difficult to apply modern image processing technology to it. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with digital imaging devices. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED microdisplay, and an image processing PCB. Stereopsis is achieved through the binocular OLED microdisplay. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive in detail the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED microdisplay. There is sufficient space for function extensions in our system. The performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, and other techniques.

  6. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, through three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight (ToF) camera, can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  7. Stereo matching based on census transformation of image gradients

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Karras, G.; Petsa, E.

    2015-05-01

    Although multiple-view matching provides certain significant advantages regarding accuracy, occlusion handling and radiometric fidelity, stereo matching remains indispensable for a variety of applications; these involve cases in which image acquisition requires fixed geometry, a limited number of images, or speed. Such instances include robotics, autonomous navigation, reconstruction from a limited number of aerial/satellite images, industrial inspection and augmented reality through smart-phones. As a consequence, stereo matching is a continuously evolving research field with a growing variety of applicable scenarios. In this work a novel multi-purpose cost for stereo matching is proposed, based on a census transformation on image gradients and evaluated within a local matching scheme. It is demonstrated that when the census transformation is applied to gradients, the invariance of the cost function to (non-linear) changes in illumination is significantly strengthened. The calculated cost values are aggregated through adaptive support regions, based both on cross-skeletons and on basic rectangular windows. The matching algorithm is tuned for the parameters in each case. The described matching cost has been evaluated on the Middlebury stereo-vision 2006 datasets, which include changes in illumination and exposure. The tests verify that the census transformation on image gradients indeed results in a more robust cost function, regardless of aggregation strategy.
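
    A compact sketch of the proposed cost: a census transform computed on image gradients rather than intensities, compared by Hamming distance (the window size and the gradient operator are our assumptions):

    ```python
    import numpy as np

    def census_on_gradients(img, win=3):
        """Census-transform the horizontal gradient of a grayscale image.

        Each pixel receives a bit string recording whether each neighbor's
        gradient exceeds its own; comparing gradients rather than raw
        intensities strengthens robustness to non-linear illumination.
        """
        g = np.gradient(img.astype(np.float64), axis=1)
        H, W = g.shape
        r = win // 2
        codes = np.zeros((H, W), dtype=np.uint64)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
                codes = (codes << np.uint64(1)) | (shifted > g).astype(np.uint64)
        return codes

    def hamming_cost(code_a, code_b):
        """Matching cost between two census codes: count of differing bits."""
        return bin(int(code_a) ^ int(code_b)).count("1")
    ```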

  8. The zone of comfort: Predicting visual discomfort with stereo displays

    PubMed Central

    Shibata, Takashi; Kim, Joohwan; Hoffman, David M.; Banks, Martin S.

    2012-01-01

    Recent increased usage of stereo displays has been accompanied by public concern about potential adverse effects associated with prolonged viewing of stereo imagery. There are numerous potential sources of adverse effects, but we focused on how vergence–accommodation conflicts in stereo displays affect visual discomfort and fatigue. In one experiment, we examined the effect of viewing distance on discomfort and fatigue. We found that conflicts of a given dioptric value were slightly less comfortable at far than at near distance. In a second experiment, we examined the effect of the sign of the vergence–accommodation conflict on discomfort and fatigue. We found that negative conflicts (stereo content behind the screen) are less comfortable at far distances and that positive conflicts (content in front of screen) are less comfortable at near distances. In a third experiment, we measured phoria and the zone of clear single binocular vision, which are clinical measurements commonly associated with correcting refractive error. Those measurements predicted susceptibility to discomfort in the first two experiments. We discuss the relevance of these findings for a wide variety of situations including the viewing of mobile devices, desktop displays, television, and cinema. PMID:21778252

  10. Photometric stereo endoscopy

    PubMed Central

    Parot, Vicente; Lim, Daryl; González, Germán; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.

    2013-01-01

    While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging. PMID:23864015
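
    For reference, the core computation behind (full) photometric stereo: with three or more images under known light directions, per-pixel normals follow from a least-squares solve of the Lambertian model I = L n. PSE itself recovers only the high-spatial-frequency topography in an endoscopic geometry; this textbook solve is shown only to fix ideas.

    ```python
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Recover per-pixel unit normals and albedo from K >= 3 images.

        images: K x H x W intensity array; light_dirs: K x 3 unit vectors.
        Solves I = L @ (albedo * n) per pixel in the least-squares sense.
        """
        K, H, W = images.shape
        L = np.asarray(light_dirs, dtype=np.float64)     # K x 3
        I = images.reshape(K, -1).astype(np.float64)     # K x (H*W)
        G = np.linalg.lstsq(L, I, rcond=None)[0]         # 3 x (H*W)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / (albedo + 1e-12)
        return normals.reshape(3, H, W), albedo.reshape(H, W)
    ```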

  11. Using Fuzzy Logic to Enhance Stereo Matching in Multiresolution Images

    PubMed Central

    Medeiros, Marcos D.; Gonçalves, Luiz Marcos G.; Frery, Alejandro C.

    2010-01-01

    Stereo matching is an open problem in Computer Vision, for which local features are extracted to identify corresponding points in pairs of images. The results are heavily dependent on the initial steps. We apply image decomposition into multiresolution levels to reduce the search space, computational time, and errors. We propose a solution to the problem of how deep (coarse) the stereo measures should start, trading off error minimization against time consumption, by starting the stereo calculation at varying resolution levels for each pixel, according to fuzzy decisions. Our heuristic improves the overall execution time since it only employs deeper resolution levels when strictly necessary. It also reduces errors because it measures similarity between windows with enough detail. We also compare our algorithm with a very fast multiresolution approach and one based on fuzzy logic. Our algorithm performs faster and/or better than all those approaches, thus becoming a good candidate for robotic vision applications. We also discuss the system architecture that efficiently implements our solution. PMID:22205859

  12. An Active Vision Approach to Understanding and Improving Visual Training in the Geosciences

    NASA Astrophysics Data System (ADS)

    Voronov, J.; Tarduno, J. A.; Jacobs, R. A.; Pelz, J. B.; Rosen, M. R.

    2009-12-01

    Experience in the field is a fundamental aspect of geologic training, and its effectiveness is largely unchallenged because of anecdotal evidence of its success among expert geologists. However, there have been only a few quantitative studies, based on large data collection efforts, investigating how Earth scientists learn in the field. In a recent collaboration between Earth scientists, cognitive scientists and experts in imaging science at the University of Rochester and the Rochester Institute of Technology, we are conducting such a study. Within Cognitive Science, one school of thought, referred to as the Active Vision approach, emphasizes that visual perception is an active process requiring us to move our eyes to acquire new information about our environment. The Active Vision approach indicates the perceptual skills which experts possess and which novices will need to acquire to achieve expert performance. We describe data collection efforts using portable eye-trackers to assess how novice and expert geologists acquire visual knowledge in the field. We also discuss our efforts to collect images for use in a semi-immersive classroom environment, useful for further testing of novices and experts using eye-tracking technologies.

  13. Defining filled and empty space: reassessing the filled space illusion for active touch and vision.

    PubMed

    Collier, Elizabeth S; Lawson, Rebecca

    2016-09-01

    In the filled space illusion, an extent filled with gratings is estimated as longer than an equivalent extent that is apparently empty. However, researchers do not seem to have carefully considered the terms filled and empty when describing this illusion. Specifically, for active touch, smooth, solid surfaces have typically been used to represent empty space. Thus, it is not known whether comparing gratings to truly empty space (air) during active exploration by touch elicits the same illusionary effect. In Experiments 1 and 2, gratings were estimated as longer if they were compared to smooth, solid surfaces rather than being compared to truly empty space. Consistent with this, Experiment 3 showed that empty space was perceived as longer than solid surfaces when the two were compared directly. Together these results are consistent with the hypothesis that, for touch, the standard filled space illusion only occurs if gratings are compared to smooth, solid surfaces and that it may reverse if gratings are compared to empty space. Finally, Experiment 4 showed that gratings were estimated as longer than both solid and empty extents in vision, so the direction of the filled space illusion in vision was not affected by the nature of the comparator. These results are discussed in relation to the dual nature of active touch. PMID:27233286

  15. Altered Vision-Related Resting-State Activity in Pituitary Adenoma Patients with Visual Damage

    PubMed Central

    Qian, Haiyan; Wang, Xingchao; Wang, Zhongyan; Wang, Zhenmin; Liu, Pinan

    2016-01-01

    Objective To investigate changes of vision-related resting-state activity in pituitary adenoma (PA) patients with visual damage through comparison to healthy controls (HCs). Methods 25 PA patients with visual damage and 25 age- and sex-matched corrected-to-normal-vision HCs underwent a complete neuro-ophthalmologic evaluation, including automated perimetry, fundus examinations, and a magnetic resonance imaging (MRI) protocol, including structural and resting-state fMRI (RS-fMRI) sequences. The regional homogeneity (ReHo) of the vision-related cortex and the functional connectivity (FC) of 6 seeds within the visual cortex (the primary visual cortex (V1), the secondary visual cortex (V2), and the middle temporal visual cortex (MT+)) were evaluated. Two-sample t-tests were conducted to identify the differences between the two groups. Results Compared with the HCs, the PA group exhibited reduced ReHo in the bilateral V1, V2, V3, fusiform, MT+, BA37, thalamus, postcentral gyrus and left precentral gyrus and increased ReHo in the precuneus, prefrontal cortex, posterior cingulate cortex (PCC), anterior cingulate cortex (ACC), insula, supramarginal gyrus (SMG), and putamen. Compared with the HCs, V1, V2, and MT+ in the PAs exhibited decreased FC with the V1, V2, MT+, fusiform, BA37, and increased FC primarily in the bilateral temporal lobe (especially BA20,21,22), prefrontal cortex, PCC, insular, angular gyrus, ACC, pre-SMA, SMG, hippocampal formation, caudate and putamen. It is worth mentioning that compared with HCs, V1 in PAs exhibited decreased or similar FC with the thalamus, whereas V2 and MT+ exhibited increased FCs with the thalamus, especially pulvinar. Conclusions In our study, we identified significant neural reorganization in the vision-related cortex of PA patients with visual damage compared with HCs. Most subareas within the visual cortex exhibited remarkable neural dysfunction. Some subareas, including the MT+ and V2, exhibited enhanced FC with the thalamic

  16. Recognition of Activities of Daily Living with Egocentric Vision: A Review

    PubMed Central

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  17. Phobos in Stereo

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter took two images of the larger of Mars' two moons, Phobos, within 10 minutes of each other on March 23, 2008. This view combines the two images. Because the two were taken at slightly different viewing angles, this provides a three-dimensional effect when seen through red-blue glasses (red on left eye).

    The illuminated part of Phobos seen here is about 21 kilometers (13 miles) across. The most prominent feature is the large crater Stickney at the bottom of the image. With a diameter of 9 kilometers (5.6 miles), it is the largest feature on Phobos. A series of troughs and crater chains is obvious on other parts of the moon. Although many appear radial to Stickney in this image, recent studies from the European Space Agency's Mars Express orbiter indicate that they are not related to Stickney. Instead, they may have formed when material ejected from impacts on Mars later collided with Phobos. The lineated textures on the walls of Stickney and other large craters are landslides formed from materials falling into the crater interiors in the weak Phobos gravity (less than one one-thousandth of the gravity on Earth).

    This stereo view combines images in the HiRISE catalog as PSP_007769_9010 (in red here) and PSP_007769_9015 (in blue here).

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The High Resolution Imaging Science Experiment is operated by the University of Arizona, Tucson, and the instrument was built by Ball Aerospace & Technologies Corp., Boulder, Colo.

  18. Digital ruler: real-time object tracking and dimension measurement using stereo cameras

    NASA Astrophysics Data System (ADS)

    Nash, James; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas; Siddiqui, Hasib

    2013-02-01

    Stereo metrology involves obtaining spatial estimates of an object's length or perimeter using the disparity between boundary points. True 3D scene information is required to extract length measurements of an object's projection onto the 2D image plane. In stereo vision the disparity measurement is highly sensitive to object distance, baseline distance, calibration errors, and relative movement of the left and right demarcation points between successive frames. Therefore a tracking filter is necessary to reduce position error and improve the accuracy of the length measurement to a useful level. A Cartesian-coordinate extended Kalman filter (EKF) is designed based on the canonical equations of stereo vision. This filter represents a simple reference design that has not seen much exposure in the literature. A second filter, formulated in a modified sensor-disparity (DS) coordinate system, is also presented and shown to exhibit lower errors in a simulated experiment.
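
    The canonical rectified-stereo equations underlying the filters above, with a length measurement between two triangulated demarcation points; the focal length and baseline values are placeholders, and the Kalman filtering itself is omitted:

    ```python
    import numpy as np

    def triangulate(xl, xr, y, f=800.0, B=0.12):
        """Back-project a rectified correspondence to a 3D point.

        xl, xr, y in pixels with the principal point at the origin;
        f is the focal length in pixels, B the baseline in meters.
        """
        d = xl - xr                  # disparity
        Z = f * B / d                # depth from the canonical equation
        return np.array([xl * Z / f, y * Z / f, Z])

    def object_length(p_left, p_right, q_left, q_right):
        """Metric distance between two boundary points, each given as
        matching (x, y) pixel coordinates in the left and right images."""
        P = triangulate(p_left[0], p_right[0], p_left[1])
        Q = triangulate(q_left[0], q_right[0], q_left[1])
        return float(np.linalg.norm(P - Q))

    # Endpoints with disparities of 40 and 38 pixels:
    print(object_length((120, 30), (80, 30), (-100, 32), (-138, 32)))
    ```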

  19. Evolution of activity patterns and chromatic vision in primates: morphometrics, genetics and cladistics.

    PubMed

    Heesy, C P; Ross, C F

    2001-02-01

    Hypotheses for the adaptive origin of primates have reconstructed nocturnality as the primitive activity pattern for the entire order, based on functional/adaptive interpretations of the relative size and orientation of the orbits, body size and dietary reconstruction. Based on comparative data from extant taxa, this reconstruction implies that basal primates were also solitary, faunivorous, and arboreal. Recently, primates have been hypothesized to be primitively diurnal, based in part on the distribution of color-sensitive photoreceptor opsin genes and active trichromatic color vision in several extant strepsirrhines as well as anthropoid primates (Tan & Li, 1999, Nature 402, 36; Li, 2000, Am. J. Phys. Anthropol. Suppl. 30, 318). If diurnality is primitive for all primates, then the functional and adaptive significance of aspects of strepsirrhine retinal morphology, and of other adaptations of the primate visual system such as high-acuity stereopsis, have been misinterpreted for decades. This hypothesis also implies that nocturnality evolved numerous times in primates. However, the hypothesis that primates are primitively diurnal has not been analyzed in a phylogenetic context, nor have the activity patterns of several fossil primates been considered. This study investigated the evolution of activity patterns and trichromacy in primates using a new method for reconstructing activity patterns in fragmentary fossils and by reconstructing visual system character evolution at key ancestral nodes of primate higher taxa. Results support previous studies that reconstruct omomyiform primates as nocturnal. The larger body sizes of adapiform primates confound inferences regarding activity pattern evolution in this group. The hypothesis of diurnality and trichromacy as primitive for primates is not supported by the phylogenetic data. On the contrary, nocturnality and dichromatic vision are not only primitive for all primates, but also for extant strepsirrhines. Diurnality, and

  20. A binocular stereo approach to AR/C at the Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Smith, Alan T.

    1991-01-01

    Automated Rendezvous and Capture requires the determination of the 6 DOF relating two free bodies. Sensor systems that can provide such information have varying sizes, weights, power requirements, complexities, and accuracies. One type of sensor system that can provide several key advantages is a binocular stereo vision system.

  1. Photometric invariant stereo matching method.

    PubMed

    Gu, Feifei; Zhao, Hong; Zhou, Xiang; Li, Jinjun; Bu, Penghui; Zhao, Zixin

    2015-12-14

    A robust stereo matching method based on a comprehensive mathematical model of the color formation process is proposed to estimate the disparity map of stereo images with noise and photometric variations. A band-pass filter with a DoP kernel is first used to filter out the noise component of the stereo images. Then a log-chromaticity normalization process is applied to eliminate the influence of lighting geometry. All other factors that may influence the color formation process are removed through the disparity estimation process with a specific matching cost. Performance of the developed method is evaluated by comparison with several up-to-date algorithms. Experimental results are presented to demonstrate the robustness and accuracy of the method.
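
    The log-chromaticity normalization step mentioned above can be sketched as follows: dividing each RGB channel by the geometric mean of the three channels and taking logarithms cancels the common shading factor contributed by lighting geometry. This is a generic sketch of the normalization, assuming positive float RGB input; the paper's band-pass DoP filtering and matching cost are not reproduced.

```python
import numpy as np

def log_chromaticity(img):
    """Map an RGB image (H, W, 3, float, > 0) to log-chromaticity space.

    Dividing each channel by the geometric mean of the three channels
    removes the common shading/intensity factor, a standard way to
    cancel lighting geometry before matching."""
    img = np.clip(img.astype(np.float64), 1e-6, None)
    gm = np.cbrt(img[..., 0] * img[..., 1] * img[..., 2])
    return np.log(img / gm[..., None])

rgb = np.random.rand(4, 4, 3)
chroma = log_chromaticity(rgb)
# Scaling the illumination leaves the result unchanged:
print(np.allclose(chroma, log_chromaticity(0.5 * rgb)))
```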

  2. Vision-based localization in urban environments

    NASA Astrophysics Data System (ADS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-05-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory has developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location, and 3) the video streams produced by the stereo pair. At each step during the traverse the system captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step, and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of three primary components. The first is a stereo-based visual odometry system that calculates the 6-degree-of-freedom camera motion between sequential frames. The second component uses a set of heuristics to identify straight-line segments that are likely to be part of a building exterior. Ranging to these straight-line features is computed using binocular or wide-baseline stereo. The resulting features and the associated range measurements are fed to the third software component, a particle-filter-based localization system, which uses the map and the most recent results from the first two components to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and describes the results of applying the system to the global localization of a camera system over an approximately half-kilometer traverse across JPL
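
    A minimal sketch of the particle-filter localization step is shown below: each cycle propagates the pose hypotheses with the visual-odometry increment, reweights them by agreement with the map, and resamples. The map likelihood here is a toy stand-in (a single known wall), not JPL's building-range model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, odom, measure_likelihood, motion_noise=0.05):
    """One predict/update/resample cycle of a 2D particle filter.

    particles: (N, 3) array of (x, y, heading) pose hypotheses
    odom:      (dx, dy, dtheta) visual-odometry increment
    measure_likelihood: callable scoring a pose against the building map
    (here a stand-in for the range-to-facade comparison)."""
    particles = particles + odom + rng.normal(0, motion_noise, particles.shape)
    weights = weights * np.array([measure_likelihood(p) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy likelihood: prefer poses near a known wall at x = 5.
like = lambda p: np.exp(-0.5 * (p[0] - 5.0) ** 2)
pts = rng.uniform(0, 10, (200, 3))
w = np.full(200, 1 / 200)
for _ in range(20):
    pts, w = pf_step(pts, w, np.array([0.0, 0.0, 0.0]), like)
print(pts[:, 0].mean())   # converges toward x = 5
```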

  3. Stereo matching: performance study of two global algorithms

    NASA Astrophysics Data System (ADS)

    Arunagiri, Sarala; Jordan, Victor J.; Teller, Patricia J.; Deroba, Joseph C.; Shires, Dale R.; Park, Song J.; Nguyen, Lam H.

    2011-06-01

    Techniques such as clinometry, stereoscopy, interferometry, and polarimetry are used for Digital Elevation Model (DEM) generation from Synthetic Aperture Radar (SAR) images. The choice of technique depends on the SAR configuration, the means used for image acquisition, and the relief type. The most popular techniques are interferometry for regions of high coherence and stereoscopy for regions such as steep forested mountain slopes. Stereo matching, which finds the disparity map or correspondence points between two images acquired from different sensor positions, is a core process in stereoscopy. Additionally, automatic stereo processing, which involves stereo matching, is an important process in other applications, including vision-based obstacle avoidance for unmanned air vehicles (UAVs), extraction of weak targets in clutter, and automatic target detection. Due to its high computational complexity, stereo matching has traditionally been, and continues to be, one of the most heavily investigated topics in computer vision. A stereo matching algorithm performs a subset of the following four steps: cost computation, cost (support) aggregation, disparity computation/optimization, and disparity refinement. Based on the method used for cost computation, algorithms are classified into feature-, phase-, and area-based algorithms; and they are classified as local or global based on how they perform disparity computation/optimization. We present a comparative performance study of two pairs, i.e., four versions, of global stereo matching codes. Each pair uses a different minimization technique: a simulated annealing or graph cut algorithm. The codes of a pair differ in terms of the employed global cost function: absolute difference (AD) or a variation of normalized cross correlation (NCC). The performance comparison is in terms of execution time, the global minimum cost achieved, power and energy consumption, and the quality of the generated output. The results of
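
    The two global cost functions named above can be illustrated on a single window pair, as in the sketch below: absolute difference (AD) is penalized by photometric changes, while normalized cross correlation (NCC) is invariant to gain and offset. The window contents are synthetic.

```python
import numpy as np

def cost_ad(wl, wr):
    """Absolute-difference matching cost over a window pair (lower = better)."""
    return np.abs(wl - wr).sum()

def cost_ncc(wl, wr):
    """Normalized cross-correlation, returned as a cost (lower = better)."""
    a = wl - wl.mean()
    b = wr - wr.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return 1.0 - (a * b).sum() / denom

left = np.random.rand(7, 7)
right = 0.8 * left + 0.1          # gain/offset change, same structure
print(cost_ad(left, right))        # penalized by the photometric change
print(cost_ncc(left, right))       # near zero: NCC is gain/offset invariant
```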

  4. #1 Stereo Orbit - Launch to Feb 2011

    NASA Video Gallery

    The STEREO mission consists of two spacecraft orbiting the Sun, one moving a bit faster than Earth and the other a bit slower. In the time since the STEREO spacecraft entered these orbits near the ...

  5. How to Read a NASA STEREO Image

    NASA Video Gallery

    NASA’s STEREO mission observed a coronal mass ejection on July 23, 2012 – one of the fastest CMEs on record. The video uses STEREO imagery from this rare event to describe features to pay attention...

  6. Viking orbiter stereo imaging catalog

    NASA Technical Reports Server (NTRS)

    Blasius, K. R.; Vetrone, A. V.; Martin, M. D.

    1980-01-01

    The extremely long missions of the two Viking Orbiter spacecraft produced a wealth of photos of surface features, many of which can be used to form stereo images, allowing the earth-bound student of Mars to examine the subject in 3-D. This catalog is a technical guide to the use of stereo coverage within the complex Viking imaging data set. Since that data set is still growing (January 1980, about 3 1/2 years after the mission began), a second edition of this catalog is planned, with completion expected about November 1980.

  7. Stereo Pair, Honolulu, Oahu

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Honolulu, on the island of Oahu, is a large and growing urban area. This stereoscopic image pair, combining a Landsat image with topography measured by the Shuttle Radar Topography Mission (SRTM), shows how topography controls the urban pattern. This color image can be viewed in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the image pair, and viewing them with a stereoscope.

    Features of interest in this scene include Diamond Head (an extinct volcano near the bottom of the image), Waikiki Beach (just above Diamond Head), the Punchbowl National Cemetery (another extinct volcano, near the image center), downtown Honolulu and Honolulu harbor (image left-center), and offshore reef patterns. The slopes of the Koolau mountain range are seen in the right half of the image. Clouds commonly hang above ridges and peaks of the Hawaiian Islands, but in this synthesized stereo rendition appear draped directly on the mountains. The clouds are actually about 1000 meters (3300 feet) above sea level.

    This stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with a Landsat 7 Thematic Mapper image collected at the same time as the SRTM flight. The topography data were used to create two differing perspectives, one for each eye. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions. The United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota, provided the Landsat data.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the

  8. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the

  9. Real time swallowing measurement system by using photometric stereo

    NASA Astrophysics Data System (ADS)

    Fujino, Masahiro; Kato, Kunihito; Mura, Emi; Nagai, Hajime

    2015-04-01

    In this paper, we propose a measurement system to evaluate swallowing by estimating the movement of the thyroid cartilage. We developed a measurement system based on a vision sensor in order to achieve noncontact, non-invasive sensing. The movement of the subject's thyroid cartilage is tracked through three-dimensional information about the surface of the skin measured by photometric stereo. We constructed a camera system that uses near-IR light sources and three camera sensors. We confirmed the effectiveness of the proposed system by experiments.
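
    The photometric stereo step can be sketched with the classical Lambertian formulation: under K known light directions L, per-pixel intensities satisfy I = L (albedo * n), so normals follow from a least-squares solve. This is the textbook method under assumed lighting, not the authors' near-IR three-camera implementation.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel surface normals from Lambertian shading.

    images: (K, H, W) intensities under K known light directions
    lights: (K, 3) unit light vectors; solves lights @ (albedo * n) = I."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                       # (K, H*W)
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.clip(albedo, 1e-9, None)
    return n.reshape(3, H, W), albedo.reshape(H, W)

# Synthetic check: a flat patch tilted toward +x.
true_n = np.array([0.5, 0.0, np.sqrt(0.75)])
Lraw = np.array([[0., 0., 1.], [1., 0., 1.], [0., 1., 1.]])
L = Lraw / np.linalg.norm(Lraw, axis=1, keepdims=True)
imgs = (L @ true_n).reshape(3, 1, 1) * np.ones((3, 4, 4))
n, _ = photometric_stereo(imgs, L)
print(n[:, 0, 0])   # ~ true_n
```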

  10. STEREO-IMPACT E/PO: Getting Ready for Launch!

    NASA Astrophysics Data System (ADS)

    Mendez, B. J.; Peticolas, L. M.; Craig, N.

    2005-12-01

    The Solar Terrestrial Relations Observatory (STEREO) is scheduled for launch in April/May 2006. STEREO will study the Sun with two spacecraft on either side of Earth in orbit around the Sun. The primary science goal is to understand the nature of Coronal Mass Ejections (CMEs). The E/PO program for the IMPACT suite of instruments aboard the two spacecraft is planning several activities leading up to launch to raise awareness of and interest in the mission and its scientific discoveries. We will be participating in NASA's Sun-Earth Day events, which are scheduled to coincide with a total solar eclipse in March. This event offers a good opportunity to engage the public in STEREO science, because an eclipse allows one to see the solar corona from where CMEs erupt. We will be conducting teacher workshops locally in California and also at the annual conference of the National Science Teachers Association. At these workshops, we will focus on the basics of magnetism and then its connection to CMEs and Earth's magnetic field, leading to the questions STEREO scientists hope to answer. In addition, we will be working with NASA's Public Relations office to ensure that STEREO E/PO programs are highlighted in press releases about the mission.

  11. Pilot performance and eye movement activity with varying levels of display integration in a synthetic vision cockpit

    NASA Astrophysics Data System (ADS)

    Stark, Julie Michele

    The primary goal of the present study was to investigate the effects of display integration in a simulated commercial aircraft cockpit equipped with a synthetic vision display. Combinations of display integration level (low/high), display view (synthetic vision view/traditional display), and workload (low/high) were presented to each participant. Sixteen commercial pilots flew multiple approaches under IMC conditions in a moderate fidelity fixed-base part-task simulator. Pilot performance data, visual activity, mental workload, and self-report situation awareness were measured. Congruent with the Proximity Compatibility Principle, the more integrated display facilitated superior performance on integrative tasks (lateral and vertical path maintenance), whereas a less integrated display elicited better focus task performance (airspeed maintenance). The synthetic vision displays facilitated superior path maintenance performance under low workload, but these performance gains were not as evident during high workload. The majority of the eye movement findings identified differences in visual acquisition of the airspeed indicator, the glideslope indicator, the localizer, and the altimeter as a function of display integration level or display view. There were more fixations on the airspeed indicator with the more integrated display layout and during high workload trials. There were also more fixations on the glideslope indicator with the more integrated display layout. However, there were more fixations on the localizer with the less integrated display layout. There were more fixations on the altimeter with the more integrated display and with the traditional view. Only a few eye movement differences were produced by the synthetic vision displays; pilots looked at the glideslope indicator and the altimeter less with the synthetic vision view. This supports the notion that utilizing a synthetic vision display should not adversely impact visual acquisition of data. Self

  12. Epipolar geometry of opti-acoustic stereo imaging.

    PubMed

    Negahdaripour, Shahriar

    2007-10-01

    Optical and acoustic cameras are suitable imaging systems to inspect underwater structures, both in regular maintenance and security operations. Despite high resolution, optical systems have limited visibility range when deployed in turbid waters. In contrast, the new generation of high-frequency (MHz) acoustic cameras can provide images with enhanced target details in highly turbid waters, though their range is reduced by one to two orders of magnitude compared to traditional low-/mid-frequency (tens to hundreds of kHz) sonar systems. It is conceivable that an effective inspection strategy is the deployment of both optical and acoustic cameras on a submersible platform, to enable target imaging in a range of turbidity conditions. Under this scenario, and where visibility allows, registration of the images from both cameras arranged in a binocular stereo configuration provides valuable scene information that cannot be readily recovered from each sensor alone. We explore and derive the constraint equations for the epipolar geometry and stereo triangulation in utilizing these two sensing modalities with different projection models. Theoretical results supported by computer simulations show that an opti-acoustic stereo imaging system outperforms traditional binocular vision with optical cameras, particularly for increasing target distance and (or) turbidity.

  13. Calibration of stereo rigs based on the backward projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin

    2016-08-01

    High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of the two rigidly fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera by planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
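
    The core idea of the backward projection process, optimizing calibration parameters against 3D rather than 2D residuals, can be sketched with a toy rectified two-camera model: triangulate known target points from their image observations and minimize the 3D reconstruction error over the parameters. The rectified projection model, focal length and baseline below are assumptions for illustration, not the paper's full intrinsic/extrinsic model.

```python
import numpy as np
from scipy.optimize import least_squares

# Rectified-stereo toy model: pixel coordinates from 3D points.
def project(P, f, B):
    xl = f * P[:, 0] / P[:, 2]          # left-camera column
    xr = f * (P[:, 0] - B) / P[:, 2]    # right-camera column
    y = f * P[:, 1] / P[:, 2]           # shared row
    return xl, xr, y

def triangulate(xl, xr, y, f, B):
    d = xl - xr                          # disparity
    Z = f * B / d
    return np.stack([xl * Z / f, y * Z / f, Z], axis=1)

# Known target points (e.g. planar pattern corners in world coordinates).
P_true = np.array([[0.1, 0.0, 1.0], [0.3, 0.2, 1.5],
                   [-0.2, 0.1, 2.0], [0.0, -0.1, 1.2]])
xl, xr, y = project(P_true, f=800.0, B=0.12)   # "observed" pixels

# BPP-style objective: the residual is the *3D* error, not the pixel error.
def resid(theta):
    f, B = theta
    return (triangulate(xl, xr, y, f, B) - P_true).ravel()

sol = least_squares(resid, x0=[700.0, 0.1])
print(sol.x)   # recovers f ~ 800, B ~ 0.12
```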

  14. Evaluating the performance of close-range 3D active vision systems for industrial design applications

    NASA Astrophysics Data System (ADS)

    Beraldin, J.-Angelo; Gaiani, Marco

    2004-12-01

    In recent years, active three-dimensional (3D) vision systems, or range cameras for short, have come out of research laboratories to find niche markets in application fields as diverse as industrial design, automotive manufacturing, geomatics, space exploration and cultural heritage, to name a few. Many publications address different issues linked to 3D sensing and processing, but these technologies currently pose a number of challenges to many recent users, i.e., "what are they, how good are they and how do they compare?". The need to understand, test and integrate these range cameras with other technologies, e.g. photogrammetry, CAD, etc., is driven by the quest for optimal resolution, accuracy, speed and cost. Before investing, users want to be certain that a given range camera satisfies their operational requirements. An understanding of the basic theory and best practices associated with these cameras is in fact fundamental to fulfilling the requirements listed above in an optimal way. This paper addresses the evaluation of active 3D range cameras as part of a study to better understand and select one or a number of them to fulfill the needs of industrial design applications. In particular, the effect of object material and surface features, calibration, and performance evaluation are discussed. Results are given for six different range cameras for close-range applications.

  16. Viewing The Entire Sun With STEREO And SDO

    NASA Astrophysics Data System (ADS)

    Thompson, William T.; Gurman, J. B.; Kucera, T. A.; Howard, R. A.; Vourlidas, A.; Wuelser, J.; Pesnell, D.

    2011-05-01

    On 6 February 2011, the two Solar Terrestrial Relations Observatory (STEREO) spacecraft were at 180 degrees separation. This allowed the first-ever simultaneous view of the entire Sun. Combining the STEREO data with corresponding images from the Solar Dynamics Observatory (SDO) allows this full-Sun view to continue for the next eight years. We show how the data from the three viewpoints are combined into a single heliographic map. Processing of the STEREO beacon telemetry allows these full-Sun views to be created in near-real-time, allowing tracking of solar activity even on the far side of the Sun. This is a valuable space-weather tool, not only for anticipating activity before it rotates onto the Earth-view, but also for deep space missions in other parts of the solar system. Scientific use of the data includes the ability to continuously track the entire lifecycle of active regions, filaments, coronal holes, and other solar features. There is also a significant public outreach component to this activity. The STEREO Science Center produces products from the three viewpoints used in iPhone/iPad and Android applications, as well as time sequences for spherical projection systems used in museums, such as Science-on-a-Sphere and Magic Planet.

  17. Orbit Determination and Navigation of the Solar Terrestrial Relations Observatory (STEREO)

    NASA Technical Reports Server (NTRS)

    Mesarch, Michael A.; Robertson, Mika; Ottenstein, Neil; Nicholson, Ann; Nicholson, Mark; Ward, Douglas T.; Cosgrove, Jennifer; German, Darla; Hendry, Stephen; Shaw, James

    2007-01-01

    This paper provides an overview of the upgrades required for navigation of NASA's twin heliocentric science missions, Solar TErrestrial RElations Observatory (STEREO) Ahead and Behind. Orbit determination of the STEREO spacecraft was provided by the NASA Goddard Space Flight Center's (GSFC) Flight Dynamics Facility (FDF) in support of the mission operations activities performed by the Johns Hopkins University Applied Physics Laboratory (APL). The changes to FDF's orbit determination software included modeling upgrades as well as modifications required to process the Deep Space Network X-band tracking data used for STEREO. Orbit results as well as comparisons to independently computed solutions are also included. The successful orbit determination support aided in maneuvering the STEREO spacecraft, launched on October 26, 2006 (00:52 Z), to target the lunar gravity assists required to place the spacecraft into their final heliocentric drift-away orbits, where they are providing stereo imaging of the Sun.

  19. Statistical Building Roof Reconstruction from WORLDVIEW-2 Stereo Imagery

    NASA Astrophysics Data System (ADS)

    Partovi, T.; Huang, H.; Krauß, T.; Mayer, H.; Reinartz, P.

    2015-03-01

    3D building reconstruction from point clouds is an active research topic in remote sensing, photogrammetry and computer vision. Most prior research has addressed 3D building reconstruction from LiDAR data, which is high resolution and dense. The interest of this work is 3D building reconstruction from Digital Surface Models (DSMs) derived by stereo image matching of spaceborne satellite data, which cover larger areas than LiDAR datasets in one acquisition step and can also be used for remote regions. The challenging problem is the noise of this data, caused by low resolution and matching errors. In this paper, a combined top-down and bottom-up method is developed to find building roof models that exhibit the optimum fit to the point clouds of the DSM. In the bottom-up step of this hybrid method, the building mask and roof components such as ridge lines are extracted. In addition, in order to reduce the computational complexity and the search space, roofs are also classified into pitched and flat roofs. Ridge lines are utilized to estimate roof primitive parameters from a building library, such as width, length, position and orientation. Thereafter, a top-down approach based on Markov Chain Monte Carlo and simulated annealing is applied to optimize the roof parameters in an iterative manner by stochastic sampling, minimizing the average Euclidean distance between the point cloud and the model surface as the fitness function. Experiments are performed on two areas of the city of Munich which include three roof types (hipped, gable and flat roofs). The results show the efficiency of this method even for this type of noisy dataset.
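
    The top-down optimization step can be sketched with plain simulated annealing: perturb the roof parameters, score them by the average distance between DSM points and the model surface, and accept worse candidates with the Metropolis probability. The 1D gable-profile model and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gable-roof height profile across the building width: eaves at z0,
# ridge of height h at position c (all in metres, building width w).
def roof_z(x, z0, h, c, w=10.0):
    return z0 + h * (1.0 - np.abs(x - c) / (w / 2)).clip(min=0.0)

# Synthetic noisy DSM samples of a roof with z0=5, h=3, c=5.
x = rng.uniform(0, 10, 300)
z = roof_z(x, 5.0, 3.0, 5.0) + rng.normal(0, 0.3, x.size)

def fitness(theta):   # mean vertical distance between points and model
    return np.abs(z - roof_z(x, *theta)).mean()

# Simulated annealing over (z0, h, c).
theta = np.array([4.0, 1.0, 4.0])
best, best_f, T = theta.copy(), fitness(theta), 1.0
for _ in range(3000):
    cand = theta + rng.normal(0, 0.1, 3)        # stochastic proposal
    df = fitness(cand) - fitness(theta)
    if df < 0 or rng.random() < np.exp(-df / T):  # Metropolis acceptance
        theta = cand
        if fitness(theta) < best_f:
            best, best_f = theta.copy(), fitness(theta)
    T *= 0.999                                   # cooling schedule
print(best)   # ~ [5, 3, 5]
```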

  20. Attention in Active Vision: A Perspective on Perceptual Continuity Across Saccades.

    PubMed

    Rolfs, Martin

    2015-01-01

    Alfred L. Yarbus was among the first to demonstrate that eye movements actively serve our perceptual and cognitive goals, a crucial recognition that is at the heart of today's research on active vision. He realized that it is not the changes in fixation that stick in memory but the shifts of attention. Indeed, oculomotor control is tightly coupled to functions as fundamental as attention and memory. This tight relationship offers an intriguing perspective on transsaccadic perceptual continuity, which we experience despite the fact that saccades cause rapid shifts of the image across the retina. Here, I elaborate this perspective based on a series of psychophysical findings. First, saccade preparation shapes the visual system's priorities; it enhances visual performance and perceived stimulus intensity at the targets of the eye movement. Second, before saccades, the deployment of visual attention is updated, predictively facilitating perception at those retinal locations that will be relevant once the eyes land. Third, saccadic eye movements strongly affect the contents of visual memory, highlighting their crucial role in which parts of a scene we remember or forget. Together, these results provide insights into how attentional processes enable the visual system to cope with the retinal consequences of saccades.

  1. Solid state active/passive night vision imager using continuous-wave laser diodes and silicon focal plane arrays

    NASA Astrophysics Data System (ADS)

    Vollmerhausen, Richard H.

    2013-04-01

    Passive imaging offers covertness and low power, while active imaging provides longer range target acquisition without the need for natural or external illumination. This paper describes a focal plane array (FPA) concept that has the low noise needed for state-of-the-art passive imaging and the high-speed gating needed for active imaging. The FPA is used with highly efficient but low-peak-power laser diodes to create a night vision imager that has the size, weight, and power attributes suitable for man-portable applications. Video output is provided in both the active and passive modes. In addition, the active mode is Class 1 eye safe and is not visible to the naked eye or to night vision goggles.

  2. Stereo reconstruction from multiperspective panoramas.

    PubMed

    Li, Yin; Shum, Heung-Yeung; Tang, Chi-Keung; Szeliski, Richard

    2004-01-01

    A new approach to computing a panoramic (360 degrees) depth map is presented in this paper. Our approach uses a large collection of images taken by a camera whose motion has been constrained to planar concentric circles. We resample regular perspective images to produce a set of multiperspective panoramas and then compute depth maps directly from these resampled panoramas. Our panoramas sample uniformly in three dimensions: rotation angle, inverse radial distance, and vertical elevation. The use of multiperspective panoramas eliminates the limited overlap present in the original input images and, thus, the problems encountered in conventional multibaseline stereo can be avoided. Our approach differs from stereo matching of single-perspective panoramic images taken from different locations, where the epipolar constraints are sine curves. For our multiperspective panoramas, the epipolar geometry, to a first-order approximation, consists of horizontal lines. Therefore, any traditional stereo algorithm can be applied to multiperspective panoramas with little modification. In this paper, we describe two reconstruction algorithms. The first is a cylinder sweep algorithm that uses a small number of resampled multiperspective panoramas to obtain dense 3D reconstruction. The second algorithm, in contrast, uses a large number of multiperspective panoramas and takes advantage of the approximate horizontal epipolar geometry inherent in multiperspective panoramas. It comprises a novel and efficient 1D multibaseline matching technique, followed by tensor voting to extract the depth surface. Experiments show that our algorithms are capable of producing comparable, high-quality depth maps which can be used for applications such as view interpolation.

  3. Wide baseline stereo matching based on double topological relationship consistency

    NASA Astrophysics Data System (ADS)

    Zou, Xiaohong; Liu, Bin; Song, Xiaoxue; Liu, Yang

    2009-07-01

    Stereo matching is one of the most important branches of computer vision. In this paper, an algorithm is proposed for wide-baseline stereo vision matching. A novel scheme is presented called double topological relationship consistency (DCTR). The combination of double topological configurations includes the consistency of the first topological relationship (CFTR) and the consistency of the second topological relationship (CSTR). It not only sets up a more advanced matching model, but also discards mismatches by iteratively computing the fitness of the feature matches, overcoming many problems of traditional methods that depend on powerful invariance to changes in scale, rotation or illumination across large view changes and even occlusions. Experimental examples are shown where the two cameras have been located in very different orientations. Also, the epipolar geometry can be recovered using RANSAC, by far the most widely adopted method. By this method, we can obtain correspondences with high precision on wide-baseline matching problems. Finally, the effectiveness and reliability of the method are demonstrated in wide-baseline experiments on the image pairs.
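
    The RANSAC-based epipolar recovery mentioned at the end of the abstract is standard and can be sketched with OpenCV, as below: match features across the wide-baseline pair, then estimate the fundamental matrix while rejecting mismatches. The file names are placeholders, and ORB is used here simply as a readily available detector, not the paper's feature scheme.

```python
import cv2
import numpy as np

# Recover the epipolar geometry of a wide-baseline pair with RANSAC.
img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-checking to prune weak candidates.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# RANSAC rejects mismatches while estimating the fundamental matrix F.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print(F)
print("inliers:", int(inlier_mask.sum()))
```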

  4. FPGA implementation of glass-free stereo vision

    NASA Astrophysics Data System (ADS)

    Tang, Weidong; Yan, Xiaolin

    2016-04-01

    This paper presents a real-time, efficient glass-free 3D system based on an FPGA. The system converts a two-view 1080p input stream at 60 frames per second (fps) into a multi-view video at 30 fps and 4K resolution. In order to provide a smooth and comfortable viewing experience, glass-free 3D systems must display multi-view videos. Generating a multi-view video from a two-view input involves three steps: the first is to compute disparity maps from the two input views; the second is to synthesize a number of new views based on the computed disparity maps and the input views; the last is to produce video from the new views according to the specifications of the lens installed on the TV set.
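
    The second step, synthesizing intermediate views from a disparity map, can be sketched by shifting pixels horizontally by a fraction of their disparity (depth-image-based rendering in its simplest form). The sketch below uses naive forward warping without hole filling, which a production FPGA pipeline would also handle.

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp the left image toward a virtual viewpoint.

    alpha=0 reproduces the left view, alpha=1 the right view; intermediate
    values give the in-between views a multi-view display needs. Simple
    forward warping with nearest-pixel splatting; real systems also fill
    disocclusion holes."""
    H, W = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(W)
    for row in range(H):
        xt = np.clip((xs - alpha * disparity[row]).astype(int), 0, W - 1)
        out[row, xt] = left[row, xs]
    return out

left = np.random.rand(4, 8)
disp = np.full((4, 8), 2.0)
mid = synthesize_view(left, disp, 0.5)   # virtual half-baseline view
print(mid.shape)
```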

  5. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
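
    The described processing, isolating the laser spot by differencing frames taken before and after illumination and then converting the horizontal disparity to range, can be sketched as follows; the focal length and baseline are assumed values.

```python
import numpy as np

def laser_spot(frame_off, frame_on, thresh=0.2):
    """Locate the laser dot by differencing frames taken with the laser
    off and on; common background pixels cancel, leaving the spot."""
    diff = np.clip(frame_on.astype(float) - frame_off.astype(float), 0, None)
    y, x = np.unravel_index(np.argmax(diff), diff.shape)
    return (x, y) if diff[y, x] > thresh else None

def range_from_disparity(x_left, x_right, f=800.0, B=0.3):
    """Stereometric range from the spot's horizontal disparity."""
    return f * B / (x_left - x_right)

off = np.zeros((120, 160))
on_left, on_right = off.copy(), off.copy()
on_left[60, 100] = 1.0       # spot in the left image
on_right[60, 88] = 1.0       # spot in the right image (disparity 12 px)
xl, _ = laser_spot(off, on_left)
xr, _ = laser_spot(off, on_right)
print(range_from_disparity(xl, xr))   # 800 * 0.3 / 12 = 20 m
```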

  6. The analysis on optical property for stereo vision measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Zong-ming; Ye, Dong; Zhang, Yu; Lu, Shan; Cao, Shu-qing

    2016-01-01

    In relative measurement for space non-cooperative targets, analysis of the optical properties of the target is one of the premises of sensor design. This article targets GEO satellites. From the perspective of photometry and based on the blackbody radiation law, we analyze the visible light energy of the sun outside the atmosphere and consider the impact of the satellite's thermal-control multilayer insulation, modeling the luminosity feature as a function of the solar incidence angle and the sensor observation angle. Finally, we obtain the equivalent visual magnitude of the target satellite at the pupil of the camera. Our research can effectively guide the design and development of visible-light relative measurement sensors.

  7. An analysis of simulated stereo radar imagery

    NASA Technical Reports Server (NTRS)

    Pisaruck, M. A.; Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.

    1983-01-01

    Simulated stereo radar imagery is used to investigate parameters for a spaceborne imaging radar. Incidence angles ranging from small to intermediate to large are used with three digital terrain model areas which are representative of relatively flat, moderately rough, and mountainous terrain. The simulated radar imagery was evaluated by interpreters for ease of stereo perception and information content, and rank-ordered within each class of terrain. The interpreters' results are analyzed for trends between the height of a feature and either the parallax or the vertical exaggeration of a stereo pair. A model is developed which predicts the amount of parallax (or vertical exaggeration) an interpreter would desire for best stereo perception of a feature of a specific height. Results indicate that the selection of the angle of incidence and the stereo intersection angle depends upon the relative relief of the terrain. Examples of the simulated stereo imagery are presented for a candidate spaceborne imaging radar having four selectable angles of incidence.

  8. Compact stereo endoscopic camera using microprism arrays.

    PubMed

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses.

  9. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

  10. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as input to this algorithm. The epipolar geometry of linear-array scanners is not a straight line, as it is for frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering the stereo pair. The original images are also divided into small tiles. In this way, by omitting the need for extra information, the speed of the matching algorithm is increased and the memory requirement is decreased. Our experiments on a GeoEye-1 stereo pair captured over the city of Qom in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.
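
    Once epipolar (or quasi-epipolar) images are available, a semi-global matcher can be applied directly; the sketch below uses OpenCV's SGBM implementation with illustrative parameters and placeholder file names, not the authors' pipeline.

```python
import cv2

# A semi-global matcher applied to an epipolar-resampled stereo pair.
left = cv2.imread("epi_left.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("epi_right.tif", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=block,
    P1=8 * block * block,        # penalty for small disparity changes
    P2=32 * block * block,       # penalty for large disparity changes
    uniquenessRatio=10,
)
# Disparities are returned as fixed-point values scaled by 16.
disparity = sgbm.compute(left, right).astype(float) / 16.0
print(disparity.min(), disparity.max())
```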

  11. Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images

    NASA Technical Reports Server (NTRS)

    Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.

    2011-01-01

    A stereo correlation method in the object domain is proposed to generate accurate and dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain with the predefined surface normal of a post, rather than in the image domain. The squared error of the back-projected images on the local terrain is minimized with respect to the post elevation. This one-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.

  12. Stereo image coding: a projection approach.

    PubMed

    Aydinoğlu, H; Hayes, M H

    1998-01-01

    Recently, due to advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. This paper focuses on the stereo image coding problem. We begin with a description of the problem and a survey of current stereo coding techniques. A new stereo image coding algorithm that is based on disparity compensation and subspace projection is described. This algorithm, the subspace projection technique (SPT), is a transform domain approach with a space-varying transformation matrix and may be interpreted as a spatial-transform domain representation of the stereo data. The advantage of the proposed approach is that it can locally adapt to the changes in the cross-correlation characteristics of the stereo pairs. Several design issues and implementations of the algorithm are discussed. Finally, we present empirical results suggesting that the SPT approach outperforms current stereo coding techniques.

  13. Lambda Vision

    NASA Astrophysics Data System (ADS)

    Czajkowski, Michael

    2014-06-01

    There is an explosion in the quantity and quality of IMINT data being captured in Intelligence Surveillance and Reconnaissance (ISR) today. While automated exploitation techniques involving computer vision are arriving, only a few architectures can manage both the storage and bandwidth of large volumes of IMINT data and also present results to analysts quickly. Lockheed Martin Advanced Technology Laboratories (ATL) has been actively researching the application of Big Data cloud computing techniques to computer vision applications. This paper presents the results of this work in adopting a Lambda Architecture to process and disseminate IMINT data using computer vision algorithms. The approach embodies an end-to-end solution by processing IMINT data from sensors to serving information products quickly to analysts, independent of the size of the data. The solution lies in dividing the architecture into a speed layer for low-latency processing and a batch layer for higher quality answers at the expense of time, but in a robust and fault-tolerant way. This approach was evaluated using a large corpus of IMINT data collected by a C-130 Shadow Harvest sensor over Afghanistan from 2010 through 2012. The evaluation data corpus included full motion video from both narrow and wide-area fields of view. The evaluation was done on a scaled-out cloud infrastructure similar in composition to those found in the Intelligence Community. The paper shows experimental results to prove the scalability of the architecture and the precision of its results, using a computer vision algorithm designed to identify man-made objects in sparse data terrain.

  14. How to assess vision.

    PubMed

    Marsden, Janet

    2016-09-21

    Rationale and key points: An objective assessment of the patient's vision is important to assess variation from 'normal' vision in acute and community settings, to establish a baseline before examination and treatment in the emergency department, and to assess any changes during ophthalmic outpatient appointments. » Vision is one of the essential senses that permits people to make sense of the world. » Visual assessment does not only involve measuring central visual acuity, it also involves assessing the consequences of reduced vision. » Assessment of vision in children is crucial to identify issues that might affect vision and visual development, and to optimise lifelong vision. » Untreatable loss of vision is not an inevitable consequence of ageing. » Timely and repeated assessment of vision over life can reduce the incidence of falls, prevent injury and optimise independence. Reflective activity: 'How to' articles can help update your practice and ensure it remains evidence based. Apply this article to your practice. Reflect on and write a short account of: 1. How this article might change your practice when assessing people holistically. 2. How you could use this article to educate your colleagues in the assessment of vision.

  15. Subjective evaluations of multiple three-dimensional displays by a stereo-deficient viewer: an interesting case study

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Ellis, Sharon A.; Harrington, Lawrence K.; Havig, Paul R.

    2014-06-01

    A study was conducted with sixteen observers evaluating four different three-dimensional (3D) displays for usability, quality, and physical comfort. One volumetric display and three different stereoscopic displays were tested. The observers completed several different types of questionnaires before, during and after each test session. All observers were tested for distance acuity, color vision, and stereoscopic acuity. One observer in particular appeared to have either degraded or absent binocular vision on the stereo acuity test. During the subjective portions of the data collection, this observer showed no obvious signs of depth perception problems and finished the study with no issues reported. Upon further post-hoc stereovision testing of this observer, we discovered that he essentially failed all tests requiring depth judgments of fine disparity and had at best only gross levels of stereoscopic vision (failed all administered stereoacuity threshold tests, testing up to about 800 arc sec of disparity). When questioned about this, the stereo-deficiency was unknown to the observer, who reported having seen several stereoscopic 3D movies (and enjoyed the 3D experiences). Interestingly, we had collected subjective reports about the quality of three-dimensional imagery across multiple stereoscopic displays from a person with deficient stereo-vision. We discuss the participant's unique pattern of results and compare and contrast these results with the other stereo-normal participants. The implications for subjective measurements on stereoscopic three-dimensional displays and for subjective display measurement in general are considered.

  16. Stereo imaging based particle velocimeter

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1994-01-01

    Three-dimensional coordinates of an object are determined from its two-dimensional images for a class of points on the object. The two-dimensional images are first filtered by a Laplacian of Gaussian (LoG) filter in order to detect a set of feature points on the object. The feature points in the left and right images are then matched using a Hopfield-type optimization network. The performance index of the Hopfield network contains both local and global properties of the images. Parallel computing in stereo matching can be achieved by the proposed methodology.
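
    The LoG feature-detection stage can be sketched with a standard filter from SciPy, as below: feature points are taken where the filter response is strong. The Hopfield matching network itself is not reproduced; the threshold and scale are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_features(img, sigma=2.0, thresh=0.05):
    """Detect feature points as strong responses of a Laplacian-of-Gaussian
    filter (the detection stage described above; the Hopfield matching
    network is a separate step)."""
    resp = gaussian_laplace(img.astype(float), sigma=sigma)
    ys, xs = np.where(np.abs(resp) > thresh * np.abs(resp).max())
    return list(zip(xs, ys))

img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0          # a bright blob
print(len(log_features(img)))    # responses cluster around the blob
```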

  17. Stereo matching using Hebbian learning.

    PubMed

    Pajares, G; Cruz, J M; Lopez-Orozco, J A

    1999-01-01

    This paper presents an approach to the local stereo matching problem using edge segments as features with several attributes. We have verified that the differences in attributes for the true matches cluster in a cloud around a center. The correspondence is established on the basis of a minimum distance criterion, computing the Mahalanobis distance between the difference of the attributes for a current pair of features and the cluster center (similarity constraint). We introduce a learning strategy based on Hebbian learning to obtain the best cluster center. A comparative analysis with methods without learning and with other learning strategies is illustrated.
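
    The similarity constraint can be sketched as below: accept a candidate correspondence when the Mahalanobis distance between its attribute difference and the cluster center falls inside a gate. Here the center and covariance are estimated from synthetic true-match differences rather than learned with the paper's Hebbian rule.

```python
import numpy as np

# Attribute differences for known true matches (toy data); the paper
# learns the cluster centre with a Hebbian rule, here we just estimate it.
true_diffs = np.random.randn(200, 4) * 0.1 + np.array([0.0, 0.1, 0.0, 0.05])
center = true_diffs.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(true_diffs, rowvar=False))

def mahalanobis(d):
    v = d - center
    return float(np.sqrt(v @ cov_inv @ v))

def is_match(attr_left, attr_right, gate=3.0):
    """Accept a candidate pair if its attribute difference lies inside
    the cluster of true-match differences (similarity constraint)."""
    return mahalanobis(attr_left - attr_right) < gate

a = np.array([1.0, 2.0, 0.5, 0.3])
print(is_match(a, a - np.array([0.0, 0.1, 0.0, 0.05])))  # near center: True
print(is_match(a, a - np.array([2.0, 2.0, 2.0, 2.0])))   # outlier: False
```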

  18. Comparison of visual cortical activations induced by electro-acupuncture at vision and nonvision-related acupoints.

    PubMed

    Zhang, Yi; Liang, Jimin; Qin, Wei; Liu, Peng; von Deneen, Karen M; Chen, Peng; Bai, Lijun; Tian, Jie; Liu, Yijun

    2009-07-10

    In the current study, we investigated whether or not stimulation at vision- and nonvision-related acupoints was able to induce similar responses in the time domain, given that stimulation at different acupoints can produce similar spatial distributions. This phenomenon remains uncertain and contradictory. We introduced a novel experimental paradigm using a modified non-repeated event-related (NRER) design, and utilized independent component analysis (ICA) combined with seed-correlated functional connectivity analysis to locate visual cortical activations and to study their temporal characteristics during electro-acupuncture (EAS) at the vision-related acupoint GB 37 and the nonvision-related acupoint KI 8. Results showed that strong activations were present in the visual cortical areas (BA 17/18/19) for both acupoints, but temporal correlation analysis indicated that they were modulated in opposite directions during the resting state after acupuncture. Our results revealed that acupuncture at vision- and nonvision-related acupoints can induce similar activations in spatial distribution but different modulation effects temporally.

  19. Stereo Pair, Patagonia, Argentina

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This view of northern Patagonia, at Los Menucos, Argentina shows remnants of relatively young volcanoes built upon an eroded plain of much older and contorted volcanic, granitic, and sedimentary rocks. The large purple, brown, and green 'butterfly' pattern is a single volcano that has been deeply eroded. Large holes on the volcano's flanks indicate that they may have collapsed soon after eruption, as fluid molten rock drained out from under its cooled and solidified outer shell. At the upper left, a more recent eruption occurred and produced a small volcanic cone and a long stream of lava, which flowed down a gully. At the top of the image, volcanic intrusions permeated the older rocks resulting in a chain of small dark volcanic peaks. At the top center of the image, two halves of a tan ellipse pattern are offset from each other. This feature is an old igneous intrusion that has been split by a right-lateral fault. The apparent offset is about 6.6 kilometers (4 miles). Color, tonal, and topographic discontinuities reveal the fault trace as it extends across the image to the lower left. However, young unbroken basalt flows show that the fault has not been active recently.

    This cross-eyed stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with an enhanced Landsat 7 satellite color image. The topography data are used to create two differing perspectives of a single image, one perspective for each eye. In doing so, each point in the image is shifted slightly, depending on its elevation. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions.

    Landsat satellites have provided visible light and infrared images of the Earth continuously since 1972. SRTM topographic data match the 30-meter (99-foot) spatial resolution of most Landsat images and provide a valuable complement for studying the historic and growing Landsat data archive

  20. Stereo Image Reversible Watermarking for Authentication

    NASA Astrophysics Data System (ADS)

    Luo, Ting; Jiang, Gangyi; Peng, Zongju; Shao, Feng; Yu, Mei

    2015-03-01

    To authenticate stereo images, this paper proposes a stereo image reversible watermarking method for three-dimensional television systems. The proposed method can recover the original stereo image without any distortion if the stereo image has not been tampered with; otherwise, it can authenticate the stereo image with the capability of locating the tampering. Difference expansion is used to embed authentication bits reversibly into each block for tamper localization. Pixel pairs that do not result in overflow/underflow after expansion are chosen as candidate embedding locations. In order to avoid transmitting a large embedding-location map, the same embedding locations are decided in advance for each block, which adaptively determines the block size. Moreover, in order to resist collage attack, the authentication bits generated from each block are embedded into its mapping blocks, so that the authentication of each block is not independent. Experimental results demonstrate that the stereo image can be totally recovered if the watermarked stereo image is not modified, and they prove the suitability and stability of the proposed method for tamper localization using the relationships between stereo images.
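
    Difference expansion itself is a standard reversible embedding and can be sketched on a single pixel pair, as below: the difference is doubled and the authentication bit appended, and both the bit and the original pair are recovered exactly; the overflow/underflow check is what drives the choice of embedding locations.

```python
def de_embed(x, y, bit):
    """Difference expansion (Tian): hide one bit in a pixel pair, reversibly.
    The caller must pre-check that the expanded pair stays inside [0, 255]."""
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit                      # expand difference, append bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pair exactly."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1
    return (l + (h + 1) // 2, l - h // 2), bit

def expandable(x, y, bit):
    """Overflow/underflow check used to choose embedding locations."""
    x2, y2 = de_embed(x, y, bit)
    return 0 <= x2 <= 255 and 0 <= y2 <= 255

x2, y2 = de_embed(100, 98, 1)
orig, bit = de_extract(x2, y2)
print(x2, y2, orig, bit)   # 102 97 (100, 98) 1
```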

  1. New Views of the Sun: STEREO and Hinode

    NASA Astrophysics Data System (ADS)

    Luhmann, Janet G.; Tsuneta, Saku; Bougeret, J.-L.; Galvin, Antoinette; Howard, R. A.; Kaiser, Michael; Thompson, W. T.

    The twin-spacecraft STEREO mission has now been in orbit for 1.5 years. Although the main scientific objective of STEREO is the origin and evolution of Coronal Mass Ejections (CMEs) and their heliospheric consequences, the slow decline of the previous solar cycle has provided an extraordinary opportunity for close scrutiny of the quiet corona and solar wind, including suprathermal and energetic particles. However, STEREO has also captured a few late cycle CMEs that have given us a taste of the observations and analyses to come. Images from the SECCHI investigation afforded by STEREO's separated perspectives and the heliospheric imager have already allowed us to visibly witness the origins of the slow solar wind and the Sun-to-1 AU transit of ICMEs. The SWAVES investigation has monitored the transit of interplanetary shocks in 3D while the PLASTIC and IMPACT in-situ measurements provide the 'ground truth' of what is remotely sensed. New prospects for space weather forecasting have been demonstrated with the STEREO Behind spacecraft, a successful proof-of-concept test for future space weather mission designs. The data sets for the STEREO investigations are openly available through a STEREO Science Center web interface that also provides supporting information for potential users from all communities. Comet observers and astronomers, interplanetary dust researchers and planetary scientists have already made use of this resource. The potential for detailed Sun-to-Earth CME/ICME interpretations with sophisticated modeling efforts is an upcoming STEREO-Hinode partnering activity whose success we can only anticipate at this time. Since its launch in September 2006, Hinode has sent back solar images of unprecedented clarity every day. The primary purpose of this mission is a systems approach to understanding the generation, transport and ultimate dissipation of solar magnetic fields with a well-coordinated set of advanced telescopes. Hinode is equipped with three

  2. STEREO Spies McNaught

    NASA Technical Reports Server (NTRS)

    2007-01-01

    An instrument on one of the two new STEREO spacecraft captured an unprecedented view of the brightest comet of the last 40 years. Positioned out in space ahead of the Earth as it orbits the Sun, it had a ringside seat on the very brilliant Comet McNaught. The SECCHI/HI-1A instrument on the NASA STEREO-A (Ahead) spacecraft took the frames for this spectacular video during the period of January 11-18, 2007. (The still shows the comet on January 17.) The full field of view of the HI instrument (a wide-angle sky imager) is centered at about 14 degrees from Sun's center and is 20 degrees wide. The comet tail is approximately 7 degrees in length and shows multiple rays. The image shows the comet tail in spectacular detail, especially once the bright comet head left the field of view and stopped saturating the images. These images are very likely the most detailed images ever taken of a comet while it is very close (0.17 Astronomical Units, which is even closer than Mercury) to the Sun. It has been described by one experienced comet scientist as 'one of, if not the most, beautiful uninterrupted sequence of images of a comet ever made.' Also visible in these movies are Venus (bright object left of center at the bottom) and Mercury (appears from the right later in the sequence). Their brightness even creates saturation streaks on the very sensitive imager.

  3. Stereo Correspondence Using Moment Invariants

    NASA Astrophysics Data System (ADS)

    Premaratne, Prashan; Safaei, Farzad

    Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAV) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and small land-based vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information through pairs of stereo images, can still be computationally expensive and unreliable, mainly because some of the algorithms used in successful stereovision solutions have computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this will be significant for use in UAVs and in small robotic vehicles.
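
    A hedged sketch of region matching by moment invariants, assuming OpenCV is available and that candidate patches along the epipolar line have been extracted upstream (patch generation and acceptance thresholds are application-specific):

    ```python
    import cv2
    import numpy as np

    def hu_signature(patch):
        """Log-scaled Hu moment invariants: a translation-, scale- and
        rotation-invariant descriptor of a grayscale patch."""
        hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).ravel()
        return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def best_match(left_patch, right_patches):
        """Index of the candidate whose Hu signature is closest."""
        ref = hu_signature(left_patch)
        d = [np.linalg.norm(ref - hu_signature(p)) for p in right_patches]
        return int(np.argmin(d))
    ```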

  4. Low vision and surfing.

    PubMed

    Owen, J; Herse, P

    1996-08-01

    Low vision rehabilitation often concentrates on vocational and living skills training. Nonetheless, the motivation for improving reading or travel skills may be to pursue some enjoyable recreational activity. A case report of a telescopic aid for surfing is presented, emphasizing the importance of recreation in low vision rehabilitation. PMID:8869988

  5. Active vision system for planning and programming of industrial robots in one-of-a-kind manufacturing

    NASA Astrophysics Data System (ADS)

    Berger, Ulrich; Schmidt, Achim

    1995-10-01

    The aspects of automation technology in industrial one-of-a-kind manufacturing are discussed. An approach to improve the quality and cost relation is developed and an overview of a 3D-vision-supported automation system is given. This system is based on an active vision sensor for 3D-geometry feedback. Its measurement principle, the coded light approach, is explained. The experimental environment for the technical validation of the automation approach is demonstrated, where robot-based processes (assembly, arc welding and flame cutting) are graphically simulated and off-line programmed. A typical process sequence for automated one-of-a-kind manufacturing is described. The results of this research development are applied to a project on the automated disassembly of car parts for recycling using industrial robots.
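
    A small sketch of how a coded-light (Gray-code) sensor typically recovers projector coordinates; the pattern count, per-pixel thresholding, and the final triangulation step are simplifying assumptions, not details from the paper:

    ```python
    import numpy as np

    def decode_gray_code(captures, white, black):
        """Recover a projector column index per camera pixel from N
        Gray-code captures (MSB first). white/black are full-on/full-off
        reference images used for per-pixel thresholding."""
        thresh = (white.astype(np.float32) + black) / 2.0
        bits = [img > thresh for img in captures]
        binary = [bits[0]]                 # Gray -> binary conversion:
        for g in bits[1:]:                 # b[i] = b[i-1] XOR g[i]
            binary.append(np.logical_xor(binary[-1], g))
        code = np.zeros(white.shape, dtype=np.int32)
        for b in binary:                   # MSB-first accumulation
            code = (code << 1) | b.astype(np.int32)
        return code  # depth then follows by camera-projector triangulation
    ```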

  6. Vision Based Localization in Urban Environments

    NASA Technical Reports Server (NTRS)

    McHenry, Michael; Cheng, Yang; Matthies, Larry

    2005-01-01

    As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system, which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter-based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and includes the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
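
    A compact sketch of one cycle of such a particle filter; the `likelihood` argument stands in for the map-based measurement model, and all names and noise levels are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pf_step(particles, weights, odom, ranges, likelihood):
        """Predict/update/resample for planar localization.
        particles: (N, 3) array of (x, y, heading) hypotheses."""
        # Predict: apply odometry plus motion noise to every hypothesis.
        particles = particles + odom + rng.normal(
            0.0, [0.05, 0.05, 0.01], particles.shape)
        # Update: reweight by how well the building-feature ranges fit
        # the map from each hypothesized pose (user-supplied model).
        weights = weights * likelihood(particles, ranges)
        weights = weights / weights.sum()
        # Resample only when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
            idx = rng.choice(len(weights), len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights
    ```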

  7. Lidar multi-range integrated Dewar assembly (IDA) for active-optical vision navigation sensor

    NASA Astrophysics Data System (ADS)

    Mayner, Philip; Clemet, Ed; Asbrock, Jim; Chen, Isabel; Getty, Jonathan; Malone, Neil; De Loo, John; Giroux, Mark

    2013-09-01

    A multi-range focal plane was developed and delivered by Raytheon Vision Systems for a docking system that was demonstrated on STS-134. This required state-of-the-art focal-plane and electronics synchronization to capture nanosecond-length laser pulses and determine ranges with an accuracy of better than 1 inch.

  8. GARGOYLE: An environment for real-time, context-sensitive active vision

    SciTech Connect

    Prokopowicz, P.N.; Swain, M.J.; Firby, R.J.; Kahn, R.E.

    1996-12-31

    Researchers in robot vision have access to several excellent image processing packages (e.g., Khoros, Vista, Susan, MIL, and X Vision to name only a few) as a base for any new vision software needed in most navigation and recognition tasks. Our work in autonomous robot control and human-robot interaction, however, has demanded a new level of run-time flexibility and performance: on-the-fly configuration of visual routines that exploit up-to-the-second context from the task, image, and environment. The result is Gargoyle: an extendible, on-board, real-time vision software package that allows a robot to configure, parameterize, and execute image-processing pipelines at run-time. Each operator in a pipeline works at a level of resolution and over regions of interest that are computed by upstream operators or set by the robot according to task constraints. Pipeline configurations and operator parameters can be stored as a library of visual methods appropriate for different sensing tasks and environmental conditions. Beyond this, a robot may reason about the current task and environmental constraints to construct novel visual routines that are too specialized to work under general conditions, but that are well-suited to the immediate environment and task. We use the RAP reactive plan-execution system to select and configure pre-compiled processing pipelines, and to modify them for specific constraints determined at run-time.
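
    A minimal sketch of the run-time pipeline idea (not Gargoyle's actual API): operators are plain callables composed on the fly, with a context dictionary carrying ROI and resolution decisions from stage to stage:

    ```python
    import numpy as np

    class Pipeline:
        """Chain of image operators selected and parameterized at run time."""
        def __init__(self, ops):
            self.ops = list(ops)       # e.g. chosen by a task-level planner
        def run(self, image, ctx):
            for op in self.ops:
                image, ctx = op(image, ctx)
            return image, ctx

    def crop_to_roi(image, ctx):
        r0, r1, c0, c1 = ctx.get("roi", (0, image.shape[0], 0, image.shape[1]))
        return image[r0:r1, c0:c1], ctx

    def downsample(image, ctx):
        k = ctx.get("stride", 2)       # coarser when the task tolerates it
        return image[::k, ::k], ctx

    # A "visual method" assembled for the current task and context:
    pipe = Pipeline([crop_to_roi, downsample])
    out, _ = pipe.run(np.zeros((480, 640)), {"roi": (100, 300, 200, 400)})
    ```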

  9. Small Boats in an Ocean of School Activities: Towards a European Vision on Education

    ERIC Educational Resources Information Center

    Villalba, Ernesto

    2008-01-01

    The paper discusses the concept of schools as "multi-purpose learning centres", proposed by the European Commission in the year 2000 as part of the Lisbon Strategy to improve competitiveness. This concept was arguably the "European vision" for school education and was meant to drive the modernization of school education. However, the concept has…

  10. Processing real-time stereo video for an autonomous robot using disparity maps and sensor fusion

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald W.; Hall, Ernest L.

    2004-10-01

    The Bearcat "Cub" Robot is an interactive, intelligent, Autonomous Guided Vehicle (AGV) designed to serve in unstructured environments. Recent advances in computer stereo vision algorithms that produce quality disparity maps, and the availability of low-cost, high-speed camera systems, have simplified many of the tasks associated with robot navigation and obstacle avoidance using stereo vision. Leveraging these benefits, this paper describes a novel method for autonomous navigation and obstacle avoidance currently being implemented on the UC Bearcat Robot. The core of this approach is the synthesis of multiple sources of real-time data including stereo image disparity maps, tilt sensor data, and LADAR data with standard contour, edge, color, and line detection methods to provide robust and intelligent obstacle avoidance. An algorithm is presented with Matlab code to process the disparity maps to rapidly produce obstacle size and location information in a simple format, and features cancellation of noise and correction for pitch and roll. The vision and control computers are clustered with the Parallel Virtual Machine (PVM) software. The significance of this work is in presenting the methods needed for real-time navigation and obstacle avoidance for intelligent autonomous robots.
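
    A hedged sketch of turning a disparity map into an obstacle mask under a flat-ground pinhole model; the paper's Matlab version additionally folds in the tilt-sensor pitch/roll correction, which is only noted in a comment here:

    ```python
    import numpy as np

    def obstacle_mask(disparity, f_px, baseline_m, cam_height_m,
                      min_obstacle_m=0.2, max_range_m=10.0):
        """Flag pixels whose 3D point sits high enough above the assumed
        ground plane. On a real robot, rotate the rays by the measured
        pitch/roll before thresholding heights."""
        h, w = disparity.shape
        valid = disparity > 0
        depth = np.where(valid,
                         f_px * baseline_m / np.maximum(disparity, 1e-6),
                         np.inf)
        rows = np.arange(h).reshape(-1, 1)
        y_cam = (rows - h / 2.0) * depth / f_px   # +y points down
        height = cam_height_m - y_cam             # above ground if positive
        return valid & (depth < max_range_m) & (height > min_obstacle_m)
    ```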

  11. Stereo Visualization and Map Comprehension

    NASA Astrophysics Data System (ADS)

    Rapp, D. N.; Culpepper, S.; Kirkby, K.; Morin, P.

    2004-12-01

    In this experiment, we assessed the use of stereo visualizations as effective tools for topographic map learning. In most Earth Science courses, students spend extended time learning how to read topographic maps, relying on the lines of the map as indicators of height and accompanying distance. These maps often necessitate extended training for students to acquire an understanding of what they represent, how they are to be used, and how they can be applied to solve problems. In fact, instructors often comment that students fail to adequately use such maps, instead relying on prior spatial knowledge or experiences which may be inappropriate for understanding topographic displays. We asked participants to study maps that provided 3-dimensional or 2-dimensional views, and then answer a battery of questions about features and processes associated with the maps. The results will be described with respect to the cognitive utility of visualizations as tools for map comprehension tasks.

  12. Color vision.

    PubMed

    Gegenfurtner, Karl R; Kiper, Daniel C

    2003-01-01

    Color vision starts with the absorption of light in the retinal cone photoreceptors, which transduce electromagnetic energy into electrical voltages. These voltages are transformed into action potentials by a complicated network of cells in the retina. The information is sent to the visual cortex via the lateral geniculate nucleus (LGN) in three separate color-opponent channels that have been characterized psychophysically, physiologically, and computationally. The properties of cells in the retina and LGN account for a surprisingly large body of psychophysical literature. This suggests that several fundamental computations involved in color perception occur at early levels of processing. In the cortex, information from the three retino-geniculate channels is combined to enable perception of a large variety of different hues. Furthermore, recent evidence suggests that color analysis and coding cannot be separated from the analysis and coding of other visual attributes such as form and motion. Though there are some brain areas that are more sensitive to color than others, color vision emerges through the combined activity of neurons in many different areas.

  13. Automatic harvesting of asparagus: an application of robot vision to agriculture

    NASA Astrophysics Data System (ADS)

    Grattoni, Paolo; Cumani, Aldo; Guiducci, Antonio; Pettiti, Giuseppe

    1994-02-01

    This work presents a system for the automatic selective harvesting of asparagus in the open field, being developed in the framework of the Italian National Project on Robotics. It is composed of a mobile robot, equipped with a suitable manipulator, and driven by a stereo-vision module. In this paper we discuss in detail the problems related to the vision module.

  14. A parallel stereo reconstruction algorithm with applications in entomology (APSRA)

    NASA Astrophysics Data System (ADS)

    Bhasin, Rajesh; Jang, Won Jun; Hart, John C.

    2012-03-01

    We propose a fast parallel algorithm for the reconstruction of 3-dimensional point clouds of insects from binocular stereo image pairs, using a hierarchical approach for disparity estimation. Entomologists study various features of insects to classify them, build their distribution maps, and discover genetic links between specimens, among various other essential tasks. This information is important to the pesticide and pharmaceutical industries, among others. When considering the large collections of insects entomologists analyze, it becomes difficult to physically handle the entire collection and share the data with researchers across the world. With the method presented in our work, entomologists can create an image database for their collections and use the 3D models for studying the shape and structure of the insects, thus making the collections easier to maintain and share. Initial feedback shows that the reconstructed 3D models preserve the shape and size of the specimens. We further optimize our results to incorporate multiview stereo, which produces better overall structure of the insects. Our main contribution is applying stereoscopic vision techniques to entomology to solve the problems faced by entomologists.
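
    A simplified serial coarse-to-fine block-matching sketch in the spirit of the hierarchical disparity estimation described (the +/-2 refinement window and SAD cost are illustrative choices):

    ```python
    import numpy as np

    def sad_disparity(L, R, d_range, patch=5):
        """Winner-take-all SAD matching; d_range maps (y, x) to the
        candidate disparities, so one routine serves both levels."""
        h, w = L.shape
        r = patch // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(r, h - r):
            for x in range(r, w - r):
                ref = L[y-r:y+r+1, x-r:x+r+1].astype(np.float32)
                best, best_d = np.inf, 0
                for d in d_range(y, x):
                    if 0 <= d <= x - r:
                        cost = np.abs(ref - R[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                        if cost < best:
                            best, best_d = cost, d
                disp[y, x] = best_d
        return disp

    def hierarchical_disparity(L, R, max_d):
        """Search the full (halved) range at the coarse level, then refine
        within +/-2 of the upsampled estimate at full resolution."""
        coarse = sad_disparity(L[::2, ::2], R[::2, ::2],
                               lambda y, x: range(max_d // 2 + 1))
        up = 2 * np.kron(coarse, np.ones((2, 2), dtype=np.int32))
        up = up[:L.shape[0], :L.shape[1]]
        return sad_disparity(L, R, lambda y, x: range(max(0, up[y, x] - 2),
                                                      up[y, x] + 3))
    ```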

  15. Phoenix Lander on Mars (Stereo)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA's Phoenix Mars Lander monitors the atmosphere overhead and reaches out to the soil below in this stereo illustration of the spacecraft fully deployed on the surface of Mars. The image appears three-dimensional when viewed through red-green stereo glasses.

    Phoenix has been assembled and tested for launch in August 2007 from Cape Canaveral Air Force Station, Fla., and for landing in May or June 2008 on an arctic plain of far-northern Mars. The mission responds to evidence returned from NASA's Mars Odyssey orbiter in 2002 indicating that most high-latitude areas on Mars have frozen water mixed with soil within arm's reach of the surface.

    Phoenix will use a robotic arm to dig down to the expected icy layer. It will analyze scooped-up samples of the soil and ice for factors that will help scientists evaluate whether the subsurface environment at the site ever was, or may still be, a favorable habitat for microbial life. The instruments on Phoenix will also gather information to advance understanding about the history of the water in the icy layer. A weather station on the lander will conduct the first study of Martian arctic weather from ground level.

    The vertical green line in this illustration shows how the weather station on Phoenix will use a laser beam from a lidar instrument to monitor dust and clouds in the atmosphere. The dark 'wings' to either side of the lander's main body are solar panels for providing electric power.

    The Phoenix mission is led by Principal Investigator Peter H. Smith of the University of Arizona, Tucson, with project management at NASA's Jet Propulsion Laboratory and development partnership with Lockheed Martin Space Systems, Denver. International contributions for Phoenix are provided by the Canadian Space Agency, the University of Neuchatel (Switzerland), the University of Copenhagen (Denmark), the Max Planck Institute (Germany) and the Finnish Meteorological Institute. JPL is a division of the California Institute of Technology.

  16. Video stereo-laparoscopy system

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Hu, Jiasheng; Jiang, Huilin

    2006-01-01

    Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures, and MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system has played a major role in the evolving field of MIS: the image must have good resolution and large magnification and, in particular, must provide a depth cue while remaining flicker-free and of suitable brightness. A video stereo-laparoscopy system can meet these demands. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth space 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and the time-division stereo-display system are described briefly. The system has a focusing imaging lens that forms the image on the CCD chip; the optical signal is converted to a video signal, digitized by the A/D stage of the image-processing system, and the polarized images are displayed on the monitor screen through liquid crystal shutters. Wearing polarized glasses, doctors can watch a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by doctors. Compared with the traditional 2D video laparoscopy system, it has merits such as reducing the time of surgery, the complications of surgery, and the training time.

  17. HRSC: High resolution stereo camera

    USGS Publications Warehouse

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  18. Random telegraph signal transients in active logarithmic continuous-time vision sensors

    NASA Astrophysics Data System (ADS)

    Pardo, Fernando; Boluda, Jose A.; Vegara, Francisco

    2015-12-01

    Random Telegraph Signal (RTS) is a well-known source of noise in current submicron circuits. Its static effects have been widely studied, and its noise levels are on the order of other noise sources, especially for moderate submicron transistors. Nevertheless, RTS events may produce transients many times larger than the RTS itself, and this problem seems not to have been addressed yet. In this article we present results on the transients produced by RTS events in a smart vision sensor. RTS transients in closed-loop amplifiers can be many times greater than the static RTS. The duration of the RTS transient may last for several milliseconds, and can be considered almost stationary under some conditions. The RTS transient effect has been modelled, and its impact on event-based vision sensors has been studied. This analysis may also be useful for many circuits based on closed-loop amplifiers. Some hints on how to reduce RTS transient effects on these sensors are also given, which may help with the design of current and future event-based vision sensors.
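
    A toy rendering of the effect described, assuming each RTS toggle injects a step that the closed-loop amplifier converts into a decaying transient; the gain, time constant, and toggle times below are purely illustrative:

    ```python
    import numpy as np

    def rts_transients(t, toggle_times, step_v, gain, tau):
        """Sum of decaying transients gain*step_v*exp(-(t-t0)/tau), with
        alternating sign because RTS toggles between two levels."""
        y = np.zeros_like(t)
        sign = 1.0
        for t0 in toggle_times:
            y += sign * gain * step_v * np.exp(-(t - t0) / tau) * (t >= t0)
            sign = -sign
        return y

    t = np.linspace(0.0, 0.1, 10_000)   # 100 ms window
    y = rts_transients(t, [0.01, 0.04, 0.07],
                       step_v=1e-3, gain=50.0, tau=3e-3)
    ```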

  19. Recovery of stereo acuity in adults with amblyopia

    PubMed Central

    Astle, Andrew T; McGraw, Paul V; Webb, Ben S

    2011-01-01

    Disruption of visual input to one eye during early development leads to marked functional impairments of vision, commonly referred to as amblyopia. A major consequence of amblyopia is the inability to encode binocular disparity information leading to impaired depth perception or stereo acuity. If amblyopia is treated early in life (before 4 years of age), then recovery of normal stereoscopic function is possible. Treatment is rarely undertaken later in life (adulthood) because declining levels of neural plasticity are thought to limit the effectiveness of standard treatments. Here, the authors show that a learning-based therapy, designed to exploit experience-dependent plastic mechanisms, can be used to recover stereoscopic visual function in adults with amblyopia. These cases challenge the long-held dogma that the critical period for visual development and the window for treating amblyopia are one and the same. PMID:22707543

  20. Vision and Motion Pictures.

    ERIC Educational Resources Information Center

    Grambo, Gregory

    1998-01-01

    Presents activities on persistence of vision that involve students in a hands-on approach to the study of early methods of creating motion pictures. Students construct flip books, a Zoetrope, and an early movie machine. (DDR)

  1. Living with vision loss

    MedlinePlus

    Diabetes - vision loss; Retinopathy - vision loss; Low vision; Blindness - vision loss ... Low vision is a visual disability. Wearing regular glasses or contacts does not help. People with low vision have ...

  2. Aug 1 Solar Event From STEREO Ahead

    NASA Video Gallery

    Aug 1 CME - The Sun from the SECCHI instruments on the STEREO spacecraft leading the Earth in its orbit around the Sun. This video was taken in the He II 304A channels, which shows the extreme ultr...

  3. Aug 1 Solar Event From STEREO Behind

    NASA Video Gallery

    Aug 1 CME - The Sun from the SECCHI instruments on the STEREO spacecraft trailing behind the Earth in its orbit around the Sun. This video was taken in the He II 304A channels, which shows the extr...

  4. Artificial stereo presentation of meteorological data fields

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Desjardins, M.; Negri, A. J.

    1981-01-01

    The innate capability to perceive three-dimensional stereo imagery has been exploited to present multidimensional meteorological data fields. Variations on an artificial stereo technique first discussed by Pichel et al. (1973) are used to display single and multispectral images in a vivid and easily assimilated manner. Examples of visible/infrared artificial stereo are given for Hurricane Allen and for severe thunderstorms on 10 April 1979. Three-dimensional output from a mesoscale model also is presented. The images may be viewed through the glasses inserted in the February 1981 issue of the Bulletin of the American Meteorological Society, with the red lens over the right eye. The images have been produced on the interactive Atmospheric and Oceanographic Information Processing System (AOIPS) at Goddard Space Flight Center. Stereo presentation is an important aid in understanding meteorological phenomena for operational weather forecasting, research case studies, and model simulations.

  5. Solar Coronal Cells as Seen by STEREO

    NASA Video Gallery

    The changes of a coronal cell region as solar rotation carries it across the solar disk as seen with NASA's STEREO-B spacecraft. The camera is fixed on the region (panning with it) and shows the pl...

  6. STEREO Witnesses Aug 1, 2010 Solar Event

    NASA Video Gallery

    These image sequences were taken by the twin STEREO spacecraft looking at the Sun from opposite sides. The bottom pair shows the Sun and its immediate surroundings. The top row shows events from th...

  7. STEREO as a "Planetary Hazards" Mission

    NASA Technical Reports Server (NTRS)

    Guhathakurta, M.; Thompson, B. J.

    2014-01-01

    NASA's twin STEREO probes, launched in 2006, have advanced the art and science of space weather forecasting more than any other spacecraft or solar observatory. By surrounding the Sun, they provide previously impossible early warnings of threats approaching Earth as they develop on the solar far side. They have also revealed the 3D shape and inner structure of CMEs: massive solar storms that can trigger geomagnetic storms when they collide with Earth. This improves the ability of forecasters to anticipate the timing and severity of such events. Moreover, the unique capability of STEREO to track CMEs in three dimensions allows forecasters to make predictions for other planets, giving rise to the possibility of interplanetary space weather forecasting too. STEREO is one of those rare missions for which "planetary hazards" refers to more than one world. The STEREO probes also hold promise for the study of comets and potentially hazardous asteroids.

  8. STEREO Observations of Solar Energetic Particles

    NASA Technical Reports Server (NTRS)

    vonRosenvinge, Tycho; Christian, Eric; Cohen, Christina; Leske, Richard; Mewaldt, Richard; Stone, Edward; Wiedenbeck, Mark

    2011-01-01

    We report on observations of Solar Energetic Particle (SEP) events as observed by instruments on the STEREO Ahead and Behind spacecraft and on the ACE spacecraft. We will show observations of an electron event observed by the STEREO Ahead spacecraft on June 12, 2010 located at W74 essentially simultaneously with electrons seen at STEREO Behind at E70. Some similar events observed by Helios were ascribed to fast electron propagation in longitude close to the sun. We will look for independent verification of this possibility. We will also show observations of what appears to be a single proton event with very similar time-history profiles at both of the STEREO spacecraft at a similar wide separation. This is unexpected. We will attempt to understand all of these events in terms of corresponding CME and radio burst observations.

  9. The most dangerous IEOs in STEREO

    NASA Astrophysics Data System (ADS)

    Fuentes, C.; Trilling, D.; Knight, M.

    2011-10-01

    IEOs (inner Earth objects or interior Earth objects) are potentially the most dangerous near-Earth small-body population. Their study is complicated by the fact that the population spends all of its time inside the orbit of the Earth, giving ground-based telescopes only a small window to observe them. We introduce STEREO (Solar TErrestrial RElations Observatory) and its 5 years of archival data as our best chance of studying the IEO population and discovering possible impactor threats to Earth. We show that in our current search for IEOs in STEREO data we are capable of detecting and characterizing the orbits of 10-100 potentially dangerous IEOs. The number of expected detections by STEREO is based on the current number of known IEOs, which is heavily biased by the 8 objects discovered so far [4]. STEREO is sensitive to IEOs that are not visible from the Earth and hence samples a part of the IEO population that has not been discovered yet.

  10. Preparing WIND for the STEREO Mission

    NASA Astrophysics Data System (ADS)

    Schroeder, P.; Ogilve, K.; Szabo, A.; Lin, R.; Luhmann, J.

    2006-05-01

    The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application program interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.

  11. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
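
    A software analogue of the windowed pixel-averaging readout the patent describes (the invention itself is an on-chip circuit; this only illustrates the data movement within one tracking window):

    ```python
    import numpy as np

    def window_average(frame, r0, c0, size, block):
        """Average block x block pixel groups inside one multi-resolution
        window, trading resolution for readout bandwidth."""
        win = frame[r0:r0 + size, c0:c0 + size].astype(np.float32)
        n = size // block
        win = win[:n * block, :n * block]
        return win.reshape(n, block, n, block).mean(axis=(1, 3))
    ```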

  12. Transform coding of stereo image residuals.

    PubMed

    Moellenhoff, M S; Maier, M W

    1998-01-01

    Stereo image compression is of growing interest because of new display technologies and the needs of telepresence systems. Compared to monoscopic image compression, stereo image compression has received much less attention. A variety of algorithms have appeared in the literature that make use of the cross-view redundancy in the stereo pair. Many of these use the framework of disparity-compensated residual coding, but concentrate on the disparity compensation process rather than the post-compensation coding process. This paper studies specialized coding methods for the residual image produced by disparity compensation. The algorithms make use of theoretically expected and experimentally observed characteristics of the disparity-compensated stereo residual to select transforms and quantization methods. Performance is evaluated on mean squared error (MSE) and a stereo-unique metric based on image registration. Exploiting the directional characteristics in a discrete cosine transform (DCT) framework provides its best performance below 0.75 b/pixel for 8-b gray-scale imagery and below 2 b/pixel for 24-b color imagery. In the wavelet algorithm, roughly a 50% reduction in bit rate is possible by encoding only the vertical channel, where much of the stereo information is contained. The proposed algorithms do not incur substantial computational burden beyond that needed for any disparity-compensated residual algorithm. PMID:18276294
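
    A minimal sketch of the post-compensation coding stage: 2D DCT and uniform quantization of one residual block using SciPy's DCT routines; entropy coding and the disparity compensation itself are assumed to happen elsewhere:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def code_residual_block(residual, q=16):
        """Transform-code one block of a disparity-compensated residual:
        orthonormal 2D DCT, uniform quantization, decoder-side inverse."""
        coeffs = dctn(residual, norm="ortho")
        quant = np.round(coeffs / q).astype(np.int32)  # entropy-coded in practice
        recon = idctn(quant * q, norm="ortho")
        return quant, recon

    block = np.random.default_rng(0).normal(0.0, 10.0, (8, 8))
    quant, recon = code_residual_block(block)
    print(float(np.mean((block - recon) ** 2)))  # distortion at this step size
    ```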

  13. The Hyperspectral Stereo Camera Project

    NASA Astrophysics Data System (ADS)

    Griffiths, A. D.; Coates, A. J.

    2006-12-01

    The MSSL Hyperspectral Stereo Camera (HSC) is developed from Beagle2 stereo camera heritage. Replacing the filter wheels with liquid crystal tuneable filters (LCTF) turns each eye into a compact hyperspectral imager. Hyperspectral imaging is defined here as acquiring 10s-100s of images in 10-20 nm spectral bands. Combined together these bands form an image 'cube' (with wavelength as the third dimension) allowing a detailed spectrum to be extracted at any pixel position. A LCTF is conceptually similar to the Fabry-Perot tuneable filter design but instead of physical separation, the variable refractive index of the liquid crystal etalons is used to define the wavelength of interest. For 10 nm bandwidths, LCTFs are available covering the 400-720 nm and 650-1100 nm ranges. The resulting benefits include reduced imager mechanical complexity, no limitation on the number of filter wavelengths available and the ability to change the wavelengths of interest in response to new findings as the mission proceeds. LCTFs are currently commercially available from two US companies - Scientific Solutions Inc. and Cambridge Research Inc. (CRI). CRI distributes the 'Varispec' LCTFs used in the HSC. Currently, hyperspectral imagers in Earth orbit can prospect for minerals, detect camouflaged military equipment and determine the species and state of health of crops. Therefore, we believe this instrument shows great promise for a wide range of investigations in the planetary science domain (below). MSSL will integrate and test at representative Martian temperatures the HSC development model (to determine the power requirements to prevent the liquid crystals freezing). Additionally, a full radiometric calibration is required to determine the HSC sensitivity. The second phase of the project is to demonstrate (in a ground-based lab) the benefit of much higher spectral resolution to the following Martian scientific investigations: - Determination of the mineralogy of rocks and soil - Detection of

  14. Impact on stereo-acuity of two presbyopia correction approaches: monovision and small aperture inlay

    PubMed Central

    Fernández, Enrique J.; Schwarz, Christina; Prieto, Pedro M.; Manzanera, Silvestre; Artal, Pablo

    2013-01-01

    Some of the different currently applied approaches that correct presbyopia may reduce stereovision. In this work, stereo-acuity was measured for two methods: (1) monovision and (2) small aperture inlay in one eye. When performing the experiment, a prototype of a binocular adaptive optics vision analyzer was employed. The system allowed simultaneous measurement and manipulation of the optics in both eyes of a subject. The apparatus incorporated two programmable spatial light modulators: one phase-only device using liquid crystal on silicon technology for wavefront manipulation and one intensity modulator for controlling the exit pupils. The prototype was also equipped with a stimulus generator for creating retinal disparity based on two micro-displays. The three-needle test was programmed for characterizing stereo-acuity. Subjects underwent a two-alternative forced-choice test. The following cases were tested for the stimulus placed at distance: (a) natural vision; (b) 1.5 D monovision; (c) 0.75 D monovision; (d) natural vision and small pupil; (e) 0.75 D monovision and small pupil. In all cases the standard pupil diameter was 4 mm and the small pupil diameter was 1.6 mm. The use of a small aperture significantly reduced the negative impact of monovision on stereopsis. The results of the experiment suggest that combining micro-monovision with a small aperture, which is currently being implemented as a corneal inlay, can yield values of stereoacuity close to those attained under normal binocular vision. PMID:23761846

  15. A stereo matching model observer for stereoscopic viewing of 3D medical images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Muralidlhar, Gautam S.

    2014-03-01

    Stereoscopic viewing of 3D medical imaging data has the potential to increase the detection of abnormalities. We present a new stereo model observer inspired by the characteristics of stereopsis in human vision. Given a stereo pair of images of an object (i.e., left and right images separated by a small displacement), the model observer first finds the corresponding points between the two views, and then fuses them together to create a 2D cyclopean view. Assuming that the cyclopean view has extracted most of the 3D information presented in the stereo pair, a channelized Hotelling observer (CHO) can be utilized to make decisions. We conduct a simulation study that attempts to mimic the detection of breast lesions on stereoscopic viewing of breast tomosynthesis projection images. We render voxel datasets that contain random 3D power-law noise to model normal breast tissues with various breast densities. 3D Gaussian signal is added to some of the datasets to model the presence of a breast lesion. By changing the separation angle between the two views, multiple stereo pairs of projection images are generated for each voxel dataset. The performance of the model is evaluated in terms of the accuracy of binary decisions on the presence of the simulated lesions.
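
    A compact channelized Hotelling observer for the decision step, assuming the cyclopean images have been flattened into vectors and a channel matrix (e.g., Gabor or Laguerre-Gauss profiles) has been built separately:

    ```python
    import numpy as np

    def cho_snr(signal_imgs, noise_imgs, channels):
        """CHO detectability. signal_imgs/noise_imgs: (n, p) flattened
        images for the two classes; channels: (p, c) channel matrix."""
        vs = signal_imgs @ channels          # channel outputs, signal-present
        vn = noise_imgs @ channels           # channel outputs, signal-absent
        S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        S += 1e-9 * np.eye(S.shape[0])       # ridge for numerical safety
        w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))  # Hotelling template
        ts, tn = vs @ w, vn @ w              # scalar decision variables
        return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
    ```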

  16. Non-Linearity Analysis of Depth and Angular Indexes for Optimal Stereo SLAM

    PubMed Central

    Bergasa, Luis M.; Alcantarilla, Pablo F.; Schleicher, David

    2010-01-01

    In this article, we present a real-time 6DoF egomotion estimation system for indoor environments using a wide-angle stereo camera as the only sensor. The stereo camera is carried in hand by a person walking at normal walking speeds 3–5 km/h. We present the basis for a vision-based system that would assist the navigation of the visually impaired by either providing information about their current position and orientation or guiding them to their destination through different sensing modalities. Our sensor combines two different types of feature parametrization: inverse depth and 3D in order to provide orientation and depth information at the same time. Natural landmarks are extracted from the image and are stored as 3D or inverse depth points, depending on a depth threshold. This depth threshold is used for switching between both parametrizations and it is computed by means of a non-linearity analysis of the stereo sensor. Main steps of our system approach are presented as well as an analysis about the optimal way to calculate the depth threshold. At the moment each landmark is initialized, the normal of the patch surface is computed using the information of the stereo pair. In order to improve long-term tracking, a patch warping is done considering the normal vector information. Some experimental results under indoor environments and conclusions are presented. PMID:22319348

  17. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other "fused" algorithms in terms of precision. PMID:26308003
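
    A crude stand-in for the fusion rule (not the paper's pseudo-two-layer model): trust stereo where local texture is strong and the depth sensor where it is weak, falling back to whichever source is valid:

    ```python
    import numpy as np

    def fuse_depth(stereo, sensor, image, grad_thresh=8.0):
        """Select stereo depth in textured regions, sensor depth in flat
        ones; invalid estimates are assumed to be NaN/inf."""
        gy, gx = np.gradient(image.astype(np.float32))
        texture = np.hypot(gx, gy)
        fused = np.where(texture > grad_thresh, stereo, sensor)
        fused = np.where(np.isfinite(fused), fused,
                         np.where(np.isfinite(stereo), stereo, sensor))
        return fused
    ```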

  18. Hybrid Image-Plane/Stereo Manipulation

    NASA Technical Reports Server (NTRS)

    Baumgartner, Eric; Robinson, Matthew

    2004-01-01

    Hybrid Image-Plane/Stereo (HIPS) manipulation is a method of processing image data, and of controlling a robotic manipulator arm in response to the data, that enables the manipulator arm to place an end-effector (an instrument or tool) precisely with respect to a target. Unlike other stereoscopic machine-vision-based methods of controlling robots, this method is robust in the face of calibration errors and changes in calibration during operation. In this method, a stereoscopic pair of cameras on the robot first acquires images of the manipulator at a set of predefined poses. The image data are processed to obtain image-plane coordinates of known visible features of the end-effector. Next, there is computed an initial calibration in the form of a mapping between (1) the image-plane coordinates and (2) the nominal three-dimensional coordinates of the noted end-effector features in a reference frame fixed to the main robot body at the base of the manipulator. The nominal three-dimensional coordinates are obtained by use of the nominal forward kinematics of the manipulator arm; that is, they are calculated by use of the currently measured manipulator joint angles and previously measured lengths of manipulator arm segments, under the assumption that the arm segments are rigid, that the arm lengths are constant, and that there is no backlash. It is understood from the outset that these nominal three-dimensional coordinates are likely to contain possibly significant calibration errors, but the effects of the errors are progressively reduced, as described next. As the end-effector is moved toward the target, the calibration is updated repeatedly by use of data from newly acquired images of the end-effector and of the corresponding nominal coordinates in the manipulator reference frame. By use of the updated calibration, the coordinates of the target are computed in manipulator-reference-frame coordinates and then used to compute the necessary manipulator joint angles to position the end-effector.
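
    A hedged sketch of the kind of camera-model refit such a method can use: a linear DLT estimate of a 3x4 projection matrix from end-effector feature correspondences, re-run as new images arrive (this is a generic formulation, not the specific HIPS update law):

    ```python
    import numpy as np

    def fit_camera_matrix(pts3d, pts2d):
        """DLT fit of a 3x4 projection matrix from >= 6 correspondences
        between arm-frame 3D points and image-plane 2D points."""
        A = []
        for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
        return vt[-1].reshape(3, 4)    # right null vector, up to scale

    def project(P, X):
        """Apply the fitted mapping to one 3D point."""
        x = P @ np.append(np.asarray(X, dtype=float), 1.0)
        return x[:2] / x[2]
    ```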

  19. Segmentation of stereo terrain images

    NASA Astrophysics Data System (ADS)

    George, Debra A.; Privitera, Claudio M.; Blackmon, Theodore T.; Zbinden, Eric; Stark, Lawrence W.

    2000-06-01

    We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated toward helping with an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. The automatic segmentation was first explored with two different segmentation methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, SAB. Based on this metric, the combined automatic segmentation did fairly well in agreeing with the manual segmentation. This was a demonstration of a positive step towards automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
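
    A simple overlap-based score offered as a stand-in for comparing two segmentations (the paper's SAB metric is defined differently; this is only meant to show the shape of such a comparison):

    ```python
    import numpy as np

    def label_overlap_similarity(seg_a, seg_b):
        """For each region of A, take its best Jaccard overlap with any
        region of B; average over regions and symmetrize."""
        def directed(a, b):
            scores = []
            for la in np.unique(a):
                mask_a = a == la
                best = 0.0
                for lb in np.unique(b):
                    mask_b = b == lb
                    inter = np.logical_and(mask_a, mask_b).sum()
                    union = np.logical_or(mask_a, mask_b).sum()
                    best = max(best, inter / union)
                scores.append(best)
            return float(np.mean(scores))
        return 0.5 * (directed(seg_a, seg_b) + directed(seg_b, seg_a))
    ```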

  1. STEREO interplanetary shocks and foreshocks

    SciTech Connect

    Blanco-Cano, X.; Kajdic, P.; Aguilar-Rodriguez, E.; Russell, C. T.; Jian, L. K.; Luhmann, J. G.

    2013-06-13

    We use STEREO data to study shocks driven by stream interactions and the waves associated with them. During the years of the extended solar minimum 2007-2010, stream interaction shocks have Mach numbers between 1.1-3.8 and θBn ≈ 20-86°. We find a variety of waves, including whistlers and low frequency fluctuations. Upstream whistler waves may be generated at the shock, and upstream ultra low frequency (ULF) waves can be driven locally by ion instabilities. The downstream wave spectra can be formed by both locally generated perturbations and shock-transmitted waves. We find that many quasi-perpendicular shocks can be accompanied by ULF wave and ion foreshocks, which is in contrast to Earth's bow shock. Fluctuations downstream of quasi-parallel shocks tend to have larger amplitudes than waves downstream of quasi-perpendicular shocks. Proton foreshocks of shocks driven by stream interactions have extensions dr ≤ 0.05 AU. This is smaller than the foreshock extensions for ICME-driven shocks. The difference in foreshock extensions is related to the fact that ICME-driven shocks are formed closer to the Sun and therefore begin to accelerate particles very early in their existence, while stream interaction shocks form at ≈1 AU and have been producing suprathermal particles for a shorter time.

  2. Learning Visions.

    ERIC Educational Resources Information Center

    Phelps, Margaret S.; And Others

    This paper describes LEARNing Visions, a K-12 intervention program for at-risk youth in Jackson County, Tennessee, involving a partnership between the schools, local businesses, Tennessee Technological University, and Visions Five (a private company). Jackson County is characterized by an undereducated population, a high employment rate, and a low…

  3. Defining the V5/MT neuronal pool for perceptual decisions in a visual stereo-motion task

    PubMed Central

    2016-01-01

    In the primate visual cortex, neurons signal differences in the appearance of objects with high precision. However, not all activated neurons contribute directly to perception. We defined the perceptual pool in extrastriate visual area V5/MT for a stereo-motion task, based on trial-by-trial co-variation between perceptual decisions and neuronal firing (choice probability (CP)). Macaque monkeys were trained to discriminate the direction of rotation of a cylinder, using the binocular depth between the moving dots that form its front and rear surfaces. We manipulated the activity of single neurons trial-to-trial by introducing task-irrelevant stimulus changes: dot motion in cylinders was aligned with neuronal preference on only half the trials, so that neurons were strongly activated with high firing rates on some trials and considerably less activated on others. We show that single neurons maintain high neurometric sensitivity for binocular depth in the face of substantial changes in firing rate. CP was correlated with neurometric sensitivity, not level of activation. In contrast, for individual neurons, the correlation between perceptual choice and neuronal activity may be fundamentally different when responding to different stimulus versions. Therefore, neuronal pools supporting sensory discrimination must be structured flexibly and independently for each stimulus configuration to be discriminated. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269603

  4. Defining the V5/MT neuronal pool for perceptual decisions in a visual stereo-motion task.

    PubMed

    Krug, Kristine; Curnow, Tamara L; Parker, Andrew J

    2016-06-19

    In the primate visual cortex, neurons signal differences in the appearance of objects with high precision. However, not all activated neurons contribute directly to perception. We defined the perceptual pool in extrastriate visual area V5/MT for a stereo-motion task, based on trial-by-trial co-variation between perceptual decisions and neuronal firing (choice probability (CP)). Macaque monkeys were trained to discriminate the direction of rotation of a cylinder, using the binocular depth between the moving dots that form its front and rear surfaces. We manipulated the activity of single neurons trial-to-trial by introducing task-irrelevant stimulus changes: dot motion in cylinders was aligned with neuronal preference on only half the trials, so that neurons were strongly activated with high firing rates on some trials and considerably less activated on others. We show that single neurons maintain high neurometric sensitivity for binocular depth in the face of substantial changes in firing rate. CP was correlated with neurometric sensitivity, not level of activation. In contrast, for individual neurons, the correlation between perceptual choice and neuronal activity may be fundamentally different when responding to different stimulus versions. Therefore, neuronal pools supporting sensory discrimination must be structured flexibly and independently for each stimulus configuration to be discriminated. This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269603

  5. Real-time vision systems

    SciTech Connect

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  6. Stereo radar: reconstructing 3D data from 2D radar

    NASA Astrophysics Data System (ADS)

    Schmerwitz, Sven; Döhler, Hans-Ullrich; Peinecke, Niklas; Korn, Bernd

    2008-04-01

    To improve the situation awareness of an aircrew during poor visibility, different approaches have emerged during the past couple of years. Enhanced vision systems (EVS - based upon sensor images) are one of those. They improve the situation awareness of the crew, but at the same time introduce certain operational deficits. EVS present sensor data which might be difficult to interpret, especially if the sensor used is a radar sensor. In particular, an unresolved problem of fast scanning forward looking radar systems in the millimeter waveband is the inability to measure the elevation of a target. In order to circumvent this problem, effort was made to reconstruct the missing elevation from a series of images. This could be described as a "stereo radar" approach and is similar to reconstruction using photography (angle-angle images) from different viewpoints to rebuild depth information. Two radar images (range-angle images) with different bank angles can be used to reconstruct the elevation of targets. This paper presents the fundamental idea and the methods of the reconstruction. Furthermore, experiences with real data from EADS's "HiVision" MMCW radar are discussed. Two different approaches are investigated: first, a fusion of images with variable bank angles is calculated for different elevation layers, and picture processing reveals identical objects in these layers. Those objects are compared regarding contrast and dimension to extract their elevation. The second approach compares short fusion pairs from two different flights with different, nearly constant bank angles. Accumulating those pairs with different offsets delivers the exact elevation.

  7. Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor

    NASA Astrophysics Data System (ADS)

    Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu

    In recent years, advances in neonatal care have been strongly desired as the birth rate of low-birth-weight babies increases. Respiration in low-birth-weight babies is especially unstable because the central nervous system and respiratory function are immature, so these infants often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored continuously using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure respiratory rate and SpO2 (Saturation of Peripheral Oxygen); however, because a contact-type sensor might damage the newborn's skin, contact-based monitoring places a real burden on the infant. We therefore developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that enables non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal region during respiration. We carried out a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor thus enables a minimally invasive procedure.
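
    A minimal sketch of the waveform step described above, assuming a calibrated depth-map stream as the sensor output; the function names, the fixed region of interest and the frequency band are illustrative assumptions, not details from the paper:

        import numpy as np

        def respiratory_waveform(depth_frames, roi):
            # Mean chest/abdomen height per frame; roi = (r0, r1, c0, c1).
            r0, r1, c0, c1 = roi
            return np.array([f[r0:r1, c0:c1].mean() for f in depth_frames])

        def respiratory_rate_bpm(waveform, fps):
            # Dominant breathing frequency via the FFT peak, in breaths/min.
            w = waveform - waveform.mean()
            spectrum = np.abs(np.fft.rfft(w))
            freqs = np.fft.rfftfreq(len(w), d=1.0 / fps)
            band = (freqs > 0.3) & (freqs < 2.0)   # ~18-120 breaths/min
            return 60.0 * freqs[band][np.argmax(spectrum[band])]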

  8. Dynamic programming and graph algorithms in computer vision.

    PubMed

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
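
    A concrete illustration of the dynamic-programming technique the review covers, applied to the low-level stereo problem: the sketch below computes one scanline's disparities by exactly minimizing a per-pixel matching cost plus a smoothness penalty between neighbouring pixels. The absolute-difference cost and penalty weight are illustrative choices, not those of any specific paper reviewed:

        import numpy as np

        def dp_scanline_disparity(left_row, right_row, max_d=32, smooth=0.1):
            # Minimizes sum_x |L[x] - R[x - d(x)]| + smooth * |d(x) - d(x-1)|
            # over integer disparities d(x) in [0, max_d).
            n = len(left_row)
            d_range = np.arange(max_d)
            cost = np.full((n, max_d), 1e3)           # out-of-range = large cost
            for d in d_range:
                cost[d:, d] = np.abs(left_row[d:] - right_row[:n - d])
            total = cost.copy()                        # forward accumulation
            back = np.zeros((n, max_d), dtype=int)     # best predecessor per state
            pair = smooth * np.abs(d_range[:, None] - d_range[None, :])
            for x in range(1, n):
                trans = total[x - 1][None, :] + pair
                back[x] = np.argmin(trans, axis=1)
                total[x] += trans[d_range, back[x]]
            disp = np.empty(n, dtype=int)              # backtrack the optimal path
            disp[-1] = np.argmin(total[-1])
            for x in range(n - 1, 0, -1):
                disp[x - 1] = back[x, disp[x]]
            return disp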

  9. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
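
    The geometry behind convergence adjustment is simple: each camera toes in so that the two optical axes intersect at the viewed object. A sketch assuming the object distance is known (the kit described above instead infers it from scene content in an FPGA):

        import math

        def toe_in_angle(baseline_m, distance_m):
            # Toe-in per camera (radians) so both optical axes meet
            # at an object on the centreline at the given distance.
            return math.atan2(baseline_m / 2.0, distance_m)

        # e.g. a 6 cm baseline converging at 1.5 m: ~1.15 degrees per camera
        print(math.degrees(toe_in_angle(0.06, 1.5)))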

  10. 3-dimentional measurement of cable configuration being based on feature tracking motion stereo

    NASA Astrophysics Data System (ADS)

    Domae, Yukiyasu; Okuda, Haruhisa; Takauji, Hidenori; Kaneko, Shun'ichi; Tanaka, Takayuki

    2007-10-01

    We propose a novel three-dimensional measurement approach for flexible cables in factory automation applications, such as cable handling and connector insertion without collisions between robotic arms and cables. The approach is based on motion stereo with a vision sensor. Laser slit beams are projected onto the cables to create landmarks that make the stereo correspondence problem efficient to solve. These landmark points, together with interpolated points having rich texture, are tracked through an image sequence and reconstructed as the cable shape. For stable feature-point tracking, a robust texture matching method, Orientation Code Matching, and a tracking stability analysis are applied. In our experiments, arch-like cables were reconstructed with an uncertainty of 1.5% by this method.

  11. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks and pressures in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix via the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data show that the method improves estimation accuracy.
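
    A minimal sketch of steps (i)-(iii) using standard OpenCV calls, assuming calibrated intrinsics K_l, K_r and matched pixel coordinates; the paper's regional weighted normalization is not reproduced, and the helper name is hypothetical:

        import cv2

        def recalibrate_extrinsics(pts_l, pts_r, K_l, K_r):
            # (i) fundamental matrix from correspondences (plain RANSAC here)
            F, inliers = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC)
            # (ii) essential matrix from the fundamental matrix
            E = K_r.T @ F @ K_l
            # (iii) external parameters by decomposition: four (R, t)
            #       candidates remain, disambiguated by a cheirality check;
            #       t is recovered only up to scale, which a known baseline
            #       length would fix.
            R1, R2, t = cv2.decomposeEssentialMat(E)
            return (R1, R2, t), inliers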

  12. Dictionary learning for stereo image representation.

    PubMed

    Tošić, Ivana; Frossard, Pascal

    2011-04-01

    One of the major challenges in multi-view imaging is the definition of a representation that reveals the intrinsic geometry of the visual information. Sparse image representations with overcomplete geometric dictionaries offer a way to efficiently approximate these images, such that the multi-view geometric structure becomes explicit in the representation. However, the choice of a good dictionary in this case is far from obvious. We propose a new method for learning overcomplete dictionaries that are adapted to the joint representation of stereo images. We first formulate a sparse stereo image model where the multi-view correlation is described by local geometric transforms of dictionary elements (atoms) in two stereo views. A maximum-likelihood (ML) method for learning stereo dictionaries is then proposed, where a multi-view geometry constraint is included in the probabilistic model. The ML objective function is optimized using the expectation-maximization algorithm. We apply the learning algorithm to the case of omnidirectional images, where we learn scales of atoms in a parametric dictionary. The resulting dictionaries provide better performance in the joint representation of stereo omnidirectional images as well as improved multi-view feature matching. We finally discuss and demonstrate the benefits of dictionary learning for distributed scene representation and camera pose estimation.

  13. Precision calibration method for binocular vision measurement systems based on arbitrary translations and 3D-connection information

    NASA Astrophysics Data System (ADS)

    Yang, Jinghao; Jia, Zhenyuan; Liu, Wei; Fan, Chaonan; Xu, Pengtao; Wang, Fuji; Liu, Yang

    2016-10-01

    Binocular vision systems play an important role in computer vision, and high-precision system calibration is a necessary and indispensable process. In this paper, an improved calibration method for binocular stereo vision measurement systems based on arbitrary translations and 3D-connection information is proposed. First, a new method for calibrating the intrinsic parameters of binocular vision system based on two translations with an arbitrary angle difference is presented, which reduces the effect of the deviation of the motion actuator on calibration accuracy. This method is simpler and more accurate than existing active-vision calibration methods and can provide a better initial value for the determination of extrinsic parameters. Second, a 3D-connection calibration and optimization method is developed that links the information of the calibration target in different positions, further improving the accuracy of the system calibration. Calibration experiments show that the calibration error can be reduced to 0.09%, outperforming traditional methods for the experiments of this study.

  14. Self-adaptive Vision System

    NASA Astrophysics Data System (ADS)

    Stipancic, Tomislav; Jerbic, Bojan

    Light conditions are an important part of every vision application. This paper describes an active behavioral scheme of a particular active vision system. The scheme enables the system to adapt to current environmental conditions by constantly validating the amount of reflected light using a luminance meter and dynamically changing significant vision parameters. The purpose of the experiment was to determine the connections between light conditions and inner vision parameters. As part of the experiment, Response Surface Methodology (RSM) was used to predict values of vision parameters with respect to luminance input values; RSM approximates an unknown function for which only a few values have been computed. The main output validation parameter of the system is called Match Score, which indicates how well a found object matches the learned model. All obtained data are stored in a local database. By applying new parameters predicted by the RSM in a timely manner, the vision application works in a stable and robust manner.
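
    A one-dimensional illustration of the RSM step, with hypothetical luminance readings and vision-parameter values; a second-order polynomial fitted by ordinary least squares stands in for the paper's response surface:

        import numpy as np

        # Hypothetical calibration data: ambient luminance readings and the
        # vision-parameter value (e.g. exposure, ms) that maximized Match
        # Score at each reading.
        luminance = np.array([50.0, 120.0, 200.0, 320.0, 450.0, 600.0])
        exposure = np.array([8.0, 5.5, 4.0, 3.1, 2.6, 2.3])

        # Second-order response surface in one variable, as in RSM:
        # y = b0 + b1 * x + b2 * x**2, fitted by least squares.
        surface = np.poly1d(np.polyfit(luminance, exposure, deg=2))
        new_setting = surface(280.0)   # parameter to apply at 280 cd/m^2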

  15. Computational vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1981-01-01

    The range of fundamental computational principles underlying human vision that equally apply to artificial and natural systems is surveyed. There emerges from research a view of the structuring of vision systems as a sequence of levels of representation, with the initial levels being primarily iconic (edges, regions, gradients) and the highest symbolic (surfaces, objects, scenes). Intermediate levels are constrained by information made available by preceding levels and information required by subsequent levels. In particular, it appears that physical and three-dimensional surface characteristics provide a critical transition from iconic to symbolic representations. A plausible vision system design incorporating these principles is outlined, and its key computational processes are elaborated.

  16. Efficient hybrid tree-based stereo matching with applications to postcapture image refocusing.

    PubMed

    Vu, Dung T; Chidester, Benjamin; Yang, Hongsheng; Do, Minh N; Lu, Jiangbo

    2014-08-01

    Estimating dense correspondence or depth information from a pair of stereoscopic images is a fundamental problem in computer vision, which finds a range of important applications. Despite intensive past research efforts in this topic, it still remains challenging to recover the depth information both reliably and efficiently, especially when the input images contain weakly textured regions or are captured under uncontrolled, real-life conditions. Striking a desired balance between computational efficiency and estimation quality, a hybrid minimum spanning tree-based stereo matching method is proposed in this paper. Our method performs efficient nonlocal cost aggregation at pixel level and region level, and then adaptively fuses the resulting costs together to leverage their respective strengths in handling large textureless regions and fine depth discontinuities. Experiments on the standard Middlebury stereo benchmark show that the proposed stereo method outperforms all prior local and nonlocal aggregation-based methods, achieving particularly noticeable improvements for low-texture regions. To further demonstrate the effectiveness of the proposed stereo method, and motivated by the increasing desire to generate expressive depth-induced photo effects, this paper next addresses the emerging application of interactive depth-of-field rendering given a real-world stereo image pair. To this end, we propose an accurate thin-lens model for synthetic depth-of-field rendering, which considers the user-stroke placement and camera-specific parameters and performs pixel-adapted Gaussian blurring in a principled way. Taking ~1.5 s to process a pair of 640×360 images in the off-line step, our system, named Scribble2focus, allows users to interactively select in-focus regions by simple strokes using the touch screen and returns the synthetically refocused images instantly to the user. PMID:24919201
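
    The thin-lens model referred to above reduces, per pixel, to a circle-of-confusion diameter that grows with distance from the focus plane. A sketch of that standard formula; the parameter names and the pixels-per-metre conversion are assumptions, not the paper's exact parameterization:

        import numpy as np

        def coc_diameter_px(depth_m, focus_m, focal_m, f_number, px_per_m):
            # Thin-lens circle of confusion on the sensor:
            #   c = A * f * |d - d_f| / (d * (d_f - f)),  A = f / f_number.
            aperture = focal_m / f_number
            coc = aperture * focal_m * np.abs(depth_m - focus_m) / (
                depth_m * (focus_m - focal_m))
            return coc * px_per_m   # px_per_m: sensor pixels per metre

        # Each pixel is then blurred with a Gaussian whose sigma scales with
        # its circle-of-confusion diameter (pixel-adapted blurring).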

  17. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    NASA Astrophysics Data System (ADS)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an "intelligent robot" to "see" for path planning and obstacle avoidance.
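
    A sketch in the spirit of the histogram reduction described above, assuming a non-negative integer disparity map; the one-time floor calibration step is omitted, and names and thresholds are illustrative:

        import numpy as np

        def xh_style_obstacle_map(disparity, max_d=64, min_votes=20):
            # Histogram reduction: for each image column, count how many
            # pixels vote for each disparity (depth) bin, then threshold.
            h, w = disparity.shape
            grid = np.zeros((max_d, w), dtype=np.int32)
            for col in range(w):
                votes = np.bincount(disparity[:, col], minlength=max_d)
                grid[:, col] = votes[:max_d]
            return grid >= min_votes    # boolean "floor plan" obstacle cells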

  18. Vision Underwater.

    ERIC Educational Resources Information Center

    Levine, Joseph S.

    1980-01-01

    Provides information regarding underwater vision. Includes a discussion of optically important interfaces, increased eye size of organisms at greater depths, visual peculiarities regarding the habitat of the coastal environment, and various pigment visual systems. (CS)

  19. Improving Vision

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Many people are familiar with the popular science fiction series Star Trek: The Next Generation, a show featuring a blind character named Geordi La Forge, whose visor-like glasses enable him to see. What many people do not know is that a product very similar to Geordi's glasses is available to assist people with vision conditions, and a NASA engineer's expertise contributed to its development. The JORDY™ (Joint Optical Reflective Display) device, designed and manufactured by a privately-held medical device company known as Enhanced Vision, enables people with low vision to read, write, and watch television. Low vision, which includes macular degeneration, diabetic retinopathy, and glaucoma, describes eyesight that is 20/70 or worse, and cannot be fully corrected with conventional glasses.

  20. [Evaluation of condition and factors affecting activity effectiveness and visual performance of pilots who use night vision goggles during the helicopter flights].

    PubMed

    Aleksandrov, A S; Davydov, V V; Lapa, V V; Minakov, A A; Sukhanov, V V; Chistov, S D

    2014-07-01

    Based on an analysis of questionnaires, the authors identified the factors that affect the activity effectiveness and visual performance of pilots who use night vision goggles during helicopter flights. These are: the difficulty of flight tasks, flying conditions, and attitude illusions. The authors suggest possible ways to reduce the impact of these factors.

  1. Photometric stereo sensor for robot-assisted industrial quality inspection of coated composite material surfaces

    NASA Astrophysics Data System (ADS)

    Weigl, Eva; Zambal, Sebastian; Stöger, Matthias; Eitzinger, Christian

    2015-04-01

    While composite materials are increasingly used in modern industry, the quality control in terms of vision-based surface inspection remains a challenging task. Due to the often complex and three-dimensional structures, a manual inspection of these components is nearly impossible. We present a photometric stereo sensor system including an industrial robotic arm for positioning the sensor relative to the inspected part. Two approaches are discussed: stop-and-go positioning and continuous positioning. Results are presented on typical defects that appear on various composite material surfaces in the production process.
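
    The core computation of photometric stereo is compact enough to sketch: with at least three images under known distant lights and a Lambertian assumption, per-pixel normals and albedo follow from a least-squares solve. This is the textbook formulation, not necessarily the exact processing inside the sensor described above:

        import numpy as np

        def photometric_stereo(images, light_dirs):
            # images: K HxW arrays, one per light; light_dirs: Kx3 unit rows.
            # Lambertian model: I = L @ (albedo * n), solved per pixel.
            h, w = images[0].shape
            I = np.stack([im.reshape(-1) for im in images])      # K x (h*w)
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # 3 x (h*w)
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.maximum(albedo, 1e-8)
            return normals.reshape(3, h, w), albedo.reshape(h, w)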

  2. System for clinical photometric stereo endoscopy

    NASA Astrophysics Data System (ADS)

    Durr, Nicholas J.; González, Germán.; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente

    2014-02-01

    Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.

  3. Mono versus Stereo: Bilingualism's Double Face.

    ERIC Educational Resources Information Center

    Grutman, Rainier

    1993-01-01

    Offers an application of Mikhail Bakhtin's heteroglossia model, describing literature from a diversified point of view. Analyzes two examples to show nevertheless that Bakhtin unilaterally celebrates "stereo" qualities of language blending, and leaves no room for "mono" texts, which use polyglot devices as borders much more than as bridges between…

  4. STEREO Captures Fastest CME to Date

    NASA Video Gallery

    This movie shows a coronal mass ejection (CME) on the sun from July 22, 2012 at 10:00 PM EDT until 2 AM on July 23 as captured by NASA's Solar TErrestrial RElations Observatory-Ahead (STEREO-A). Be...

  5. Vision Problems: How Teachers Can Help.

    ERIC Educational Resources Information Center

    Desrochers, Joyce

    1999-01-01

    Describes common vision problems in young children such as myopia, strabismus, and amblyopia. Presents suggestions for helping children with vision problems in the early childhood classroom and in outdoor activities. Lists related resources and children's books. (KB)

  6. Community Vision and Interagency Alignment: A Community Planning Process to Promote Active Transportation.

    PubMed

    DeGregory, Sarah Timmins; Chaudhury, Nupur; Kennedy, Patrick; Noyes, Philip; Maybank, Aletha

    2016-04-01

    In 2010, the Brooklyn Active Transportation Community Planning Initiative launched in 2 New York City neighborhoods. Over a 2-year planning period, residents participated in surveys, school and community forums, neighborhood street assessments, and activation events - activities that highlighted the need for safer streets locally. Consensus among residents and key multisectoral stakeholders, including city agencies and community-based organizations, was garnered in support of a planned expansion of bicycling infrastructure. The process of building on community assets and applying a collective impact approach yielded changes in the built environment, attracted new partners and resources, and helped to restore a sense of power among residents.

  8. One-eyed stereo: a general approach to modeling 3-d scene geometry.

    PubMed

    Strat, T M; Fischler, M A

    1986-06-01

    A single two-dimensional image is an ambiguous representation of the three-dimensional world: many different scenes could have produced the same image, yet the human visual system is extremely successful at recovering a qualitatively correct depth model from this type of representation. Workers in the field of computational vision have devised a number of distinct schemes that attempt to emulate this human capability; these schemes are collectively known as "shape from..." methods (e.g., shape from shading, shape from texture, or shape from contour). In this paper we contend that the distinct assumptions made in each of these schemes are tantamount to providing a second (virtual) image of the original scene, and that each of these approaches can be translated into a conventional stereo formalism. In particular, we show that it is frequently possible to structure the problem as one of recovering depth from a stereo pair consisting of the supplied perspective image (the original image) and a hypothesized orthographic image (the virtual image). We present a new algorithm of the form required to accomplish this type of stereo reconstruction task. PMID:21869368

  9. Presidential Visions.

    ERIC Educational Resources Information Center

    Gallin, Alice, Ed.

    1992-01-01

    This journal issue is devoted to the theme of university presidents and their visions of the future. It presents the inaugural addresses and speeches of 16 Catholic college and university presidents focusing on their goals, ambitions, and reasons for choosing to become higher education leaders at this particular time in the history of education in…

  10. Agrarian Visions.

    ERIC Educational Resources Information Center

    Theobald, Paul

    A new feature in "Country Teacher," "Agrarian Visions" reminds rural teachers that they can do something about rural decline. Like the populism of the 1890s, the "new populism" advocates rural living. Current attempts to address rural decline are contrary to agrarianism because: (1) telecommunications experts seek to solve problems of rural…

  11. Visions 2001.

    ERIC Educational Resources Information Center

    Rivero, Victor; Norman, Michele

    2001-01-01

    Reports on the views of 18 educational leaders regarding their vision on the future of education in an information age. Topics include people's diverse needs; relationships between morality, ethics, values, and technology; leadership; parental involvement; online courses from multiple higher education institutions; teachers' role; technology…

  12. Training Visions

    ERIC Educational Resources Information Center

    Training, 2011

    2011-01-01

    In this article, "Training" asks the 2011 winners to give their predictions for what training--either in general or specifically at their companies--will look like in the next five to 10 years. Perhaps their "training visions" will spark some ideas in one's organization--or at least help prepare for what might be coming in the next decade or so.

  13. Single-neuron activity and eye movements during human REM sleep and awake vision

    PubMed Central

    Andrillon, Thomas; Nir, Yuval; Cirelli, Chiara; Tononi, Giulio; Fried, Itzhak

    2015-01-01

    Are rapid eye movements (REMs) in sleep associated with visual-like activity, as during wakefulness? Here we examine single-unit activities (n=2,057) and intracranial electroencephalography across the human medial temporal lobe (MTL) and neocortex during sleep and wakefulness, and during visual stimulation with fixation. During sleep and wakefulness, REM onsets are associated with distinct intracranial potentials, reminiscent of ponto-geniculate-occipital waves. Individual neurons, especially in the MTL, exhibit reduced firing rates before REMs as well as transient increases in firing rate immediately after, similar to activity patterns observed upon image presentation during fixation without eye movements. Moreover, the selectivity of individual units is correlated with their response latency, such that units activated after a small number of images or REMs exhibit delayed increases in firing rates. Finally, the phase of theta oscillations is similarly reset following REMs in sleep and wakefulness, and after controlled visual stimulation. Our results suggest that REMs during sleep rearrange discrete epochs of visual-like processing as during wakefulness. PMID:26262924

  14. Vision Loss, Sudden

    MedlinePlus

    ... of age-related macular degeneration. Spotlight on Aging: Vision Loss in Older People Most commonly, vision loss ... Some Causes and Features of Sudden Loss of Vision Cause Common Features* Tests Sudden loss of vision ...

  15. Blindness and vision loss

    MedlinePlus

    ... eye ( chemical burns or sports injuries) Diabetes Glaucoma Macular degeneration The type of partial vision loss may differ, ... tunnel vision and missing areas of vision With macular degeneration, the side vision is normal but the central ...

  16. Colour, vision and ergonomics.

    PubMed

    Pinheiro, Cristina; da Silva, Fernando Moreira

    2012-01-01

    This paper is based on a research project - Visual Communication and Inclusive Design-Colour, Legibility and Aged Vision - developed at the Faculty of Architecture of Lisbon. The research aims to determine specific design principles to be applied to (printed) visual communication design objects, so that they can be easily read and perceived by all. The study's target group was composed of socially active individuals between 55 and 80 years of age, and we used cultural event posters as objects of study and observation. The main objective is to integrate the study of areas such as colour, vision, older people's colour vision, ergonomics, chromatic contrasts, typography and legibility. In the end we will produce a manual with guidelines and information for applying scientific knowledge to projectual practice in communication design. Within the normal aging process, visual functions gradually decline; the quality of vision worsens, and colour vision and contrast sensitivity are also affected. As people's needs change along with age, design should help people and communities and improve quality of life in the present. By applying principles of visually accessible design and ergonomics, printed design objects (or interior spaces, urban environments, products, signage and all kinds of visual information) will be effective and easier on everyone's eyes, not only for visually impaired people but for all of us as we age.

  17. SEPServer Solar Energetic Particle event Catalogues at 1 AU based on STEREO recordings: selected solar cycle 24 SEP event analysis

    NASA Astrophysics Data System (ADS)

    Papaioannou, Athanasios; Malandraki, Olga E.; Dresing, Nina; Klein, Karl-Ludwig; Heber, Bernd; Vainio, Rami; Nindos, Alexander; Rodríguez-Gasén, Rosa; Klassen, Andreas; Gómez Herrero, Raúl; Vilmer, Nicole; Mewaldt, Richard A.

    2014-05-01

    STEREO (Solar TErrestrial RElations Observatory) recordings provide an unprecedented opportunity to identify the evolution of Solar Energetic Particles (SEPs) at different observing points in the heliosphere. In this work, two instruments onboard STEREO have been used to identify all SEP events observed within the declining phase of solar cycle 23 and the rising phase of solar cycle 24, from 2007 to 2012, namely the Low Energy Telescope (LET) and the Solar Electron Proton Telescope (SEPT). A scan of STEREO/LET protons within the energy range 6-10 MeV has been performed for each of the two STEREO spacecraft. Furthermore, for all of the proton events included in our lists, a parallel scan of STEREO/SEPT electrons in the energy range 55-85 keV was performed in order to pinpoint the presence (or absence) of an electron event. We provide the onset and peak times as well as the peak value of all events, for both protons and electrons. Time-shifting analysis for near-relativistic electrons leads to the inferred solar release time and to the relevant solar associations, from radio spectrographs (Nançay Decametric Array; STEREO/WAVES) to GOES soft X-rays and hard X-rays from RHESSI. This information materializes the STEREO SEPServer catalogues, which have recently been released to the scientific community. In order to demonstrate the exploitation of the STEREO catalogues, we then focus on the series of SEP events that were recorded onboard STEREO A & B as well as at L1 (ACE, SOHO) from March 4-14, 2012. We track the activity of active region (AR) 1429 during its passage from the East to the West, which produced a number of intense solar flares and coronal mass ejections, and we compare the magnetic connectivity of each spacecraft in association with the corresponding SEP signatures. During this period the longitudinal separation of the STEREO spacecraft was > 220 degrees, yet both of them recorded SEP events. These complex multi

  18. Merged Shape from Shading and Shape from Stereo for Planetary Topographic Mapping

    NASA Astrophysics Data System (ADS)

    Tyler, Laurence; Cook, Tony; Barnes, Dave; Parr, Gerhard; Kirk, Randolph

    2014-05-01

    Digital Elevation Models (DEMs) of the Moon and Mars have traditionally been produced from stereo imagery taken from orbit, or from surface landers or rovers. One core component of image-based DEM generation is stereo matching to find correspondences between images taken from different viewpoints. Stereo matchers that rely mostly on textural features in the images can fail to find enough matched points in areas lacking contrast or surface texture. This can lead to blank or topographically noisy areas in the resulting DEMs. Fine depth detail may also be lacking due to the limited precision and quantisation of the pixel matching process. Shape from shading (SFS), a two-dimensional version of photoclinometry, utilizes the properties of light reflecting off surfaces to build up localised slope maps, which can subsequently be combined to extract topography. This works especially well on homogeneous surfaces and can recover fine detail. However, the cartographic accuracy can be affected by changes in brightness due to differences in surface material, albedo and light scattering properties, and also by the presence of shadows. We describe here experimental research for the Planetary Robotics Vision Data Exploitation EU FP7 project (PRoViDE) into using stereo-generated depth maps in conjunction with SFS to recover both coarse and fine detail of planetary surface DEMs. Our Large Deformation Optimisation Shape From Shading (LDOSFS) algorithm uses image data, illumination, viewing geometry and camera parameters to produce a DEM. A stereo-derived depth map can be used as an initial seed if available. The software uses separate Bidirectional Reflectance Distribution Function (BRDF) and SFS modules for iterative processing and to make the code more portable for future development. Three BRDF models are currently implemented: Lambertian, Blinn-Phong, and Oren-Nayar. A version of the Hapke reflectance function, which is more appropriate for planetary surfaces, is under development.

  19. Characterizing the influence of surface roughness and inclination on 3D vision sensor performance

    NASA Astrophysics Data System (ADS)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Jackson, Michael R.

    2015-12-01

    This paper reports a methodology to evaluate the performance of 3D scanners, focusing on the influence of surface roughness and inclination on the number of acquired data points and measurement noise. Point clouds were captured of samples mounted on a robotic pan-tilt stage using an Ensenso active stereo 3D scanner. The samples have isotropic texture and range in surface roughness (Ra) from 0.09 to 0.46 μm. By extracting the point cloud quality indicators, point density and standard deviation, at a multitude of inclinations, maps of scanner performance are created. These maps highlight the performance envelopes of the sensor, the aim being to predict and compare scanner performance on real-world surfaces, rather than idealistic artifacts. The results highlight the need to characterize 3D vision sensors by their measurement limits as well as best-case performance, determined either by theoretical calculation or measurements in ideal circumstances.
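
    The two quality indicators used for such performance maps can be computed directly from a point-cloud patch. A sketch, assuming a known reference plane for the sample surface (in practice one would fit it, e.g. by least squares or RANSAC); the names are illustrative:

        import numpy as np

        def patch_quality(points, plane_point, plane_normal, patch_area_m2):
            # points: Nx3 samples from the scanned patch.
            # Density: acquired points per unit area; noise: standard
            # deviation of the signed distances to the reference plane.
            n = plane_normal / np.linalg.norm(plane_normal)
            dist = (points - plane_point) @ n
            return len(points) / patch_area_m2, dist.std()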

  20. A Customized Vision System for Tracking Humans Wearing Reflective Safety Clothing from Industrial Vehicles and Machinery

    PubMed Central

    Mosberger, Rafael; Andreasson, Henrik; Lilienthal, Achim J.

    2014-01-01

    This article presents a novel approach for vision-based detection and tracking of humans wearing high-visibility clothing with retro-reflective markers. Addressing industrial applications where heavy vehicles operate in the vicinity of humans, we deploy a customized stereo camera setup with active illumination that allows for efficient detection of the reflective patterns created by the worker's safety garments. After segmenting reflective objects from the image background, the interest regions are described with local image feature descriptors and classified in order to discriminate safety garments from other reflective objects in the scene. In a final step, the trajectories of the detected humans are estimated in 3D space relative to the camera. We evaluate our tracking system in two industrial real-world work environments on several challenging video sequences. The experimental results indicate accurate tracking performance and good robustness towards partial occlusions, body pose variation, and a wide range of different illumination conditions. PMID:25264956

  1. Students' Research-Informed Socio-scientific Activism: Re/Visions for a Sustainable Future

    NASA Astrophysics Data System (ADS)

    Bencze, Larry; Sperling, Erin; Carter, Lyn

    2012-01-01

    In many educational contexts throughout the world, increasing focus has been placed on socio-scientific issues; that is, disagreements about potential personal, social and/or environmental problems associated with fields of science and technology. Some suggest (as do we) that many of these potential problems, such as those associated with climate change, are so serious that education needs to be oriented towards encouraging and enabling students to become citizen activists, ready and willing to take personal and social actions to reduce risks associated with the issues. Towards this outcome, teachers we studied encouraged and enabled students to direct open-ended primary (e.g., correlational studies), as well as secondary (e.g., internet searches), research as sources of motivation and direction for their activist projects. In this paper, we concluded, based on constant comparative analyses of qualitative data, that school students' tendencies towards socio-political activism appeared to depend on myriad, possibly interacting, factors. We focused, though, on curriculum policy statements, school culture, teacher characteristics and student-generated research findings. Our conclusions may be useful to those promoting education for sustainability, generally, and, more specifically, to those encouraging activism on such issues informed by student-led research.

  2. Present Vision--Future Vision.

    ERIC Educational Resources Information Center

    Fitterman, L. Jeffrey

    This paper addresses issues of current and future technology use for and by individuals with visual impairments and blindness in Florida. Present technology applications used in vision programs in Florida are individually described, including video enlarging, speech output, large inkprint, braille print, paperless braille, and tactual output…

  3. Theoretical modeling for the stereo mission

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.; Burlaga, L. F.; Kaiser, M. L.; Ng, C. K.; Reames, D. V.; Reiner, M. J.; Gombosi, T. I.; Lugaz, N.; Manchester, W.; Roussev, I. I.; Zurbuchen, T. H.; Farrugia, C. J.; Galvin, A. B.; Lee, M. A.; Linker, J. A.; Mikić, Z.; Riley, P.; Alexander, D.; Sandman, A. W.; Cook, J. W.; Howard, R. A.; Odstrčil, D.; Pizzo, V. J.; Kóta, J.; Liewer, P. C.; Luhmann, J. G.; Inhester, B.; Schwenn, R. W.; Solanki, S. K.; Vasyliunas, V. M.; Wiegelmann, T.; Blush, L.; Bochsler, P.; Cairns, I. H.; Robinson, P. A.; Bothmer, V.; Kecskemety, K.; Llebaria, A.; Maksimovic, M.; Scholer, M.; Wimmer-Schweingruber, R. F.

    2008-04-01

    We summarize the theory and modeling efforts for the STEREO mission, which will be used to interpret the data of both the remote-sensing (SECCHI, SWAVES) and in-situ instruments (IMPACT, PLASTIC). The modeling includes the coronal plasma, in both open and closed magnetic structures, and the solar wind and its expansion outwards from the Sun, which defines the heliosphere. Particular emphasis is given to modeling of dynamic phenomena associated with the initiation and propagation of coronal mass ejections (CMEs). The modeling of the CME initiation includes magnetic shearing, kink instability, filament eruption, and magnetic reconnection in the flaring lower corona. The modeling of CME propagation entails interplanetary shocks, interplanetary particle beams, solar energetic particles (SEPs), geoeffective connections, and space weather. This review describes mostly existing models of groups that have committed their work to the STEREO mission, but is by no means exhaustive or comprehensive regarding alternative theoretical approaches.

  4. Topographic mapping for stereo and motion processing

    NASA Astrophysics Data System (ADS)

    Mallot, Hanspeter A.; Zielke, Thomas; Storjohann, Kai; von Seelen, Werner

    1991-02-01

    Topographic mappings are neighbourhood-preserving transformations between two-dimensional data structures. Mappings of this type are a general means of information processing in the vertebrate visual system. In this paper we present an application of a special topographic mapping, termed the inverse perspective mapping, for the computation of stereo and motion. More specifically, we study a class of algorithms for the detection of deviations from an expected "normal" situation. These expectations concern the global space-variance of certain image parameters (e.g., disparity or speed of feature motion) and can thus be implemented in the mapping rule. The resulting algorithms are minimal in the sense that no irrelevant information is extracted from the scene. In a technical application we use topographic mappings for a stereo obstacle detection system. The implementation has been tested on an automatically guided vehicle (AGV) in an industrial environment.
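
    A minimal sketch of an inverse perspective mapping, assuming a flat ground plane and four known image-to-ground correspondences (all coordinates below are illustrative placeholders); OpenCV's homography utilities stand in for the paper's mapping implementation:

        import cv2
        import numpy as np

        # Four image points on the ground plane and their metric positions.
        img_pts = np.float32([[220, 480], [420, 480], [370, 300], [270, 300]])
        ground_pts_m = np.float32([[-1.0, 2.0], [1.0, 2.0], [1.0, 10.0], [-1.0, 10.0]])

        scale = 40.0                                   # pixels per metre in the map
        map_pts = ((ground_pts_m - ground_pts_m.min(0)) * scale).astype(np.float32)
        H = cv2.getPerspectiveTransform(img_pts, map_pts)

        # birds_eye = cv2.warpPerspective(frame, H, (map_w, map_h))
        # In the mapped image, anything rising above the ground plane deviates
        # from its expected position, which is what the detector looks for.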

  5. A Chang'e-4 mission concept and vision of future Chinese lunar exploration activities

    NASA Astrophysics Data System (ADS)

    Wang, Qiong; Liu, Jizhong

    2016-10-01

    A novel concept for the Chinese Chang'e-4 lunar exploration mission is first presented in this paper. After the success of Chang'e-3, its backup probe, the Chang'e-4 lander/rover combination, would be upgraded to land on the unexplored lunar farside with the aid of a relay satellite near the second Earth-Moon Lagrange point. Mineralogical and geochemical surveys on the farside, to study the formation and evolution of the lunar crust, and observations at low radio frequencies, to track signals from the Universe's Dark Ages, are priorities. Follow-up Chinese lunar exploration activities before 2030 are envisioned as building a robotic lunar science station through three to five missions. Finally, several modes of international cooperation are proposed.

  6. Observing atmospheric clouds through stereo reconstruction

    NASA Astrophysics Data System (ADS)

    Öktem, Ruşen; Romps, David M.

    2015-03-01

    Observing cloud lifecycles and obtaining measurements of cloud features are significant problems in atmospheric cloud research. Scanning radars have been the most capable instruments for providing such measurements, but they have shortcomings when it comes to spatial and temporal resolution. High spatial and temporal resolution is particularly important for capturing the variations in developing convection. Stereo photogrammetry can complement scanning radars, with the potential to observe clouds as distant as tens of kilometers and to provide high temporal and spatial resolution, although it comes with the calibration challenges peculiar to the various outdoor settings required to collect measurements of atmospheric clouds. This work explores the use of stereo photogrammetry in atmospheric cloud research, focusing on tracking vertical motion in developing convection. Calibration challenges, and strategies to overcome them, are addressed within two different stereo settings, in Miami, Florida and in the plains of Oklahoma. A feature extraction and matching algorithm is developed and implemented to identify cloud features of interest. A two-level resolution hierarchy is exploited in feature extraction and matching. 3D positions of cloud features are reconstructed from matched pixel pairs, and the cloud tops of developing turrets in shallow-to-deep convection are tracked in time to estimate vertical accelerations. Results show that stereo photogrammetry provides a useful tool to observe cloud lifecycles and track the vertical acceleration of turrets exceeding 10 km in height.

  7. Direct calibration methodology for stereo cameras

    NASA Astrophysics Data System (ADS)

    Do, Yongtae; Yoo, Seog-Hwan; Lee, Dae-Sik

    1998-10-01

    This paper describes a new technique for stereoscopic 3D position measurement. By defining the stereo cameras as a system for image-to-world mapping, the mapping function is determined. The direct representation of the 3D coordinates of a world point in terms of the corresponding stereo image coordinates is derived using the pin-hole model. One camera frame is related to the other before being related to the world frame, so that the stereo pair itself, rather than each camera, can be directly related to the 3D world. The equations obtained are simple, straightforward, and closed-form. However, since the nonlinearity of the actual imaging system is not considered, high accuracy cannot be expected when these equations are employed for 3D measurements. To tackle this problem, a multilayer feedforward neural network trained by the back-propagation algorithm is used. In our experiment, the network satisfactorily played the role of fine correction, using its function-learning capability after rough mapping by the linear equations.
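
    A sketch of the two-stage scheme, with scikit-learn's MLP standing in for the original back-propagation network; the helper names and network size are assumptions:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def fit_fine_corrector(uv_stereo, xyz_true, linear_map):
            # uv_stereo: Nx4 (uL, vL, uR, vR); xyz_true: Nx3 world points.
            # linear_map: callable Nx4 -> Nx3, the closed-form pin-hole
            # solution; the network learns only the nonlinear residual.
            residual = xyz_true - linear_map(uv_stereo)
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000)
            net.fit(uv_stereo, residual)
            return lambda uv: linear_map(uv) + net.predict(uv)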

  8. Pleiades Visions

    NASA Astrophysics Data System (ADS)

    Whitehouse, M.

    2016-01-01

    Pleiades Visions (2012) is my new musical composition for organ that takes inspiration from traditional lore and music associated with the Pleiades (Seven Sisters) star cluster from Australian Aboriginal, Native American, and Native Hawaiian cultures. It is based on my doctoral dissertation research incorporating techniques from the fields of ethnomusicology and cultural astronomy; this research likely represents a new area of inquiry for both fields. This large-scale work employs the organ's vast sonic resources to evoke the majesty of the night sky and the expansive landscapes of the homelands of the above-mentioned peoples. Other important themes in Pleiades Visions are those of place, origins, cosmology, and the creation of the world.

  9. Optoelectronic vision

    NASA Astrophysics Data System (ADS)

    Ren, Chunye; Parel, Jean-Marie A.

    1993-06-01

    Scientists have searched every discipline to find effective methods of treating blindness, such as aids based on conversion of the optical image to auditory or tactile stimuli. However, the limited performance of such equipment and difficulties in training patients have seriously hampered practical applications. Great encouragement came from the discovery of Foerster (1929) and Krause & Schum (1931), who found that electrical stimulation of the visual cortex evokes the perception of a small spot of light, called a 'phosphene', in both blind and sighted subjects. According to this principle, it is possible to elicit artificial vision by stimulating the visual nervous system with electrodes, thereby developing a prosthesis for the blind that might be of value in reading and mobility. In fact, a number of investigators have already exploited this phenomenon to produce a functional visual prosthesis, bringing about great advances in this area.

  10. Computer vision

    SciTech Connect

    Not Available

    1982-01-01

    This paper discusses material from areas such as artificial intelligence, psychology, computer graphics, and image processing. The intent is to assemble a selection of this material in a form that will serve both as a senior/graduate-level academic text and as a useful reference to those building vision systems. This book has a strong artificial intelligence flavour, emphasising the belief that both the intrinsic image information and the internal model of the world are important in successful vision systems. The book is organised into four parts, based on descriptions of objects at four different levels of abstraction. These are: generalised images-images and image-like entities; segmented images-images organised into subimages that are likely to correspond to interesting objects; geometric structures-quantitative models of image and world structures; relational structures-complex symbolic descriptions of image and world structures. The book contains author and subject indexes.

  11. Signatures of interchange reconnection: STEREO, ACE and Hinode observations combined

    NASA Astrophysics Data System (ADS)

    Baker, D.; Rouillard, A. P.; van Driel-Gesztelyi, L.; Démoulin, P.; Harra, L. K.; Lavraud, B.; Davies, J. A.; Opitz, A.; Luhmann, J. G.; Sauvaud, J.-A.; Galvin, A. B.

    2009-10-01

    Combining STEREO, ACE and Hinode observations has presented an opportunity to follow a filament eruption and coronal mass ejection (CME) on 17 October 2007 from an active region (AR) inside a coronal hole (CH) into the heliosphere. This particular combination of "open" and closed magnetic topologies provides an ideal scenario for interchange reconnection to take place. With Hinode and STEREO data we were able to identify the emergence time and type of structure seen in the in-situ data four days later. On the 21st, ACE observed in-situ the passage of an ICME with "open" magnetic topology. The magnetic field configuration of the source, a mature AR located inside an equatorial CH, has important implications for the solar and interplanetary signatures of the eruption. We interpret the formation of an "anemone" structure of the erupting AR and the passage in-situ of the ICME being disconnected at one leg, as manifested by uni-directional suprathermal electron flux in the ICME, to be a direct result of interchange reconnection between closed loops of the CME originating from the AR and "open" field lines of the surrounding CH.

  12. Revisiting Intrinsic Curves for Efficient Dense Stereo Matching

    NASA Astrophysics Data System (ADS)

    Shahbazi, M.; Sohn, G.; Théau, J.; Ménard, P.

    2016-06-01

    Dense stereo matching is one of the fundamental and active areas of photogrammetry. The increasing image resolution of digital cameras as well as the growing interest in unconventional imaging, e.g. unmanned aerial imagery, has exposed stereo image pairs to serious occlusion, noise and matching ambiguity. This has also resulted in an increase in the range of disparity values that should be considered for matching. Therefore, conventional methods of dense matching need to be revised to achieve higher levels of efficiency and accuracy. In this paper, we present an algorithm that uses the concepts of intrinsic curves to propose sparse disparity hypotheses for each pixel. Then, the hypotheses are propagated to adjoining pixels by label-set enlargement based on the proximity in the space of intrinsic curves. The same concepts are applied to model occlusions explicitly via a regularization term in the energy function. Finally, a global optimization stage is performed using belief-propagation to assign one of the disparity hypotheses to each pixel. By searching only through a small fraction of the whole disparity search space and handling occlusions and ambiguities, the proposed framework could achieve high levels of accuracy and efficiency.

  13. Cartesian visions.

    PubMed

    Fara, Patricia

    2008-12-01

    Few original portraits exist of René Descartes, yet his theories of vision were central to Enlightenment thought. French philosophers combined his emphasis on sight with the English approach of insisting that ideas are not innate, but must be built up from experience. In particular, Denis Diderot criticised Descartes's views by describing how Nicholas Saunderson--a blind physics professor at Cambridge--relied on touch. Diderot also made Saunderson the mouthpiece for some heretical arguments against the existence of God.

  14. Artificial vision.

    PubMed

    Zarbin, M; Montemagno, C; Leary, J; Ritch, R

    2011-09-01

    A number of treatment options are emerging for patients with retinal degenerative disease, including gene therapy, trophic factor therapy, visual cycle inhibitors (e.g., for patients with Stargardt disease and allied conditions), and cell transplantation. A radically different approach, which will augment but not replace these options, is termed neural prosthetics ("artificial vision"). Although rewiring of inner retinal circuits and inner retinal neuronal degeneration occur in association with photoreceptor degeneration in retinitis pigmentosa (RP), it is possible to create visually useful percepts by stimulating retinal ganglion cells electrically. This fact has led to the development of techniques to induce photosensitivity in cells that are not normally light sensitive, as well as to the development of the bionic retina. Advances in artificial vision continue at a robust pace. These advances are based on the use of molecular engineering and nanotechnology to render cells light-sensitive, to target ion channels to the appropriate cell type (e.g., bipolar cells) and/or cell region (e.g., dendritic tree vs. soma), and on sophisticated image processing algorithms that take advantage of our knowledge of signal processing in the retina. Combined with advances in gene therapy, pathway-based therapy, and cell-based therapy, "artificial vision" technologies create a powerful armamentarium with which ophthalmologists will be able to treat blindness in patients who have a variety of degenerative retinal diseases.

  15. Two new macro-stereo cameras for medical photography with special reference to the eye.

    PubMed

    AandeKerk, A L

    1991-04-01

    Information is given on two new macro-stereo cameras for simultaneous stereo photography designed by members of the Dutch Stereo Society. These cameras can be used for medical photography. The first camera takes half-frame stereo pictures and uses frames for positioning and focusing. The second camera takes full-frame stereo pictures and uses projected spots for positioning and focusing. Examples are shown.

  16. Augmented reality to enhance an active telepresence system

    NASA Astrophysics Data System (ADS)

    Wheeler, Alison; Pretlove, John R. G.; Parker, Graham A.

    1996-12-01

    Tasks carried out remotely via a telerobotic system are typically complex, occur in hazardous environments and require fine control of the robot's movements. Telepresence systems provide the teleoperator with a feeling of being physically present at the remote site. Stereoscopic video has been successfully applied to telepresence vision systems to increase the operator's perception of depth in the remote scene, and this sense of presence can be further enhanced using computer-generated stereo graphics to augment the visual information presented to the operator. The Mechatronic Systems and Robotics Research Group has, over seven years, developed a number of high-performance active stereo vision systems, culminating in the latest, a four-degree-of-freedom stereohead. This carries two miniature color cameras and is controlled in real time by the motion of the operator's head; the operator views the stereoscopic video images on an immersive head-mounted display or a stereo monitor. The stereohead is mounted on a mobile robot, the movement of which is controlled by a joystick interface. This paper describes the active telepresence system and the development of a prototype augmented reality (AR) application to enhance the operator's sense of presence at the remote site. The initial enhancements are a virtual map and compass to aid navigation in degraded visual conditions, and a virtual cursor that provides a means for the operator to interact with the remote environment. The results of preliminary experiments using the initial enhancements are presented.

  17. Graphics for Stereo Visualization Theater for Supercomputing 1998

    NASA Technical Reports Server (NTRS)

    Antipuesto, Joel; Reid, Lisa (Technical Monitor)

    1998-01-01

    The Stereo Visualization Theater is a high-resolution graphics demonstration that provides a review of current research being performed at NASA. Using a stereoscopic projection, multiple participants can explore scientific data in new ways. The pre-processed audio and video are played in real time from a workstation. A stereo graphics filter for the projector and passive polarized glasses worn by audience members are used to create the stereo effect.

  18. Quasi-microscope concept for planetary missions - Stereo

    NASA Technical Reports Server (NTRS)

    Burcher, E. E.; Sinclair, A. R.; Huck, F. O.

    1978-01-01

    The quasi-microscope has been used to obtain stereo pictures by means of a small aperture placed at the right and at the left of the entrance aperture. A 16-degree stereo view angle yields an enhanced stereo effect. When viewed upward through a transparent support, all grains come into focus. The technique may be used for determining mineral constituents on the basis of cleavage and fracture patterns, grain details, and surface slopes used in estimating single-particle albedo and illumination scattering profiles.

  19. Stereoacuity of Preschool Children with and without Vision Disorders

    PubMed Central

    Ciner, Elise B.; Ying, Gui-shuang; Kulp, Marjean Taylor; Maguire, Maureen G.; Quinn, Graham E.; Orel-Bixler, Deborah; Cyert, Lynn A.; Moore, Bruce; Huang, Jiayan

    2014-01-01

    Purpose To evaluate associations between stereoacuity and the presence, type, and severity of vision disorders in Head Start preschool children and to determine testability and levels of stereoacuity by age in children without vision disorders. Methods Stereoacuity of children aged 3 to 5 years (n = 2898) participating in the Vision in Preschoolers (VIP) Study was evaluated using the Stereo Smile II test during a comprehensive vision examination. This test uses a two-alternative forced-choice paradigm with four stereoacuity levels (480 to 60 seconds of arc). Children were classified by the presence (n = 871) or absence (n = 2027) of VIP Study-targeted vision disorders (amblyopia, strabismus, significant refractive error, or unexplained reduced visual acuity), including type and severity. Median stereoacuity between groups and among severity levels of vision disorders was compared using Wilcoxon rank sum and Kruskal-Wallis tests. Testability and stereoacuity levels were determined for children without VIP Study-targeted disorders overall and by age. Results Children with VIP Study-targeted vision disorders had significantly worse median stereoacuity than children without vision disorders (120 vs. 60 seconds of arc, p < 0.001). Children with the most severe vision disorders had worse stereoacuity than children with milder disorders (median 480 vs. 120 seconds of arc, p < 0.001). Among children without vision disorders, testability was 99.6% overall, increasing with age to 100% for 5-year-olds (p = 0.002). Most of the children without vision disorders (88%) had stereoacuity at the two best disparities (60 or 120 seconds of arc), with the percentage increasing with age (82% for 3-, 89% for 4-, and 92% for 5-year-olds; p < 0.001). Conclusions The presence of any VIP Study-targeted vision disorder was associated with significantly worse stereoacuity in preschool children, and severe vision disorders were more likely associated with poorer stereopsis than milder vision disorders.

  20. Activating a Vision

    ERIC Educational Resources Information Center

    Wilson, Carroll L.

    1973-01-01

    International Center of Insect Physiology and Ecology (ICIPE) is an organized effort to study physiology, endocrinology, genetics, and related processes of five insects. Location of the center in Kenya encourages developing countries to conduct research for the control of harmful insects. (PS)

  1. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years as a way to understand a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of 3D geoscience data models on the Internet is a challenging task. In this paper, we show the results of creating anaglyph 3D stereo images of geoscience data that can be viewed in any Web browser that supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air photo/digital elevation model and geological map/digital elevation model, respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out of the anaglyph image in a Web browser. Anaglyph 3D stereo imagery is an important and easy way to understand the underground geologic system and active tectonic geomorphology. Integrated strata with fine three-dimensional topography and geologic map data can help characterise mineral-potential areas and anomalous active tectonic features. To conclude, anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and active tectonics.
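
    As a rough illustration of the red-cyan anaglyph generation described above, the following Python sketch combines a registered left/right image pair into a single anaglyph (the file names are hypothetical placeholders; the two views must be pre-aligned and equally sized):

      # Minimal red-cyan anaglyph sketch: red channel from the left view,
      # green and blue (cyan) from the right view.
      import numpy as np
      from PIL import Image

      left = np.asarray(Image.open("left_view.png").convert("L"))    # hypothetical left-eye render
      right = np.asarray(Image.open("right_view.png").convert("L"))  # hypothetical right-eye render

      anaglyph = np.dstack([left, right, right])  # (H, W, 3) uint8 image
      Image.fromarray(anaglyph).save("anaglyph.png")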

  2. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural network approaches aside, a common solution to this problem is to calibrate vision and manipulator independently, and then tie them via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. Work currently in progress on the use of computer vision to allow robust fine-motion manipulation in a poorly structured world is described, along with preliminary results and the problems encountered.

  3. Pediatric Low Vision

    MedlinePlus

    ... What is Low Vision? Partial vision loss that cannot be corrected causes ... and play. What are the signs of Low Vision? Some signs of low vision include difficulty recognizing ...

  4. Vision Therapy News Backgrounder.

    ERIC Educational Resources Information Center

    American Optometric Association, St. Louis, MO.

    The booklet provides an overview on vision therapy to aid writers, editors, and broadcasters help parents, teachers, older adults, and all consumers learn more about vision therapy. Following a description of vision therapy or vision training, information is provided on how and why vision therapy works. Additional sections address providers of…

  5. From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning

    NASA Astrophysics Data System (ADS)

    Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle

    2016-04-01

    The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost, a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean, and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to the larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size, which is clearly visible in radar and infrared satellite imagery and which makes it easier to detect using Computer Vision (CV). The power of CV techniques makes the basic analysis thus developed scalable to other smaller and less known, but still influential, currents, which are not just curves on a map but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as a feature-space representation in a machine learning context, in this case within the EC's H2020-sponsored 'See.4C' project, in the context of which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology.

  6. FM Stereo and AM Stereo: Government Standard-Setting vs. the Marketplace.

    ERIC Educational Resources Information Center

    Huff, W. A. Kelly

    The emergence of frequency modulation or FM radio signals, which arose from the desire to free broadcasting of static noise common to amplitude modulation or AM, has produced the controversial development of stereo broadcasting. The resulting enhancement of sound quality helped FM pass AM in audience shares in less than two decades. The basic…

  7. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, as small debris poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and to set appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  8. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; Macleod, Todd; Gagliano, Larry

    2015-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, as small debris poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and to set appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  9. Pancam Peek into 'Victoria Crater' (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08776

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08776

    A drive of about 60 meters (about 200 feet) on the 943rd Martian day, or sol, of Opportunity's exploration of Mars' Meridiani Planum region (Sept. 18, 2006) brought the NASA rover to within about 50 meters (about 160 feet) of the rim of 'Victoria Crater.' This crater has been the mission's long-term destination for the past 21 Earth months. Opportunity reached a location from which the cameras on top of the rover's mast could begin to see into the interior of Victoria. This stereo anaglyph was made from frames taken on sol 943 by the panoramic camera (Pancam) to offer a three-dimensional view when seen through red-blue glasses. It shows the upper portion of interior crater walls facing toward Opportunity from up to about 850 meters (half a mile) away. The amount of vertical relief visible at the top of the interior walls from this angle is about 15 meters (about 50 feet). The exposures were taken through a Pancam filter selecting wavelengths centered on 750 nanometers.

    Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed. The great lure of Victoria is the expectation that a thick stack of geological layers will be exposed in the crater walls, potentially several times the thickness that was previously studied at Endurance and therefore, potentially preserving several times the historical record.

  10. Practical Real-Time Imaging Stereo Matcher

    NASA Astrophysics Data System (ADS)

    Nishihara, H. K.

    1984-10-01

    A binocular-stereo-matching algorithm for making rapid visual range measurements in noisy images is described. This technique was developed for application to problems in robotics where noise tolerance, reliability, and speed are the predominant issues. A high-speed pipelined convolver for preprocessing images and an unstructured-light technique for improving signal quality are introduced to help enhance performance to meet the demands of this task domain. These optimizations, however, are not sufficient. A closer examination of the problems encountered suggests that broader interpretations of both the objective of binocular stereo and of the zero-crossing theory of Marr and Poggio [Proc. R. Soc. Lond. B 204, 301 (1979)] are required. In this paper, we restrict ourselves to the problem of making a single primitive surface measurement: for example, to determine whether or not a specified volume of space is occupied, to measure the range to a surface at an indicated image location, or to determine the elevation gradient at that position. In this framework we make a subtle but important shift from the explicit use of zero-crossing contours (in bandpass-filtered images) as the elements matched between left and right images, to the use of the signs between zero crossings. With this change, we obtain a simpler algorithm with reduced sensitivity to noise and more predictable behavior. The practical real-time imaging stereo matcher (PRISM) system incorporates this algorithm with the unstructured-light technique and a high-speed digital convolver. It has been used successfully by others as a sensor in a path-planning system and a bin-picking system.
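
    The sign-of-the-filtered-image idea lends itself to a compact sketch. The Python fragment below is a hedged illustration, not the PRISM implementation: it approximates the bandpass filter with a difference of Gaussians, reduces each image to the signs between zero crossings, and scores candidate disparities by the number of agreeing signs (the window size, disparity range, and use of SciPy are illustrative assumptions; the query point is assumed to lie away from the image borders):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def sign_image(img, s1=1.0, s2=1.6):
          # Difference of Gaussians approximates the Laplacian-of-Gaussian bandpass;
          # keeping only the sign discards the zero-crossing contours themselves.
          img = img.astype(float)
          return np.sign(gaussian_filter(img, s1) - gaussian_filter(img, s2))

      def match_point(left, right, row, col, half=8, max_disp=32):
          sl, sr = sign_image(left), sign_image(right)
          patch = sl[row-half:row+half+1, col-half:col+half+1]
          best_d, best_score = 0, -1
          for d in range(max_disp):
              lo = col - d - half
              if lo < 0:
                  break
              cand = sr[row-half:row+half+1, lo:lo+2*half+1]
              score = np.count_nonzero(patch == cand)  # number of agreeing signs
              if score > best_score:
                  best_d, best_score = d, score
          return best_d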

  11. Solar Eclipse Video Captured by STEREO-B

    NASA Technical Reports Server (NTRS)

    2007-01-01

    No human has ever witnessed a solar eclipse quite like the one captured on this video. The NASA STEREO-B spacecraft, managed by the Goddard Space Flight Center, was about a million miles from Earth on February 25, 2007, when it photographed the Moon passing in front of the Sun. The resulting movie looks like it came from an alien solar system. The fantastically colored star is our own Sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. When we observe a lunar transit from Earth, the Moon appears to be the same size as the Sun, a coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, covered only a fraction of the Sun. The Moon seems small because of STEREO-B's location: the spacecraft circles the Sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times farther from the Moon than we are, and so the Moon looks 4.4 times smaller. This version of the STEREO-B eclipse movie is a composite of data from the spacecraft's coronagraph and extreme ultraviolet imager. STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the Sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ('B' for behind, 'A' for ahead). The gap is deliberate: it allows the two spacecraft to capture offset views of the Sun, and researchers can then combine the images to produce 3D stereo movies of solar storms. The two spacecraft were launched in October 2006 and reached their stations on either side of Earth in January 2007.
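
    The "4.4 times smaller" statement is just the small-angle relation: apparent angular size scales inversely with distance. A quick Python check (the distances below are round illustrative values, not mission data):

      import math

      moon_radius_km = 1737.4
      d_earth_km = 384_400              # mean Earth-Moon distance
      d_stereo_km = 4.4 * d_earth_km    # STEREO-B, 4.4 times farther from the Moon

      angular_size = lambda d: 2 * math.atan(moon_radius_km / d)
      print(angular_size(d_earth_km) / angular_size(d_stereo_km))  # ~4.4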

  12. Synchronized observations by using the STEREO and the largest ground-based decametre radio telescope

    NASA Astrophysics Data System (ADS)

    Konovalenko, A. A.; Stanislavsky, A. A.; Rucker, H. O.; Lecacheux, A.; Mann, G.; Bougeret, J.-L.; Kaiser, M. L.; Briand, C.; Zarka, P.; Abranin, E. P.; Dorovsky, V. V.; Koval, A. A.; Mel'nik, V. N.; Mukha, D. V.; Panchenko, M.

    2013-08-01

    We consider an approach to simultaneous (synchronous) solar radio observations using the STEREO-WAVES instruments (frequency range 0.125-16 MHz) and the largest ground-based low-frequency radio telescope, illustrated by the UTR-2 radio telescope implementation (10-30 MHz). The antenna system of the radio telescope is a T-shaped array of broadband dipoles located near the village of Grakovo in the Kharkiv region (Ukraine). A third observation point on the ground, in addition to the two space-based ones, improves the space mission's capability to determine radio-emission source directivity. The observational results from the high-sensitivity UTR-2 antenna are particularly useful for analysis of STEREO data under conditions of weak event appearances during solar activity minima. In order to improve the accuracy of flux density measurements, we also carried out simultaneous observations with a large part of the UTR-2 radio telescope array and with a single dipole of it that is close in sensitivity to the STEREO-WAVES antennas. This concept has been studied by comparing the STEREO data with ground-based records from 2007-2011 and shown to be effective. The capabilities will be useful in the operation of new instruments (LOFAR, LWA, MWA, etc.) and during the future Solar Orbiter mission.

  13. The World Water Vision: From Developing a Vision to Action

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, S.; Cosgrove, W.; Rijsberman, F.; Strzepek, K.; Strzepek, K.

    2001-05-01

    The World Water Vision exercise was initiated by the World Water Commission under the auspices of the World Water Council. The goal of the World Water Vision project was to develop a widely shared vision of the actions required to achieve a common set of water-related goals, and of the commitment necessary to carry out these actions. The Vision was to be participatory in nature, including input from both developed and developing regions, with a special focus on the needs of the poor, women, youth, children, and the environment. The three overall objectives were to: (i) raise awareness of water issues among both the general population and decision-makers so as to foster the necessary political will and leadership to tackle the problems seriously and systematically; (ii) develop a vision of water management for 2025 that is shared by water sector specialists as well as international, national, and regional decision-makers in government, the private sector, and civil society; and (iii) provide input to a Framework for Action to be elaborated by the Global Water Partnership, with steps to go from vision to action, including recommendations to funding agencies for investment priorities. The exercise was characterized by the principles of: (i) a participatory approach with extensive consultation; (ii) innovative thinking; (iii) central analysis to assure integration and co-ordination; and (iv) emphasis on communication with groups outside the water sector. The primary activities included developing global water scenarios that fed into regional consultations and sectoral consultations on water for food, water for people (water supply and sanitation), and water and environment. These consultations formulated the regional and sectoral visions that were synthesized to form the World Water Vision. The findings from this exercise were reported and debated at the Second World Water Forum and the Ministerial Conference held in The Hague, The Netherlands, during April 2000. This paper

  14. STEREO's Extreme UltraViolet Imager (EUVI)

    NASA Technical Reports Server (NTRS)

    2007-01-01

    At a pixel resolution of 2048x2048, the STEREO EUVI instrument provides views of the Sun in ultraviolet light that rival the full-disk views of SOHO/EIT. This movie shows the Sun in late January 2007 through the 171 Angstrom (extreme ultraviolet) filter, which is characteristic of iron ions (missing eight and nine electrons) at about 1 million degrees. There is a short data gap in the latter half of the movie that creates a freeze and then a jump in the data view.

  15. Surface Stereo Imager on Mars, Side View

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. Developing stereo image based robot control system

    SciTech Connect

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W.

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with increasing hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that yield a 3-dimensional image or movie are very interesting, but they have few applications in control systems. A stereo image pair carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.
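
    The abstract gives no implementation details, so the following Python sketch shows only one plausible scheme of this kind, under stated assumptions: OpenCV block matching produces a disparity map from a rectified grayscale pair, and the robot steers toward the image third with the fewest "near" pixels (the disparity threshold and the three-way split are illustrative choices, not the paper's method):

      import cv2
      import numpy as np

      def steering_command(left_gray, right_gray, near_disp=32.0):
          # Disparity from block matching; StereoBM returns fixed-point values x16.
          stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
          disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
          h, w = disp.shape
          # Examine the lower half of the image, split into left/center/right thirds.
          thirds = [disp[h//2:, i*w//3:(i+1)*w//3] for i in range(3)]
          blocked = [np.mean(t > near_disp) for t in thirds]  # fraction of near pixels
          return ("left", "forward", "right")[int(np.argmin(blocked))]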

  17. Dramatic Improvements to Feature Based Stereo

    NASA Technical Reports Server (NTRS)

    Smelyansky, V. N.; Morris, R. D.; Kuehnel, F. O.; Maluf, D. A.; Cheeseman, P.

    2004-01-01

    The camera registration extracted from feature-based stereo is usually considered sufficient to accurately localize the 3D points. However, for natural scenes the feature localization is not as precise as in man-made environments. This results in small camera registration errors. We show that even very small registration errors result in large errors in dense surface reconstruction. We describe a method for registering entire images to the inaccurate surface model. This gives small but crucially important improvements to the camera parameters. The new registration gives dramatically better dense surface reconstruction.

  18. Venus surface roughness and Magellan stereo data

    NASA Technical Reports Server (NTRS)

    Maurice, Kelly E.; Leberl, Franz W.; Norikane, L.; Hensley, Scott

    1994-01-01

    Presented are results of studies to develop tools useful for the analysis of Venus surface shape and roughness. The work focused on Maxwell Montes. The analyses employ data acquired by NASA's Magellan satellite and are primarily concerned with deriving measurements of the Venusian surface using Magellan stereo SAR. Roughness was considered by means of a theoretical analysis based on digital elevation models (DEMs), on single Magellan radar images combined with radiometer data, and on the use of multiple overlapping Magellan radar images from cycles 1, 2, and 3, again combined with collateral radiometer data.

  19. Human gene therapy for RPE65 isomerase deficiency activates the retinoid cycle of vision but with slow rod kinetics.

    PubMed

    Cideciyan, Artur V; Aleman, Tomas S; Boye, Sanford L; Schwartz, Sharon B; Kaushal, Shalesh; Roman, Alejandro J; Pang, Ji-Jing; Sumaroka, Alexander; Windsor, Elizabeth A M; Wilson, James M; Flotte, Terence R; Fishman, Gerald A; Heon, Elise; Stone, Edwin M; Byrne, Barry J; Jacobson, Samuel G; Hauswirth, William W

    2008-09-30

    The RPE65 gene encodes the isomerase of the retinoid cycle, the enzymatic pathway that underlies mammalian vision. Mutations in RPE65 disrupt the retinoid cycle and cause a congenital human blindness known as Leber congenital amaurosis (LCA). We used adeno-associated virus-2-based RPE65 gene replacement therapy to treat three young adults with RPE65-LCA and measured their vision before and up to 90 days after the intervention. All three patients showed a statistically significant increase in visual sensitivity at 30 days after treatment localized to retinal areas that had received the vector. There were no changes in the effect between 30 and 90 days. Both cone- and rod-photoreceptor-based vision could be demonstrated in treated areas. For cones, there were increases of up to 1.7 log units (i.e., 50 fold); and for rods, there were gains of up to 4.8 log units (i.e., 63,000 fold). To assess what fraction of full vision potential was restored by gene therapy, we related the degree of light sensitivity to the level of remaining photoreceptors within the treatment area. We found that the intervention could overcome nearly all of the loss of light sensitivity resulting from the biochemical blockade. However, this reconstituted retinoid cycle was not completely normal. Resensitization kinetics of the newly treated rods were remarkably slow and required 8 h or more for the attainment of full sensitivity, compared with <1 h in normal eyes. Cone-sensitivity recovery time was rapid. These results demonstrate dramatic, albeit imperfect, recovery of rod- and cone-photoreceptor-based vision after RPE65 gene therapy.

  20. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  1. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements on image matching. Firstly, shape factor, fuzzy mathematics, and gray-scale projection are introduced into the design of a synthetical matching measure. Secondly, the topological connecting relations of matching points in a Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has higher matching speed and accuracy than the pyramid image matching algorithm based on gray-scale correlation.
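
    For a rectified close-range pair, the epipolar constraint reduces the search for a conjugate point to a single image row. The Python sketch below shows only that one ingredient, normalized cross-correlation along the epipolar line; the paper's additional machinery (shape factor, fuzzy measures, Delaunay topology, constrained least-squares refinement) is omitted, and the window and search sizes are illustrative assumptions:

      import numpy as np

      def ncc(a, b):
          # Normalized cross-correlation of two equally sized patches.
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

      def match_on_epipolar(left, right, row, col, half=7, search=40):
          # For rectified images the epipolar line of (row, col) is row itself.
          tmpl = left[row-half:row+half+1, col-half:col+half+1].astype(float)
          best_c, best_s = col, -1.0
          for c in range(max(half, col - search), min(right.shape[1] - half, col + 1)):
              cand = right[row-half:row+half+1, c-half:c+half+1].astype(float)
              s = ncc(tmpl, cand)
              if s > best_s:
                  best_c, best_s = c, s
          return best_c, best_s  # conjugate column and its correlation score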

  2. Stereo Pair: Wellington, New Zealand

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Wellington, the capital city of New Zealand, is located on the shores of Port Nicholson, a natural harbor at the south end of North Island. The city was founded in 1840 by British emigrants and now has a regional population of more than 400,000 residents. As seen here, the natural terrain imposes strong control over the urban growth pattern (urban features generally appear gray or white in this view). Rugged hills generally rising to 300 meters (1,000 feet) help protect the city and harbor from strong winter winds.

    New Zealand is seismically active and faults are readily seen in the topography. The Wellington Fault forms the straight northwestern (left) shoreline of the harbor. Toward the southwest (down) the fault crosses through the city, then forms linear canyons in the hills before continuing offshore at the bottom. Toward the northeast (upper right) the fault forms the sharp mountain front along the northern edge of the heavily populated Hutt Valley.

    This stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with an enhanced true color Landsat7 satellite image. The topography data are used to create two differing perspectives of a single image, one perspective for each eye. In doing so, each point in the image is shifted slightly, depending on its elevation. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions.
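
    The per-pixel shift described above can be sketched directly. The Python fragment below synthesizes a left/right pair from a single grayscale image draped over a same-sized elevation grid, shifting each pixel horizontally in proportion to its elevation (the exaggeration factor is an illustrative assumption, and unfilled pixels are simply left black):

      import numpy as np

      def synthesize_stereo(image, dem, exaggeration=0.05):
          # image and dem: 2-D arrays of identical shape.
          h, w = dem.shape
          rel = (dem - dem.min()) / (np.ptp(dem) + 1e-9)    # 0..1 relative elevation
          shift = (rel * exaggeration * w / 2).astype(int)  # per-pixel parallax, pixels
          cols = np.arange(w)
          left, right = np.zeros_like(image), np.zeros_like(image)
          for r in range(h):
              left[r, np.clip(cols + shift[r], 0, w - 1)] = image[r]
              right[r, np.clip(cols - shift[r], 0, w - 1)] = image[r]
          return left, right  # view left image with left eye, right with right eye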

    Landsat satellites have provided visible light and infrared images of the Earth continuously since 1972. SRTM topographic data match the 30 meter (99 foot) spatial resolution of most Landsat images and will provide a valuable complement for studying the historic and growing Landsat data archive. The Landsat 7 Thematic Mapper image used here was provided to the SRTM project by the United States Geological Survey, Earth Resources Observation Systems (EROS) data Center, Sioux Falls, South Dakota.

    Elevation data

  3. SRTM Stereo Pair: Fiji Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The Sovereign Democratic Republic of the Fiji Islands, commonly known as Fiji, is an independent nation consisting of some 332 islands surrounding the Koro Sea in the South Pacific Ocean. This topographic image shows Viti Levu, the largest island in the group. With an area of 10,429 square kilometers (about 4000 square miles), it comprises more than half the area of the Fiji Islands. Suva, the capital city, lies on the southeast shore. The Nakauvadra, the rugged mountain range running from north to south, has several peaks rising above 900 meters (about 3000 feet). Mount Tomanivi, in the upper center, is the highest peak at 1324 meters (4341 feet). The distinct circular feature on the north shore is the Tavua Caldera, the remnant of a large shield volcano that was active about 4 million years ago. Gold has been mined on the margin of the caldera since the 1930s. The Nadrau plateau is the low relief highland in the center of the mountain range. The coastal plains in the west, northwest and southeast account for only 15 percent of Viti Levu's area but are the main centers of agriculture and settlement.

    This stereoscopic view was generated using preliminary topographic data from the Shuttle Radar Topography Mission. A computer-generated artificial light source illuminates the elevation data from the top (north) to produce a pattern of light and shadows. Slopes facing the light appear bright, while those facing away are shaded. Also, colors show the elevation as measured by SRTM. Colors range from green at the lowest elevations to pink at the highest elevations. This image contains about 1300 meters (4300 feet) of total relief. The stereoscopic effect was created by first draping the shading and colors back over the topographic data and then generating two differing perspectives, one for each eye. The 3-D perception is achieved by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the

  4. Vision Impairment and Blindness

    MedlinePlus

    ... TV may be hard to do. The leading causes of low vision and blindness in the United ... disorders, eye injuries and birth defects can also cause vision loss. Whatever the cause, lost vision cannot ...

  5. Impairments to Vision

    MedlinePlus

    ... Normal Vision, Diabetic Retinopathy, Age-related Macular Degeneration: in this ... pictures, fixate on the nose to simulate the vision loss. In diabetic retinopathy, the blood vessels in ...

  6. Retinal Detachment Vision Simulator

    MedlinePlus

    How does a detached or torn retina affect your vision? If a retinal tear is occurring, you may ...

  7. Characteristics of EUV Coronal Jets Observed with STEREO/SECCHI

    NASA Astrophysics Data System (ADS)

    Nisticò, G.; Bothmer, V.; Patsourakos, S.; Zimbardo, G.

    2009-10-01

    In this paper we present the first comprehensive statistical study of EUV coronal jets observed with the SECCHI (Sun Earth Connection Coronal and Heliospheric Investigation) imaging suites of the two STEREO spacecraft. A catalogue of 79 polar jets is presented, identified from simultaneous EUV and white-light coronagraph observations taken during the time period March 2007 to April 2008, when solar activity was at a minimum. The twin spacecraft angular separation increased during this time interval from 2 to 48 degrees. The appearances of the coronal jets were always correlated with underlying small-scale chromospheric bright points. A basic characterization of the morphology and identification of the presence of helical structure were established with respect to recently proposed models for their origin and temporal evolution. Though each jet appeared morphologically similar in the coronagraph field of view, in the sense of a narrow collimated outward flow of matter, at the source region in the low corona the jets showed different characteristics, which may correspond to different magnetic structures. A classification of the events with respect to previous jet studies shows that amongst the 79 events there were 37 Eiffel tower-type jet events, commonly interpreted as a small-scale (~35 arc sec) magnetic bipole reconnecting with the ambient unipolar open coronal magnetic fields at its loop tops, and 12 lambda-type jet events, commonly interpreted as reconnection with the ambient field happening at the bipole footpoints. Five events were termed micro-CME-type jet events because they resembled the classical coronal mass ejections (CMEs) but on much smaller scales. The remaining 25 cases could not be uniquely classified. Thirty-one of the total number of events exhibited a helical magnetic field structure, indicative of a torsional motion of the jet around its axis of propagation. A few jets are also found in equatorial coronal holes. In this study we present sample

  8. Binocular Vision

    PubMed Central

    Blake, Randolph; Wilson, Hugh

    2010-01-01

    This essay reviews major developments, empirical and theoretical, in the field of binocular vision during the last 25 years. We limit our survey primarily to work on human stereopsis, binocular rivalry, and binocular contrast summation, with discussion where relevant of single-unit neurophysiology and human brain imaging. We identify several key controversies that have stimulated important work on these problems. In the case of stereopsis, those controversies include position versus phase encoding of disparity, dependence of disparity limits on spatial scale, the role of occlusion in binocular depth and surface perception, and motion in 3D. In the case of binocular rivalry, controversies include eye versus stimulus rivalry, the role of "top-down" influences on rivalry dynamics, and the interaction of binocular rivalry and stereopsis. Concerning binocular contrast summation, the essay focuses on two representative models that highlight the evolving complexity in this field of study. PMID:20951722

  9. Robot Vision

    NASA Technical Reports Server (NTRS)

    Sutro, L. L.; Lerman, J. B.

    1973-01-01

    The operation of a system is described that was built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions: those on the left side of an object that can be viewed only by the left camera, and those on the right side of an object that can be viewed only by the right camera.

  10. Vision Screening

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Visi Screen OSS-C, marketed by Vision Research Corporation, incorporates image processing technology originally developed by Marshall Space Flight Center. Its advantage in eye screening is speed. Because it requires no response from a subject, it can be used to detect eye problems in very young children. An electronic flash from a 35 millimeter camera sends light into a child's eyes, which is reflected back to the camera lens. The photorefractor then analyzes the retinal reflexes generated and produces an image of the child's eyes, which enables a trained observer to identify any defects. The device is used by pediatricians, day care centers and civic organizations that concentrate on children with special needs.

  11. Stereo topography of Valhalla and Gilgamesh

    NASA Astrophysics Data System (ADS)

    Schenk, P.; McKinnon, W.; Moore, J.

    1997-03-01

    The geology and morphology of the large multiring impact structures Valhalla and Gilgamesh have been used to infer ways in which the interior structure and properties of the large icy satellites Callisto and Ganymede differ from rocky bodies. These earlier studies were made in the absence of topographic data showing the depths of large impact basins and the degree to which relief has been preserved at large and small scales. Using Voyager stereo images of these basins, we have constructed the first detailed topographic maps of these large basins. These maps reveal the absence of deep topographic depressions, but show that multi-kilometer relief is preserved near the center of Valhalla. Digital Elevation Models (DEM) of these basins were produced using an automated digital stereogrammetry program developed at LPI for use with Voyager and Viking images. The Voyager images used here were obtained from distances of 80,000 to 125,000 km. As a result, the formal vertical resolution for both Valhalla and Gilgamesh maps is about 0.5 km. Relative elevations only are mapped as no global topographic datum exists for the Galilean satellites. In addition, the stereo image models were used to remap the geology and structure of these multiring basins in detail.

  12. Deep 'Stone Soup' Trenching by Phoenix (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Digging by NASA's Phoenix Mars Lander on Aug. 23, 2008, during the 88th sol (Martian day) since landing, reached a depth about three times greater than in any trench Phoenix has excavated. The deep trench, informally called 'Stone Soup,' is at the borderline between two of the polygon-shaped hummocks that characterize the arctic plain where Phoenix landed.

    Stone Soup is in the center foreground of this stereo view, which appears three dimensional when seen through red-blue glasses. The view combines left-eye and right-eye images taken by the lander's Surface Stereo Imager on Sol 88 after the day's digging. The trench is about 25 centimeters (10 inches) wide and about 18 centimeters (7 inches) deep.

    When digging trenches near polygon centers, Phoenix has hit a layer of icy soil, as hard as concrete, about 5 centimeters or 2 inches beneath the ground surface. In the Stone Soup trench at a polygon margin, the digging has not yet hit an icy layer like that.

    Stone Soup is toward the left, or west, end of the robotic arm's work area on the north side of the lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  13. Infrared stereo camera for human machine interface

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved performance of imaging hardware, but also when the operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the contrast available. A significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat-panel 3D displays is now possible due to the number of 3D displays entering the consumer market. Such displays require appropriate and human-friendly stereo IR video input in order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has an auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. Discussion of the size, weight, and power requirements as well as integration onto the robot platform is given, along with a description of stand-alone operation.
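
    The paper does not spell out the auto-convergence algorithm; the Python sketch below shows only the underlying geometry: for a stereo baseline b and a fixation target at distance d, each camera toes in by half the convergence angle so the optical axes cross at the target, keeping its on-screen disparity near zero and thereby reducing image doubling (the baseline and distance values are illustrative):

      import math

      def convergence_deg(baseline_m, target_dist_m):
          # Full convergence angle between the two optical axes.
          return math.degrees(2 * math.atan(baseline_m / (2 * target_dist_m)))

      print(convergence_deg(0.065, 2.0))  # 65 mm baseline, 2 m target -> ~1.9 degrees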

  14. Characteristics of stereo reproduction with parametric loudspeakers

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa

    2012-05-01

    A parametric loudspeaker utilizes the nonlinearity of a medium and is known as a super-directivity loudspeaker. The parametric loudspeaker is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural reproduction sound systems for public address in museums, stations, streets, etc. In this paper, we discuss characteristics of stereo reproduction with two parametric loudspeakers by comparing them with those of two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization in a wide listening area. The binaural information was ILD (Interaural Level Difference) or ITD (Interaural Time Delay). The parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz, and 4 kHz pure tones and pink noise. Three young males listened to test signals 10 times in each listening condition. Subjective test results showed that listeners at the three typical listening positions perceived correct sound localization of all signals using the parametric loudspeakers, much as with the ordinary dynamic loudspeakers, except for the case of sinusoidal waves with ITD. It was determined that the parametric loudspeaker could eliminate the conflict between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppresses the crosstalk components.

  15. Stereo Image of Mt. Usu Volcano

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On April 3, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite captured this image of the erupting Mt. Usu volcano in Hokkaido, Japan. This anaglyph stereo image is of Mt. Usu volcano. On Friday, March 31, more than 15,000 people were evacuated by helicopter, truck, and boat from the foot of Usu, which began erupting from the northwest flank, shooting debris and plumes of smoke streaked with blue lightning thousands of feet into the air. Although no lava gushed from the mountain, rocks and ash continued to fall after the eruption. The region was shaken by thousands of tremors before the eruption. People said they could taste grit from the ash, which was spewed as high as 2,700 meters (8,850 feet) into the sky and fell to coat surrounding towns. A 3-D view can be obtained by looking through stereo glasses, with the blue film over your left eye and the red film over your right eye at the same time. North is on your right-hand side. For more information, see the 'When Rivers of Rock Flow' ASTER web page. Image courtesy of MITI, ERSDAC, JAROS, and the U.S./Japan ASTER Science Team.

  16. Opportunity's Surroundings After Sol 1820 Drive (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841

    [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841

    NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009).

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches).

    The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  17. Low Vision Aids and Low Vision Rehabilitation

    MedlinePlus

    ... The future will offer even more solutions. Newer technology for low vision aids While low vision devices ... magnifiers have long been the standard in assistive technology, advances in consumer electronics are also improving quality ...

  18. Feasibility of remote evaporation and precipitation estimates. [by stereo images

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.

    1974-01-01

    Remote sensing by means of stereo images obtained from flown cameras and scanners provides the potential to monitor the dynamics of pollutant mixing over large areas. Moreover, stereo technology may permit monitoring of pollutant concentration and mixing with sufficient detail to ascertain the structure of a polluted air mass. Consequently, stereo remote systems can be employed to supply data for setting adequate regional standards on air quality. A method of remote sensing using stereo images is described. Preliminary results concerning the planar extent of a plume, based on comparison with ground measurements by an alternate method (e.g., a remote hot-wire anemometer technique), support the feasibility of using stereo remote sensing systems.

  19. Unexpected spatial intensity distributions and onset timing of solar electron events observed by closely spaced STEREO spacecraft

    NASA Astrophysics Data System (ADS)

    Klassen, A.; Dresing, N.; Gómez-Herrero, R.; Heber, B.; Müller-Mellin, R.

    2016-09-01

    We present multi-spacecraft observations of four solar electron events using measurements from the Solar Electron Proton Telescope (SEPT) and the Electron Proton Helium INstrument (EPHIN) on board the STEREO and SOHO spacecraft, respectively, occurring between 11 October 2013 and 1 August 2014, during the approaching superior conjunction period of the two STEREO spacecraft. At this time the longitudinal separation angle between STEREO-A (STA) and STEREO-B (STB) was less than 72°. The parent particle sources (flares) of the four investigated events were situated close to, in between, or to the west of the STEREO magnetic footpoints. The STEREO measurements revealed a strong difference in electron peak intensities (factor ≤12), showing unexpected intensity distributions at 1 AU, although the two spacecraft had nominally nearly the same angular footpoint separation from the flaring active region (AR), or their magnetic footpoints were both situated eastwards of the parent particle source. Furthermore, the events detected by the two STEREO spacecraft show strongly unexpected onset timing with respect to each other: the spacecraft magnetically best connected to the flare detected a later arrival of electrons than the other one. This leads us to suggest the concept of a rippled peak intensity distribution at 1 AU formed by narrow peaks (fingers) superposed on a quasi-uniform Gaussian distribution. Additionally, two of the four investigated solar energetic particle (SEP) events show a so-called circumsolar distribution, and their characteristics make it plausible to suggest a two-component particle injection scenario forming an unusual, non-uniform intensity distribution at 1 AU.

  20. Current state of the art of vision based SLAM

    NASA Astrophysics Data System (ADS)

    Muhammad, Naveed; Fofi, David; Ainouz, Samia

    2009-02-01

    The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping, or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision-based SLAM, and many different approaches exist to solve them. This paper gives a classification of state-of-the-art vision-based SLAM techniques in terms of (i) imaging systems used for performing SLAM, which include single cameras, stereo pairs, multiple camera rigs, and catadioptric sensors; (ii) features extracted from the environment in order to perform SLAM, which include point features and line/edge features; (iii) initialisation of landmarks, which can either be delayed or undelayed; (iv) SLAM techniques used, which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment; and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo-pair-based EKF SLAM on synthetic data. Results show the technique works successfully in the presence of considerable amounts of sensor noise. We believe that the state of the art presented in this paper can serve as a basis for future research in the area of vision-based SLAM, permitting further research in the area to be carried out in an efficient and application-specific way.
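
    As a hedged sketch of the EKF SLAM flavor evaluated in the paper, the Python fragment below shows a minimal predict/update cycle for a planar robot state (x, y, theta) with point landmarks appended to the state vector. For brevity it assumes landmark observations already triangulated into the world frame (e.g., from a stereo pair); a full implementation would use a range-bearing measurement model, landmark initialisation, and data association:

      import numpy as np

      def predict(mu, P, v, w, dt, Q):
          # mu = [x, y, th, l1x, l1y, ...]; only the robot part moves.
          x, y, th = mu[:3]
          mu = mu.copy()
          mu[:3] = [x + v*dt*np.cos(th), y + v*dt*np.sin(th), th + w*dt]
          F = np.eye(len(mu))                  # motion Jacobian (identity off-robot)
          F[0, 2] = -v*dt*np.sin(th)
          F[1, 2] = v*dt*np.cos(th)
          P = F @ P @ F.T
          P[:3, :3] += Q                       # process noise on the robot state only
          return mu, P

      def update(mu, P, z, lm_index, R):
          # z: observed world-frame position of landmark lm_index (simplifying assumption).
          j = 3 + 2*lm_index
          H = np.zeros((2, len(mu)))
          H[0, j], H[1, j+1] = 1.0, 1.0        # direct observation of landmark position
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          mu = mu + K @ (z - mu[j:j+2])
          P = (np.eye(len(mu)) - K @ H) @ P
          return mu, P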

  1. Comparison of experimental vision performance testing techniques, including the implementation of an active matrix electrophoretic ink display

    NASA Astrophysics Data System (ADS)

    Swinney, Mathew W.; Marasco, Peter L.; Heft, Eric L.

    2007-04-01

    Standard black-and-white printed targets have been used for numerous vision-related experiments, and are ideal with respect to contrast and spectral uniformity in the visible and near-infrared (NIR) regions of the electromagnetic (EM) spectrum. However, these targets lack the ability to refresh, update, or perform as a real-time, dynamic stimulus, which limits their use in various standard vision performance measurement techniques. Emissive displays, such as LCDs, possess some of the attributes printed targets lack, but come with a disadvantage of their own: LCDs lack the spectral uniformity of printed targets, making them of debatable value for presenting test targets in the near- and short-wave infrared regions of the spectrum. Yet a new option has recently become viable that may retain the favorable attributes of both of the previously mentioned alternatives. The electrophoretic ink display is a dynamic, refreshable, and easily manipulated display that performs much like printed targets with respect to spectral uniformity. This paper compares and contrasts the various techniques that can be used to measure observer visual performance through night vision devices and imagers, focusing on the visible to infrared region of the EM spectrum. Furthermore, it quantifies the electrophoretic ink display option, determining its advantages and the situations for which it is best suited.

  2. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
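
    As a worked illustration of how a 0.32-pixel disparity error maps into down-range error, the usual stereo relation Z = fB/d gives sigma_Z ~ (Z^2 / fB) * sigma_d. The focal length and baseline below are assumed values for illustration only; the report analyzed here does not state them.

        # Range from disparity: Z = f*B/d, so a disparity error sigma_d maps
        # to a down-range error sigma_Z ~= Z**2 * sigma_d / (f * B).
        f_px = 1000.0     # focal length in pixels -- assumed, not from the report
        B_m = 0.30        # stereo baseline in metres -- assumed
        sigma_d = 0.32    # disparity standard deviation reported above (pixels)

        for Z in (2.0, 5.0, 10.0):   # example ranges in metres
            sigma_Z = Z ** 2 * sigma_d / (f_px * B_m)
            print(f"Z = {Z:4.1f} m -> sigma_Z = {100 * sigma_Z:.1f} cm")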

  3. Intertwining Of Teleoperation And Computer Vision

    NASA Astrophysics Data System (ADS)

    Bloom, B. C.; Duane, G. S.; Epstein, M. A.; Magee, M.; Mathis, D. W.; Nathan, M. J.; Wolfe, W. J.

    1987-01-01

    In the rapid pursuit of automation, it is sometimes overlooked that an elaborate human-machine interplay is still necessary, despite the fact that a fully automated system, by definition, would not require a human interface. In the future, real-time sensing, intelligent processing, and dextrous manipulation will become more viable, but until then it is necessary to use humans for many critical processes. It is not obvious, however, how automated subsystems could account for human intervention, especially if a philosophy of "pure" automation dominates the design. Teleoperation, by contrast, emphasizes the creation of hardware pathways (e.g., hand-controllers, exoskeletons) to quickly communicate low-level control data to various mechanisms, while providing sensory feedback in a format suitable for human consumption (e.g., stereo displays, force reflection), leaving the "intelligence" to the human. These differences in design strategy, both hardware and software, make it difficult to tie automation and teleoperation together, while allowing for graceful transitions at the appropriate times. In no area of artificial intelligence is this problem more evident than in computer vision. Teleoperation typically uses video displays (monochrome/color, monoscopic/stereo) with contrast enhancement and gain control without any digital processing of the images. However, increases in system performance such as automatic collision avoidance, path finding, and object recognition depend on computer vision techniques. Basically, computer vision relies on the digital processing of the images to extract low-level primitives such as boundaries and regions that are used in higher-level processes for object recognition and position estimation. Real-time processing of complex environments is currently unattainable, but there are many aspects of the processing that are useful for situation assessment, provided it is understood the human can assist in the more time-consuming steps.

  4. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope

    PubMed Central

    Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.

    2015-01-01

    With the rapid progress in the development of optoelectronic components and computational power, 3D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This paper proposes a new approach to measure tiny internal 3D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated for comparison with corresponding X-ray 3D data as ground truth, and the quantification was analyzed by the Iterative Closest Point algorithm. PMID:26640425

  5. Axial-Stereo 3-D Optical Metrology for Inner Profile of Pipes Using a Scanning Laser Endoscope

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Johnston, Richard S.; Melville, C. David; Seibel, Eric J.

    2015-07-01

    With the rapid progress in the development of optoelectronic components and computational power, 3-D optical metrology has become more and more popular in manufacturing and quality control due to its flexibility and high speed. However, most optical metrology methods are limited to external surfaces. This article proposes a new approach to measure tiny internal 3-D surfaces with a scanning fiber endoscope and an axial-stereo vision algorithm. A dense, accurate point cloud of internally machined threads was generated for comparison with corresponding X-ray 3-D data as ground truth, and the quantification was analyzed by the Iterative Closest Point algorithm.
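
    The Iterative Closest Point comparison step mentioned in both records can be sketched generically. Below is a minimal point-to-point ICP loop in Python using an SVD (Kabsch) alignment per iteration; it is a textbook version offered for reference, not the authors' implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, iters=50):
            # Rigidly align src (N,3) to dst (M,3); returns R, t such that
            # dst ~ src @ R.T + t.
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(iters):
                moved = src @ R.T + t
                _, idx = tree.query(moved)          # nearest-neighbour pairs
                p, q = moved, dst[idx]
                pc, qc = p - p.mean(0), q - q.mean(0)
                U, _, Vt = np.linalg.svd(pc.T @ qc)  # Kabsch best-fit rotation
                D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
                dR = Vt.T @ D @ U.T                  # reflection-safe rotation
                dt = q.mean(0) - p.mean(0) @ dR.T
                R, t = dR @ R, dR @ t + dt           # accumulate the increment
            return R, t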

  6. Spirit Near 'Stapledon' on Sol 1802 (Stereo)

    NASA Technical Reports Server (NTRS)

    2009-01-01

    [figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11781 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11781

    NASA Mars Exploration Rover Spirit used its navigation camera for the images assembled into this stereo, full-circle view of the rover's surroundings during the 1,802nd Martian day, or sol, (January 26, 2009) of Spirit's mission on the surface of Mars. South is at the center; north is at both ends.

    This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left.

    Spirit had driven down off the low plateau called 'Home Plate' on Sol 1782 (January 6, 2009) after spending 12 months on a north-facing slope on the northern edge of Home Plate. The position on the slope (at about the 9-o'clock position in this view) tilted Spirit's solar panels toward the sun, enabling the rover to generate enough electricity to survive its third Martian winter. Tracks at about the 11-o'clock position of this panorama can be seen leading back to that 'Winter Haven 3' site from the Sol 1802 position about 10 meters (33 feet) away. For scale, the distance between the parallel wheel tracks is about one meter (40 inches).

    Where the receding tracks bend to the left, a circular pattern resulted from Spirit turning in place at a soil target informally named 'Stapledon' after William Olaf Stapledon, a British philosopher and science-fiction author who lived from 1886 to 1950. Scientists on the rover team suspected that the soil in that area might have a high concentration of silica, resembling a high-silica soil patch discovered east of Home Plate in 2007. Bright material visible in the track furthest to the right was examined with Spirit's alpha particle X-ray spectrometer and found, indeed, to be rich in silica.

    The team laid plans to drive Spirit from

  7. Multi-view horizon-driven sea plane estimation for stereo wave imaging on moving vessels

    NASA Astrophysics Data System (ADS)

    Bergamasco, Filippo; Benetazzo, Alvise; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2016-10-01

    In the last few years stereo imaging has gained popularity as an effective tool to investigate wind sea waves at short and medium scales. Given the advances of computer vision techniques, the recovery of a scattered point cloud from a sea surface area is nowadays a well-consolidated technique producing excellent results both in terms of wave data resolution and accuracy. Nevertheless, almost all the subsequent analysis tasks, from the recovery of directional wave spectra to the estimation of significant wave height, are bound to two limiting conditions. First, wave data are required to be aligned to the mean sea plane. Second, a uniform distribution of 3D point samples is assumed. Since the stereo-camera rig is placed tilted with respect to the sea surface, perspective distortion does not allow these conditions to be met. Errors due to this problem are even more challenging if the optical instrumentation is mounted on a moving vessel, so that the mean sea plane cannot simply be obtained by averaging data from multiple subsequent frames. We address the first problem with two main contributions. First, we propose a novel horizon estimation technique to recover the attitude of a moving stereo rig with respect to the sea plane. Second, an effective weighting scheme is described to account for the non-uniform sampling of the scattered data in the estimation of the sea-plane distance. The interplay of the two allows us to provide a precise point cloud alignment without any external positioning sensor or rig viewpoint pre-calibration. The advantages of the proposed technique are evaluated in an experimental section spanning both synthetic and real-world scenarios.
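
    The plane-fitting step with weights can be illustrated by a weighted least-squares fit of z = ax + by + c, where the per-point weights stand in for the paper's non-uniform-sampling correction (the actual scheme is more elaborate); a minimal sketch:

        import numpy as np

        def fit_sea_plane(pts, w):
            # Weighted least-squares fit of z = a*x + b*y + c to an (N,3)
            # cloud; w holds per-point weights compensating the non-uniform
            # stereo sampling of the sea surface.
            A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(A * sw[:, None], pts[:, 2] * sw,
                                       rcond=None)
            a, b, c = coef
            n = np.array([-a, -b, 1.0])
            return n / np.linalg.norm(n), c   # unit plane normal and z-offset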

  8. STEREO Observations of Solar Wind in 2007-2014

    NASA Astrophysics Data System (ADS)

    Jian, Lan; Luhmann, Janet; Russell, Christopher; Blanco-Cano, Xochitl; Kilpua, Emilia; Li, Yan

    2016-04-01

    Since the launch of the twin STEREO spacecraft, we have been monitoring the solar wind and providing Level 3 event lists of large-scale solar wind and particle events to the public (http://www-ssc.igpp.ucla.edu/forms/stereo/stereo_level_3.html). The interplanetary coronal mass ejections (ICMEs), stream interaction regions (SIRs), interplanetary shocks, and solar energetic particles (based on high energy telescope data) have been surveyed for 2007-2014, before STEREO-A entered superior solar conjunction and contact with STEREO-B was lost. In conjunction with our previous observations of the same solar wind structures in 1995-2009 using Wind/ACE data and the same identification criteria, we study the solar cycle variations of these structures, especially comparing the same phase of solar cycles 23 and 24. Although the sunspot number at solar maximum 24 is only 60% of the level at the last solar maximum, Gopalswamy et al. (2015a, b) found there were more halo CMEs in cycle 24 and the number of magnetic clouds did not decline either. We examine whether the two vantage points of STEREO provide a view consistent with the above finding. In addition, because the twin STEREO spacecraft have experienced the full range of longitudinal separation, 0-360 degrees, they have provided numerous opportunities for multipoint observations. We will report findings on the spatial scope of ICMEs, including their driven shocks, and the stability of SIRs from the large event base.

  9. STEREO Space Weather and the Space Weather Beacon

    NASA Technical Reports Server (NTRS)

    Biesecker, D. A.; Webb, D. F.; SaintCyr, O. C.

    2007-01-01

    The Solar Terrestrial Relations Observatory (STEREO) is first and foremost a solar and interplanetary research mission, with one of the natural applications being in the area of space weather. The obvious potential for space weather applications is so great that NOAA has worked to incorporate the real-time data into their forecast center as much as possible. A subset of the STEREO data will be continuously downlinked in a real-time broadcast mode, called the Space Weather Beacon. Within the research community there has been considerable interest in conducting space weather related research with STEREO. Some of this research is geared towards making an immediate impact while other work is still very much in the research domain. There are many areas where STEREO might contribute and we cannot predict where all the successes will come. Here we discuss how STEREO will contribute to space weather and many of the specific research projects proposed to address STEREO space weather issues. We also discuss some specific uses of the STEREO data in the NOAA Space Environment Center.

  10. Stereo Imaging Velocimetry System and Method

    NASA Technical Reports Server (NTRS)

    McDowell, Mark (Inventor)

    2003-01-01

    A system and a method are provided for measuring three-dimensional velocities at a plurality of points in a fluid employing at least two cameras positioned approximately perpendicular to one another. Image frames captured by the cameras may be filtered using background subtraction with outlier rejection and spike-removal filtering. The cameras may be calibrated to accurately represent image coordinates in a world coordinate system using calibration grids modified using warp transformations. The two-dimensional views of the cameras may be recorded for image processing and particle track determination. The tracer particles may be tracked on a two-dimensional basis and then stereo matched to obtain three-dimensional locations of the particles as a function of time so that velocities can be measured therefrom.
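
    The filtering stage described above (background subtraction with outlier rejection and spike-removal filtering) can be approximated in a few lines. This sketch uses a temporal-median background model and a small spatial median filter; the specific filters are assumptions rather than the patented method itself.

        import numpy as np
        from scipy.ndimage import median_filter

        def particle_masks(frames, thresh=30, k=3):
            # frames: (T, H, W) grayscale stack. The per-pixel temporal median
            # serves as the background; thresholding rejects outliers, and a
            # small spatial median filter removes isolated spike pixels.
            bg = np.median(frames, axis=0)
            masks = []
            for f in frames:
                fg = np.abs(f.astype(float) - bg) > thresh
                masks.append(median_filter(fg.astype(np.uint8), size=k))
            return np.stack(masks)   # one binary particle mask per frame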

  11. 'Snow White' Trench After Scraping (Stereo View)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This 3D view from the Surface Stereo Imager on NASA's Phoenix Mars Lander shows the trench informally named 'Snow White.' This anaglyph was taken after a series of scrapings by the lander's Robotic Arm on the 58th Martian day, or sol, of the mission (July 23, 2008). The scrapings were done in preparation for collecting a sample for analysis from a hard subsurface layer where soil may contain frozen water.

    The trench is 4 to 5 centimeters (about 2 inches) deep, about 23 centimeters (9 inches) wide and about 60 centimeters (24 inches) long.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Stereo View of Phoenix Test Sample Site

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This anaglyph image, acquired by NASA's Phoenix Lander's Surface Stereo Imager on Sol 7, the seventh day of the mission (June 1, 2008), shows a stereoscopic 3D view of the so-called 'Knave of Hearts' first-dig test area to the north of the lander. The Robotic Arm's scraping blade left a small horizontal depression above where the sample was taken.

    Scientists speculate that white material in the depression left by the dig could represent ice or salts that precipitated into the soil. This material is likely the same white material observed in the sample in the Robotic Arm's scoop.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  13. Brightness increase in an LCD stereo display

    NASA Astrophysics Data System (ADS)

    Rallison, Richard D.; Schicker, Scott R.

    1994-05-01

    A practical head mounted display (HMD) has to be light enough and bright enough to wear and view without undue strain on the user's head or eyes. A 10-pound CRT-based helmet is not always out of the question, but binocular or stereo HMDs using LCDs rather than CRTs need only weigh in at around one pound complete with electronics and are far more comfortable to wear. The space bandwidth product or pixel count of LCDs is now approaching that of CRT-type displays, but LCDs could use a big boost in brightness, especially for see-through designs. The see-through or head-up style has many user advantages, and this paper addresses ways to more efficiently transmit photons from the source to the eye in one such design. All of the components that are used to improve performance may be made holographically or in an alternate fashion. The most practical method of construction is probably a toss-up for some components.

  14. A Study of quiescent prominences using SDO and STEREO data

    NASA Astrophysics Data System (ADS)

    Panesar, Navdeep Kaur

    2014-05-01

    In this dissertation, we have studied the structure, dynamics and evolution of two quiescent prominences. Quiescent prominences are large structures mainly associated with the quiet Sun region. For the analysis, we have used the high spatial and temporal cadence data from the Solar Dynamics Observatory (SDO) and the Solar Terrestrial Relations Observatory (STEREO). We combined the observations from two different directions and studied the prominences in 3D. In the study of a polar crown prominence, we mainly investigated the prominence flows on the limb and found their association with on-disk brightenings. The merging of diffuse active region flux into the already formed chain of prominence caused several brightenings in the filament channel and also injected plasma upward with an average velocity of 15 km/s. In another study, we investigated the triggering mechanism of a quiescent tornado-like prominence. Flares from the neighboring active region triggered the tornado-like motions at the top of the prominence. The active region field contracts after the flare, which results in the expansion of the prominence cavity. The prominence helical magnetic field expands and plasma moves along the field lines, which appears as tornado-like activity. In addition, the thermal structure of the tornado-like prominence and the neighboring active region was investigated by analysing emission in six of the seven EUV channels from the SDO. These observational investigations add to our understanding of the structure and dynamics of quiescent prominences, which could be useful for theoretical prominence models.

  15. Enhanced operator perception through 3D vision and haptic feedback

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren

    2012-06-01

    Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.

  16. FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven

    2011-01-01

    High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, bilinear rectification, bilateral subtraction pre-filtering and Sum of Absolute Difference (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system was successfully fielded on Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
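
    The SAD disparity stage at the end of the pipeline has a direct (if much slower) software analogue. Below is a minimal winner-take-all SAD block matcher over a rectified grayscale pair, offered for reference; the window size and disparity range are illustrative, not the FPGA system's parameters.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sad_disparity(left, right, max_d=64, win=5):
            # Winner-take-all SAD block matching on a rectified grayscale pair;
            # left pixel (y, x) is matched against right pixel (y, x - d).
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            best = np.full(left.shape, np.inf, np.float32)
            disp = np.zeros(left.shape, np.int32)
            for d in range(max_d):
                cand = np.roll(right, d, axis=1)   # candidate disparity d
                cand[:, :d] = 1e6                  # penalize wrapped columns
                sad = uniform_filter(np.abs(left - cand), size=win)
                better = sad < best                # keep the lowest SAD per pixel
                best[better] = sad[better]
                disp[better] = d
            return disp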

  17. Explaining Polarization Reversals in STEREO Wave Data

    NASA Technical Reports Server (NTRS)

    Breneman, A.; Cattell, C.; Wygant, J.; Kersten, K.; Wilson, L. B., III; Dai, L.; Colpitts, C.; Kellogg, P. J.; Goetz, K.; Paradise, A.

    2012-01-01

    Recently, Breneman et al. reported observations of large-amplitude lightning and transmitter whistler mode waves from two STEREO passes through the inner radiation belt (L<2). Hodograms of the electric field in the plane transverse to the magnetic field showed that the transmitter waves underwent periodic polarization reversals. Specifically, their polarization would cycle through a pattern of right-hand to linear to left-hand polarization at a rate of roughly 200 Hz. The lightning whistlers were observed to be left-hand polarized at frequencies greater than the lower hybrid frequency and less than the transmitter frequency (21.4 kHz), and right-hand polarized otherwise. Only right-hand polarized waves in the whistler mode frequency range should exist in the inner radiation belt, and these reversals were not explained in the previous paper. We show, with a combination of observations and simulated wave superposition, that these polarization reversals are due to the beating of an incident electromagnetic whistler mode wave at 21.4 kHz and linearly polarized, symmetric lower hybrid sidebands Doppler-shifted from the incident wave by +/-200 Hz. The existence of the lower hybrid waves is consistent with the parametric decay mechanism of Lee and Kuo, whereby an incident whistler mode wave decays into symmetric, short wavelength lower hybrid waves and a purely growing (zero-frequency) mode. Like the lower hybrid waves, the purely growing mode is Doppler-shifted by 200 Hz as observed on STEREO. This decay mechanism in the upper ionosphere has been previously reported at equatorial latitudes and is thought to have a direct connection with explosive spread F enhancements. As such it may represent another dissipation mechanism of VLF wave energy in the ionosphere and may help to explain a deficit of observed lightning and transmitter energy in the inner radiation belts as reported by Starks et al.
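
    The beating mechanism is easy to reproduce numerically. In the toy superposition below, a right-hand circular carrier plus an x-aligned pair of sidebands offset by +/-200 Hz yields a transverse field whose rotation sense cycles through right-hand, linear, and left-hand polarization at the beat period. All amplitudes are arbitrary, chosen only to make the reversals visible; this is not the paper's simulation.

        import numpy as np

        f0, df, fs = 21400.0, 200.0, 1.0e6   # carrier, sideband offset, sampling
        t = np.arange(0, 0.02, 1 / fs)       # 20 ms, i.e. four 200 Hz beat cycles

        # right-hand circular carrier in the plane transverse to B
        Ex = np.cos(2 * np.pi * f0 * t)
        Ey = np.sin(2 * np.pi * f0 * t)
        # linearly polarized symmetric sidebands (x-aligned, arbitrary amplitude)
        Ex += 0.8 * (np.cos(2 * np.pi * (f0 + df) * t)
                     + np.cos(2 * np.pi * (f0 - df) * t))

        # sign of Ex*dEy - Ey*dEx gives the instantaneous rotation sense
        sense = Ex[:-1] * np.diff(Ey) - Ey[:-1] * np.diff(Ex)
        n = 460                              # roughly 10 carrier cycles per window
        coarse = [np.sign(sense[i:i + n].mean())
                  for i in range(0, sense.size - n, n)]
        print(coarse)  # alternating runs of +1 (RH) and -1 (LH) at the beat rate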

  18. On the Rim of 'Victoria Crater' (Stereo)

    NASA Technical Reports Server (NTRS)

    2006-01-01

    [figure removed for brevity, see original site] Left-eye view of a stereo pair for PIA08780

    [figure removed for brevity, see original site] Right-eye view of a stereo pair for PIA08780

    NASA's Mars rover Opportunity reached the rim of 'Victoria Crater' in Mars' Meridiani Planum region with a 26-meter (85-foot) drive during the rover's 951st Martian day, or sol (Sept. 26, 2006). After the drive, the rover's navigation camera took the three exposures combined into this view of the crater's interior. This crater has been the mission's long-term destination for the past 21 Earth months.

    A half mile in the distance one can see about 20 percent of the far side of the crater framed by the rocky cliffs in the foreground to the left and right of the image. The rim of the crater is composed of alternating promontories, rocky points towering approximately 70 meters (230 feet) above the crater floor, and recessed alcoves. The bottom of the crater is covered by sand that has been shaped into ripples by the Martian wind.

    The position at the end of the sol 951 drive is about six meters from the lip of an alcove called 'Duck Bay.' The rover team planned a drive for sol 952 that would move a few more meters forward, plus more imaging of the near and far walls of the crater.

    Victoria Crater is about five times wider than 'Endurance Crater,' which Opportunity spent six months examining in 2004, and about 40 times wider than 'Eagle Crater,' where Opportunity first landed.

    This view is presented as a cylindrical-perspective projection with geometric seam correction.

  19. STEREO database of interplanetary Langmuir electric waveforms

    NASA Astrophysics Data System (ADS)

    Briand, C.; Henri, P.; Génot, V.; Lormant, N.; Dufourg, N.; Cecconi, B.; Nguyen, Q. N.; Goetz, K.

    2016-02-01

    This paper describes a database of electric waveforms that is available at the Centre de Données de la Physique des Plasmas (CDPP, http://cdpp.eu/). This database is specifically dedicated to waveforms of Langmuir/Z-mode waves. These waves occur in numerous kinetic processes involving electrons in space plasmas. Statistical analysis of a large data set of such waves is then of interest, e.g., to study the relaxation of high-velocity electron beams generated at interplanetary shock fronts, in current sheets and magnetic reconnection regions, the transfer of energy between high and low frequencies, and the generation of electromagnetic waves. The Langmuir waveforms were recorded by the Time Domain Sampler (TDS) of the WAVES radio instrument on board the STEREO mission. In this paper, we detail the criteria used to identify the Langmuir/Z-mode waves among the whole set of waveforms of the STEREO spacecraft. A database covering the November 2006 to August 2014 period is provided. It includes electric waveforms expressed in the normalized frame (B, B × Vsw, B × (B × Vsw)), with B and Vsw the local magnetic field and solar wind velocity vectors, and the local magnetic field in the variance frame, in an interval of ±1.5 min around the time of the Langmuir event. Quicklooks are also provided that display the three components of the electric waveforms together with the spectrum of E∥, as well as the magnitude and components of the magnetic field in the 3 min interval, in the variance frame. Finally, the distribution of the Langmuir/Z-mode wave peak amplitudes is also analyzed.
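
    The normalized frame used for the waveforms is an orthonormal triad built from the local magnetic field and solar wind velocity. For reference, a direct construction (a sketch, assuming B and Vsw are not parallel):

        import numpy as np

        def bvsw_frame(B, Vsw):
            # Orthonormal triad along (B, B x Vsw, B x (B x Vsw)).
            B = np.asarray(B, dtype=float)
            e1 = B / np.linalg.norm(B)
            e2 = np.cross(B, Vsw)
            e2 = e2 / np.linalg.norm(e2)
            e3 = np.cross(e1, e2)   # parallel to B x (B x Vsw), unit length
            return np.vstack([e1, e2, e3])

        # usage: E_frame = bvsw_frame(B, Vsw) @ E_xyz  (rows project onto the triad)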

  20. 'Lyell' Panorama inside Victoria Crater (Stereo)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay.

    Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.'

    This view incorporates many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). It combines a stereo pair so that it appears three-dimensional when seen through blue-red glasses. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera.

    Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).

  1. MISR Stereo Imaging Distinguishes Smoke from Cloud

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These views of western Alaska were acquired by MISR on June 25, 2000 during Terra orbit 2775. The images cover an area of about 150 kilometers x 225 kilometers, and have been oriented with north to the left. The left image is from the vertical-viewing (nadir) camera, whereas the right image is a stereo 'anaglyph' that combines data from the forward-viewing 45-degree and 60-degree cameras. This image appears three-dimensional when viewed through red/blue glasses with the red filter over the left eye. It may help to darken the room lights when viewing the image on a computer screen.

    The Yukon River is seen wending its way from upper left to lower right. A forest fire in the Kaiyuh Mountains produced the long smoke plume that originates below and to the right of image center. In the nadir view, the high cirrus clouds at the top of the image and the smoke plume are similar in appearance, and the lack of vertical information makes them hard to differentiate. Viewing the right-hand image with stereo glasses, on the other hand, demonstrates that the scene consists of several vertically-stratified layers, including the surface terrain, the smoke, some scattered cumulus clouds, and streaks of high, thin cirrus. This added dimensionality is one of the ways MISR data helps scientists identify and classify various components of terrestrial scenes.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  2. A re-evaluation of the role of vision in the activity and communication of nocturnal primates.

    PubMed

    Bearder, S K; Nekaris, K A I; Curtis, D J

    2006-01-01

    This paper examines the importance of vision in the lives of nocturnal primates in comparison to diurnal and cathemeral species. Vision is the major sense in all primates and there is evidence that the eyesight of nocturnal species is more acute and variable than has previously been recognized. Case studies of the behaviour of a galago and a loris in open woodland habitats in relation to ambient light show that Galago moholi males are more likely to travel between clumps of vegetation along the ground when the moon is up, and during periods of twilight, whereas they retreat to more continuous vegetation and travel less when the moon sets. This is interpreted as a strategy for avoiding predators that hunt on the ground when it is dark. The travel distances of Loris lydekkerianus are not affected by moonlight but this species reduces its choice of food items from more mobile prey to mainly ants when the moon sets, indicating the importance of light when searching for high-energy supplements to its staple diet. Evidence is presented for the first time to indicate key aspects of nocturnal vision that would benefit from further research. It is suggested that the light and dark facial markings of many species convey information about species and individual identity when animals approach each other at night. Differences in the colour of the reflective eye-shine, and behavioural responses displayed when exposed to white torchlight, point to different kinds of nocturnal vision that are suited to each niche, including the possibility of some degree of colour discrimination. The ability of even specialist nocturnal species to see well in broad daylight demonstrates an inherent flexibility that would enable movement into diurnal niches. The major differences in the sensitivity and perceptual anatomy of diurnal lemurs compared to diurnal anthropoids, and the emergence of cathemerality in lemurs, is interpreted as a reflection of evolution from different ancestral stocks in very

  3. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on developments in stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that realizes interaction between AutoCAD and a digital photogrammetry system is offered through ObjectARX and pipeline technology. An experiment using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) was carried out to verify feasibility; the experimental results show that this scheme is feasible and of considerable value for integrating acquisition and editing.

  4. The infection algorithm: an artificial epidemic approach for dense stereo correspondence.

    PubMed

    Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne

    2006-01-01

    We present a new bio-inspired approach applied to a problem of stereo image matching. This approach is based on an artificial epidemic process, which we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction. It has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated. PMID:16953787
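
    The epidemic metaphor maps naturally onto a best-first match-propagation loop: matched ("infected") pixels transmit disparity hypotheses to their neighbours, which accept them only if a local similarity check passes. The sketch below is a generic rule set in that spirit, not the authors' exact automaton; the cost threshold and window size are assumptions.

        import heapq
        import numpy as np

        def infect(left, right, seeds, win=2, tol=8.0):
            # seeds: iterable of trusted (row, col, disparity) matches.
            H, W = left.shape
            disp = np.full((H, W), -1, np.int32)

            def cost(y, x, d):
                # mean absolute difference over a (2*win+1)^2 window;
                # inf if the window would leave either image.
                if not (win <= y < H - win and win <= x < W - win
                        and win <= x - d < W - win):
                    return np.inf
                a = left[y - win:y + win + 1, x - win:x + win + 1].astype(float)
                b = right[y - win:y + win + 1,
                          x - d - win:x - d + win + 1].astype(float)
                return float(np.abs(a - b).mean())

            heap = [(cost(y, x, d), y, x, d) for y, x, d in seeds]
            heapq.heapify(heap)
            while heap:                      # best (lowest-cost) carrier first
                c, y, x, d = heapq.heappop(heap)
                if c > tol or disp[y, x] != -1:
                    continue                 # immune: too dissimilar, or done
                disp[y, x] = d               # infect this pixel
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    for dd in (-1, 0, 1):    # allow small disparity drift
                        ny, nx, nd = y + dy, x + dx, d + dd
                        if (0 <= ny < H and 0 <= nx < W and nd >= 0
                                and disp[ny, nx] == -1):
                            heapq.heappush(heap, (cost(ny, nx, nd), ny, nx, nd))
            return disp                      # -1 where never infected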

  5. Reverse engineering physical models employing a sensor integration between 3D stereo detection and contact digitization

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Lin, Grier C. I.

    1997-12-01

    A vision-driven automatic digitization process for free-form surface reconstruction has been developed for reverse engineering physical models, using a coordinate measuring machine (CMM) equipped with a touch-triggered probe and a CCD camera. The process integrates 3D stereo detection, data filtering, Delaunay triangulation and adaptive surface digitization into a single process of surface reconstruction. Using this innovative approach, surface reconstruction can be implemented automatically and accurately. Least-squares B-spline surface models with controlled digitization accuracy can be generated for further application in product design and manufacturing processes. One industrial application indicates that this approach is feasible and that the processing time required in the reverse engineering process can be reduced by more than 85%.

  6. Brief Daily Periods of Unrestricted Vision Preserve Stereopsis in Strabismus

    PubMed Central

    Smith, Earl L.; Hung, Li-Fang; Harwerth, Ronald S.

    2011-01-01

    Purpose. This study examines whether brief periods of binocular vision could preserve stereopsis in monkeys reared with optical strabismus. Methods. Starting at 4 weeks of age, six infant monkeys were reared with a total of 30 prism diopters base-in split between the eyes. Two of the six monkeys wore prisms continuously, one for 4 weeks and one for 6 weeks. Four of the six monkeys wore prisms but had 2 hours of binocular vision daily, one for 4, one for 6, and two for 16 weeks. Five normally reared monkeys provided control data. Behavioral methods were used to measure spatial contrast sensitivity, eye alignment, and stereopsis with Gabor and random dot targets. Results. The same pattern of results was evident for both local and global stereopsis. For monkeys treated for 4 weeks, daily periods of binocular vision rescued stereopsis from the 10-fold reduction observed with continuous optical strabismus. Six weeks of continuous strabismus resulted in stereo blindness, whereas daily periods of binocular vision limited the reduction to a twofold loss from normal. Daily periods of binocular vision preserved stereopsis over 16 weeks of optical strabismus for one of the two monkeys. Conclusions. Two hours of daily binocular vision largely preserves local and global stereopsis in monkeys reared with optical strabismus. During early development, the effects of normal vision are weighed more heavily than those of abnormal vision. The manner in which the effects of visual experience are integrated over time reduces the likelihood that brief episodes of abnormal vision will cause abnormal binocular vision development. PMID:21398285

  7. (Computer vision and robotics)

    SciTech Connect

    Jones, J.P.

    1989-02-13

    The traveler attended the Fourth Aalborg International Symposium on Computer Vision at Aalborg University, Aalborg, Denmark. The traveler presented three invited lectures entitled "Concurrent Computer Vision on a Hypercube Multicomputer", "The Butterfly Accumulator and its Application in Concurrent Computer Vision on Hypercube Multicomputers", and "Concurrency in Mobile Robotics at ORNL", and a ten-minute editorial entitled "Is Concurrency an Issue in Computer Vision?". The traveler obtained information on current R&D efforts elsewhere in concurrent computer vision.

  8. Automated, highly reproducible, wide-field, light-based cortical mapping method using a commercial stereo microscope and its applications

    PubMed Central

    Jiang, Su; Liu, Ya-Feng; Wang, Xiao-Min; Liu, Ke-Fei; Zhang, Ding-Hong; Li, Yi-Ding; Yu, Ai-Ping; Zhang, Xiao-Hui; Zhang, Jia-Yi; Xu, Jian-Guang; Gu, Yu-Dong; Xu, Wen-Dong; Zeng, Shao-Qun

    2016-01-01

    We introduce a more flexible optogenetics-based mapping system attached to a stereo microscope, which offers automatic light stimulation of individual regions of interest in cortex expressing light-activated channelrhodopsin-2 in vivo. Combined with simultaneous recording of electromyography from specific forelimb muscles, we demonstrate that this system offers much better efficiency and precision in mapping the distinct domains for controlling limb muscles in the mouse motor cortex. Furthermore, the compact and modular design of the system also yields a simple and flexible implementation on different commercial stereo microscopes, and thus could be widely used among laboratories.

  9. Automated, highly reproducible, wide-field, light-based cortical mapping method using a commercial stereo microscope and its applications

    PubMed Central

    Jiang, Su; Liu, Ya-Feng; Wang, Xiao-Min; Liu, Ke-Fei; Zhang, Ding-Hong; Li, Yi-Ding; Yu, Ai-Ping; Zhang, Xiao-Hui; Zhang, Jia-Yi; Xu, Jian-Guang; Gu, Yu-Dong; Xu, Wen-Dong; Zeng, Shao-Qun

    2016-01-01

    We introduce a more flexible optogenetics-based mapping system attached to a stereo microscope, which offers automatic light stimulation of individual regions of interest in cortex expressing light-activated channelrhodopsin-2 in vivo. Combined with simultaneous recording of electromyography from specific forelimb muscles, we demonstrate that this system offers much better efficiency and precision in mapping the distinct domains for controlling limb muscles in the mouse motor cortex. Furthermore, the compact and modular design of the system also yields a simple and flexible implementation on different commercial stereo microscopes, and thus could be widely used among laboratories. PMID:27699114

  10. Space environment robot vision system

    NASA Technical Reports Server (NTRS)

    Wood, H. John; Eichhorn, William L.

    1990-01-01

    A prototype twin-camera stereo vision system for autonomous robots has been developed at Goddard Space Flight Center. Standard charge coupled device (CCD) imagers are interfaced with commercial frame buffers and direct memory access to a computer. The overlapping portions of the images are analyzed using photogrammetric techniques to obtain information about the position and orientation of objects in the scene. The camera head consists of two 510 x 492 x 8-bit CCD cameras mounted on individually adjustable mounts. The 16 mm efl lenses are designed for minimum geometric distortion. The cameras can be rotated in the pitch, roll, and yaw (pan angle) directions with respect to their optical axes. Calibration routines have been developed which automatically determine the lens focal lengths and pan angle between the two cameras. The calibration utilizes observations of a calibration structure with known geometry. Test results show the precision attainable is plus or minus 0.8 mm in range at 2 m distance using a camera separation of 171 mm. To demonstrate a task needed on Space Station Freedom, a target structure with a movable I-beam was built. The camera head can autonomously direct actuators to dock the I-beam to another one so that they could be bolted together.

  11. Observing Mercury: from Galileo to the stereo camera on the BepiColombo mission

    NASA Astrophysics Data System (ADS)

    Cremonese, Gabriele; Da Deppo, Vania; Naletto, Giampiero; Martellato, Elena; Debei, Stefano; Barbieri, Cesare; Bettanini, Carlo; Capria, Maria T.; Massironi, Matteo; Zaccariotto, Mirko

    2010-01-01

    After having observed the planets from his house in Padova using his telescope, in January 1611 Galileo wrote to Giuliano de Medici that Venus moves around the Sun, as Mercury does. Forty years ago, Giuseppe Colombo, professor of Celestial Mechanics in Padova, made a decisive step toward clarifying the rotational period of Mercury. Today, scientists and engineers of the Astronomical Observatory of Padova and of the University of Padova, brought together in the Center for Space Studies and Activities (CISAS) named after Giuseppe Colombo, are working to realize a stereo camera (STC) that will be on board the European (ESA) and Japanese (JAXA) space mission BepiColombo, devoted to the observation and exploration of the innermost planet. This paper describes the stereo camera, which is one of the channels of the SIMBIO-SYS instrument, aiming to produce global mapping of the surface with 3D images.

  12. Infrared stereo calibration for unmanned ground vehicle navigation

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as the Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
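
    With OpenCV, which the record cites, the core of a stereo calibration from detected board corners is only a few calls. The board geometry, square size, and the image_pairs variable below are assumptions for illustration; for IR imagery, the corner-detection step is precisely where the challenges discussed above arise.

        import cv2
        import numpy as np

        PATTERN = (9, 6)              # inner corners of the board -- assumed
        objp = np.zeros((9 * 6, 3), np.float32)
        objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.025  # 25 mm squares

        obj_pts, pts_l, pts_r = [], [], []
        for img_l, img_r in image_pairs:   # grayscale pairs (user-supplied)
            ok_l, c_l = cv2.findChessboardCorners(img_l, PATTERN)
            ok_r, c_r = cv2.findChessboardCorners(img_r, PATTERN)
            if ok_l and ok_r:              # keep views seen by both cameras
                obj_pts.append(objp)
                pts_l.append(c_l)
                pts_r.append(c_r)

        size = image_pairs[0][0].shape[::-1]
        _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
        # solve only for the extrinsics R, T between the two fixed cameras
        rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
            obj_pts, pts_l, pts_r, K1, d1, K2, d2, size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        print("reprojection RMS (px):", rms, " baseline (m):", np.linalg.norm(T))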

  13. STEREO Watches as Comet Encke Loses Its Tail

    NASA Video Gallery

    As comet Encke dipped inside the orbit of Mercury, STEREO A recorded its tail getting ripped off by a solar eruption on April 20, 2007. The eruption that hit Encke was a coronal mass ejection (CME)...

  14. STEREO Tracks Solar Storms From Sun To Earth

    NASA Video Gallery

    NASA's STEREO spacecraft and new data processing techniques have succeeded in tracking space weather events from their origin in the sun's corona to impact with the Earth, resolving a 40-year myste...

  15. [Beaded molecule imprinted polymer for stereo isomer separation].

    PubMed

    Meng, Z; Wang, J; Zhou, L; Wang, Q; Zhu, D

    1999-07-01

    A beaded molecularly imprinted polymer (MIP) was made by suspension polymerization. Particles 50-70 microns in diameter were collected and evaluated in HPLC mode to separate stereo isomers. The stereo isomers cinchonine and cinchonidine were successfully discriminated, with a selectivity factor of 2.89 and a resolution factor of 0.76. The stereo selectivity of the MIP was found to come from both the interaction between the analyte and the carboxyl groups on the MIP and the similarity between the stereo structure of the imprinted molecule and the MIP. Thermal analysis showed that the MIP had high thermal stability, with an initial thermal decomposition temperature of 320 degrees C. The pore volume of the MIP was 0.1849 mL/g, the specific surface area was 126.84 m2/g, and the average pore diameter was 5.8 nm. Scanning electron microscopy showed that the MIP had a perfect spherical morphology.

  16. #9 Using STEREO/SDO Data to Model Space Weather

    NASA Video Gallery

    The critical observations from STEREO and SDO will help provide accurate and timely space weather storm warnings, and will aid greatly in our efforts to protect the technologies we have become so d...

  17. Loudness in listening to music with portable headphone stereos.

    PubMed

    Kageyama, T

    1999-04-01

    The usual listening levels of music (loudness) using portable headphone stereos were measured for 46 young volunteers. Loudness was associated with sex, Extraversion scores, a subjective mental health state, and impression of the music.

  18. NASA Vision

    NASA Technical Reports Server (NTRS)

    Fenton, Mary (Editor); Wood, Jennifer (Editor)

    2003-01-01

    This newsletter contains several articles, primarily on International Space Station (ISS) crewmembers and their activities, as well as the activities of NASA administrators. Other subjects covered in the articles include the investigation of the Space Shuttle Columbia accident, activities at NASA centers, Mars exploration, and a collision avoidance test on an unmanned aerial vehicle (UAV). The ISS articles cover landing in a Soyuz capsule, photography from the ISS, and the Expedition Seven crew.

  19. Development of image processing LSI "SuperVchip" for real-time vision systems

    NASA Astrophysics Data System (ADS)

    Muramatsu, Shoji; Kobayashi, Yoshiki; Otsuka, Yasuo; Shojima, Hiroshi; Tsutsumi, Takayuki; Imai, Toshihiko; Yamada, Shigeyoshi

    2002-03-01

    A new image processing LSI, SuperVchip, with high-performance computing power has been developed. The SuperVchip provides powerful capabilities for vision systems, as follows: 1. general image processing with 3x3, 5x5 and 7x7 kernels for high-speed filtering; 2. 16 parallel gray-search engine units for robust template matching; 3. 49 block-matching PEs to calculate the sum of absolute differences in parallel for the stereo vision function; 4. a color extraction unit for color object recognition. The SuperVchip also has peripheral vision-system functions, such as a video interface, a PCI extended interface, a RISC engine interface and an image memory controller, on a chip. Therefore, small, high-performance vision systems can be realized with the SuperVchip. In this paper, the above circuits are presented, and the architecture of a vision device equipped with the SuperVchip and its performance are also described.

  20. Solar Energetic Particles within the STEREO era: 2007-2012

    NASA Astrophysics Data System (ADS)

    Papaioannou, A.; Malandraki, O. E.; Heber, B.; Dresing, N.; Klein, K. L.; Vainio, R.; Rodriguez-Gasen, R.; Klassen, A.; Gomez-Herrero, R.; Vilmer, N.; Mewaldt, R. A.; Tziotziou, K.; Tsiropoula, G.

    2013-09-01

    STEREO (Solar TErrestrial RElations Observatory) recordings provide an unprecedented opportunity to identify the evolution of Solar Energetic Particles (SEPs) at different observing points in the heliosphere, which is expected to provide new insight into the physics of solar particle genesis, propagation and acceleration, as well as into the properties of the interplanetary magnetic field that control these acceleration and propagation processes. In this work, two instruments on board STEREO have been used to identify all SEP events observed within the rising phase of solar cycle 24 from 2007 to 2011, namely the Low Energy Telescope (LET) and the Solar Electron Proton Telescope (SEPT). A scan over STEREO/LET protons within the energy range 6-10 MeV has been performed for each of the two STEREO spacecraft. We have tracked all enhancements observed above the background level of this particular channel and cross-checked them against available lists of STEREO ICMEs, SIRs and shocks, as well as against events reported in the literature. Furthermore, a parallel scan of STEREO/SEPT electrons in the energy range of 55-85 keV was performed to pinpoint the presence (or absence) of an electron event for all of the aforementioned proton events included in our lists. We provide the onset of all events for both protons and electrons, a time-shifting analysis for near-relativistic electrons that leads to the inferred solar release time, and the relevant solar associations, from radio spectrographs (Nancay Decametric Array; STEREO/WAVES) to GOES soft X-rays and coronal mass ejections spotted by both SOHO/LASCO and the STEREO coronagraphs.

  1. Development of a stereo 3-D pictorial primary flight display

    NASA Technical Reports Server (NTRS)

    Nataupsky, Mark; Turner, Timothy L.; Lane, Harold; Crittenden, Lucille

    1989-01-01

    Computer-generated displays are becoming increasingly popular in aerospace applications. The use of stereo 3-D technology provides an opportunity to present depth perception that otherwise might be lacking. In addition, the third dimension could also be used as an additional dimension along which information can be encoded. Historically, stereo 3-D displays have been used in entertainment, in experimental facilities, and in the handling of hazardous waste. In the last example, the source of the stereo images generally has been remotely controlled television camera pairs. The development of a stereo 3-D pictorial primary flight display used in a flight simulation environment is described, and the applicability of stereo 3-D displays for aerospace crew stations to meet the anticipated needs of the 2000 to 2020 time frame is investigated. Although the actual equipment that could be used in an aerospace vehicle is not currently available, lab research is necessary to determine where stereo 3-D enhances the display of information and how the displays should be formatted.

  2. Hybrid machine vision method for autonomous guided vehicles

    NASA Astrophysics Data System (ADS)

    Lu, Jian; Hamajima, Kyoko; Ishihara, Koji

    2003-05-01

    As a prospective intelligent sensing method for Autonomous Guided Vehicles (AGVs), machine vision is expected to have the balanced ability of covering a large space while also recognizing the details of important objects. For this purpose, the hybrid machine vision method proposed here combines a stereo vision method with a traditional 2D method: the former implements coarse recognition to extract objects over a large space, and the latter implements fine recognition on sub-areas corresponding to important and/or special objects. This paper is mainly about the coarse recognition. To extract objects in the coarse recognition stage, the disparity image calculated according to the stereo vision principle is segmented in two consecutive steps of region expansion and convex splitting. Then 3D measurement of the rough positions and sizes of the extracted objects is performed according to the disparity information of the corresponding segmentation, and is used for recognizing the objects' attributes by means of pattern learning/recognition. The resulting attribute information is further used to assist fine recognition, either by performing gaze control to acquire a suitable image of the object of interest or by directly controlling the AGV's travel. In our example AGV application, navigation signs are introduced to indicate the travel route. When the attribute shows that the object is a navigation sign, the 3D measurement is used to gaze at the sign, in order for the fine recognition to analyze its specific meaning by means of the traditional 2D method.

  3. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces, and the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated using the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera in different poses.
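
    The EPI depth step can be illustrated compactly: intensity is constant along a line in the EPI whose slope encodes disparity, so the gradients satisfy gx*d + gs = 0 pointwise and a smoothed least-squares slope estimate follows directly. A minimal sketch, with axis conventions assumed (this is the generic estimator, not necessarily the paper's variant):

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def epi_disparity(epi, sigma=1.5, eps=1e-6):
            # epi: (views, width) slice. Along a line of slope d (pixels of
            # shift per view step) the intensity is constant, so gs = -d*gx
            # and a smoothed estimate is d = -<gx*gs> / <gx*gx>.
            f = epi.astype(float)
            gx = sobel(f, axis=1)     # derivative along image width
            gs = sobel(f, axis=0)     # derivative across views
            Jxx = gaussian_filter(gx * gx, sigma)
            Jxs = gaussian_filter(gx * gs, sigma)
            return -Jxs / (Jxx + eps)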

  4. Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras

    NASA Astrophysics Data System (ADS)

    El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid

    2015-03-01

    In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, consisting of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental: at each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the matching of interest points between the inserted image and the previous image. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. At first, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this kind of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
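
    The triangulation step used throughout such pipelines is the standard linear (DLT) construction: each view contributes two rows to a homogeneous system whose null vector is the 3D point. A minimal two-view version, for reference:

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # P1, P2: 3x4 projection matrices; x1, x2: matched pixels (u, v).
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)   # null vector of A minimizes |A X|
            X = Vt[-1]
            return X[:3] / X[3]           # de-homogenize to a 3D point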

  5. Local Surface Reconstruction from MER images using Stereo Workstation

    NASA Astrophysics Data System (ADS)

    Shin, Dongjoe; Muller, Jan-Peter

    2010-05-01

    The authors present a semi-automatic workflow that reconstructs the 3D shape of the Martian surface from local stereo images delivered by the Pancam or Navcam instruments on the NASA Mars Exploration Rover (MER) mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, followed by tiepoint refinement, stereo matching using region growing, and Levenberg-Marquardt algorithm (LMA)-based bundle adjustment. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in Java, and the remaining processing blocks used in the reconstruction workflow have also been developed as a Java package to increase code re-usability, portability, and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow includes a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm, so that initial tiepoints can be refined to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from accuracy, the other criterion for assessing the quality of a reconstruction is its density (or completeness), which the refinement process does not address. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL
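
    A tiepoint refinement step of this kind takes an integer-pixel match and either polishes it to sub-pixel precision or rejects it. The sketch below substitutes plain normalized cross-correlation with parabolic sub-pixel interpolation for the actual ALSC algorithm (window size, search radius, and acceptance threshold are assumptions; points are assumed to lie away from image borders):

    ```python
    import numpy as np

    def refine_tiepoint(left, right, pt_l, pt_r, half=7, search=3, min_ncc=0.7):
        """Refine an initial tiepoint to sub-pixel precision, or reject it.

        left, right : grayscale images as float arrays
        pt_l, pt_r  : initial integer (row, col) match from the workstation
        Returns refined (row, col) in the right image, or None if rejected.
        """
        def patch(img, r, c):
            return img[r - half:r + half + 1, c - half:c + half + 1]

        def ncc(a, b):
            a = a - a.mean(); b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            return (a * b).sum() / denom

        tmpl = patch(left, pt_l[0], pt_l[1])
        size = 2 * search + 1
        scores = np.full((size, size), -1.0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                cand = patch(right, pt_r[0] + dr, pt_r[1] + dc)
                scores[dr + search, dc + search] = ncc(tmpl, cand)

        r0, c0 = np.unravel_index(scores.argmax(), scores.shape)
        if scores[r0, c0] < min_ncc:
            return None                       # reject weak correspondences

        def subpixel(s, i):                   # 1D parabolic peak interpolation
            if 0 < i < len(s) - 1:
                denom = s[i - 1] - 2 * s[i] + s[i + 1]
                if denom != 0:
                    return i + 0.5 * (s[i - 1] - s[i + 1]) / denom
            return float(i)

        dr = subpixel(scores[:, c0], r0) - search
        dc = subpixel(scores[r0, :], c0) - search
        return (pt_r[0] + dr, pt_r[1] + dc)
    ```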

  6. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State-of-the-art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the calibration to a single procedure, we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy while additionally reducing the overall calibration complexity.

  7. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    PubMed

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts such as 'active vision' to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model in robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
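
    Requirements 4-6 suggest a very small computational core: bottom-up and top-down maps converge on one priority map, a threshold gates the saccade, and inhibition of return suppresses recently fixated locations. The following is an illustrative toy sketch of that combination, not the authors' implementation; all names and parameter values are assumptions:

    ```python
    import numpy as np

    def select_saccade(saliency, relevance, inhibition, threshold=0.5):
        """Toy priority-map model of overt attention selection.

        saliency   : bottom-up map (e.g. contrast/motion), values in [0, 1]
        relevance  : top-down map of task relevance (excitation/inhibition ratio)
        inhibition : inhibition-of-return map of recently fixated locations
        Returns the (row, col) saccade target, or None if nothing crosses threshold.
        """
        priority = saliency * relevance * (1.0 - inhibition)  # convergence point
        target = np.unravel_index(priority.argmax(), priority.shape)
        if priority[target] < threshold:      # threshold function gating the saccade
            return None
        return target

    def update_inhibition(inhibition, target, radius=5, decay=0.9):
        """Medium-term inhibition of return: suppress the fixated region, decay the rest."""
        inhibition *= decay
        r, c = target
        rr, cc = np.ogrid[:inhibition.shape[0], :inhibition.shape[1]]
        inhibition[(rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2] = 1.0
        return inhibition
    ```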

  8. The Medawar Lecture 2001 Knowledge for vision: vision for knowledge

    PubMed Central

    Gregory, Richard L

    2005-01-01

    An evolutionary development of perception is suggested—from passive reception to active perception to explicit conception—earlier stages being largely retained and incorporated in later species. A key is innate and then individually learned knowledge, giving meaning to sensory signals. Inappropriate or misapplied knowledge produces rich cognitive phenomena of illusions, revealing normally hidden processes of vision, tentatively classified here in a ‘peeriodic table’. Phenomena of physiology are distinguished from phenomena of general rules and specific object knowledge. It is concluded that vision uses implicit knowledge, and provides knowledge for intelligent behaviour and for explicit conceptual understanding including science. PMID:16147519

  9. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-01

    We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors through co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images in different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
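
    The final two sentences describe selecting or blending the one-dimensional light field images to set an effective stereo baseline. A toy sketch of the blending idea (assuming a Lambertian scene, small inter-view baselines, and a list of horizontally ordered sub-images; all names are illustrative, not the sensor's actual readout):

    ```python
    import numpy as np

    def synthesize_view(views, position):
        """Blend horizontally ordered light field images to emulate a virtual
        viewpoint at fractional `position` (0 = leftmost, len(views)-1 = rightmost).
        Linear blending of the two nearest views approximates an intermediate
        viewpoint when the inter-view baseline is small."""
        i = int(np.clip(np.floor(position), 0, len(views) - 2))
        w = position - i
        return (1.0 - w) * views[i] + w * views[i + 1]

    # A stereo pair with tunable baseline b, centred in the main-lens aperture
    # of a quad-ocular sensor (views indexed 0..3):
    # left  = synthesize_view(views, 1.5 - b / 2)
    # right = synthesize_view(views, 1.5 + b / 2)
    ```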

  10. Eyeglasses for Vision Correction

    MedlinePlus

    Wearing eyeglasses is an easy way to correct refractive errors. Improving your vision with eyeglasses offers the opportunity to select from ...

  11. Chemicals Industry Vision

    SciTech Connect

    none,

    1996-12-01

    Chemical industry leaders articulated a long-term vision for the industry, its markets, and its technology in the groundbreaking 1996 document Technology Vision 2020 - The U.S. Chemical Industry.

  12. Observations of energetic particles with STEREO: events with large longitudinal spread

    NASA Astrophysics Data System (ADS)

    Dresing, Nina; Droege, Wolfgang; Kartavykh, Yulia; Klassen, Andreas; Malandraki, Olga; Gomez-Herrero, Raul; Heber, Bernd

    The two STEREO spacecraft follow Earth-like orbits around the Sun, with their longitudinal separation from Earth increasing by ~22 degrees per year. A 360-degree view of the Sun was achieved in February 2011, providing multi-point in-situ and remote-sensing observations of unprecedented quality. Together with near-Earth measurements, the STEREO spacecraft form an optimal platform to study solar energetic particles (SEPs) and their longitudinal variations with minimal radial gradient effects. By the time solar activity finally began to rise after the very deep minimum, in 2010 to 2011, the STEREO spacecraft had reached a sufficient longitudinal separation to detect and investigate events with large longitudinal spreads. The mechanisms producing these unexpectedly wide particle spreads are the subject of current research. Comprehensive observations and modeling tools are put forth to disentangle source and transport processes. The efficiency of perpendicular diffusion in the interplanetary medium versus coronal transport, as well as the role of coronal shocks, EUV waves, and CMEs, will be discussed.

  13. Solar origin of in-situ near-relativistic electron spikes observed with SEPT/STEREO

    NASA Astrophysics Data System (ADS)

    Klassen, A.; Gómez-Herrero, R.; Heber, B.; Kartavykh, Y.; Dröge, W.; Klein, K.-L.

    2012-06-01

    During 2010-2011 the Solar Electron Proton Telescope (SEPT) onboard the twin STEREO spacecraft detected a number of typical impulsive electron events showing a prompt intensity onset followed by a long decay, as well as several near-relativistic so-called electron spike events. These spikes are characterized by a very short duration of below 10-20 min at FWHM, almost symmetric time profiles, velocity dispersion, and strong anisotropy, revealing very weak scattering during particle propagation from the Sun to STEREO. Spikes are detected at energies below 300 keV and appear simultaneously with type III radio bursts detected by SWAVES/STEREO and narrow EUV jets in active regions. Using particle, EUV, and radio imaging observations, we found that the near-relativistic electrons were accelerated simultaneously with, and at the same location as, the electrons emitting the accompanying type III radio bursts, together with the coronal EUV jets. Furthermore, the sources of the type III radio bursts match very well the locations and trajectories of the associated EUV jet. Applying a particle propagation model, we demonstrate that the spike characteristics reflect both properties of the accelerator and effects of interplanetary propagation.

  14. THREE-DIMENSIONAL RECONSTRUCTION OF AN ERUPTING FILAMENT WITH SOLAR DYNAMICS OBSERVATORY AND STEREO OBSERVATIONS

    SciTech Connect

    Li Ting; Zhang Jun; Zhang Yuzong; Yang Shuhong E-mail: zjun@nao.cas.cn

    2011-09-20

    On 2010 August 1, a global solar event was launched involving almost the entire Earth-facing side of the Sun. This event mainly consisted of a C3.2 flare, a polar crown filament eruption, and two Earth-directed coronal mass ejections. The observations from the Solar Dynamics Observatory (SDO) and STEREO showed that all the activities were coupled together, suggesting a global character of the magnetic eruption. We reconstruct the three-dimensional geometry of the polar crown filament using observations from three different viewpoints (STEREO A, STEREO B, and SDO) for the first time. The filament undergoes two eruption processes. First, the main body of the filament rises up, while it also moves toward the low-latitude region with a change in inclination by ~48° and expands only in the altitudinal and latitudinal direction in the field of view of the Atmospheric Imaging Assembly. We investigate the true velocities and accelerations of different locations along the filament and find that the highest location always has the largest acceleration during this eruption process. During the late phase of the first eruption, part of the filament material separates from the eastern leg. This material displays a projectile motion and moves toward the west at a constant velocity of 141.8 km s⁻¹. This may imply that the polar crown filament consists of at least two groups of magnetic systems.

  15. In situ Observations of CIRs on STEREO, Wind, and ACE During 2007 - 2008

    NASA Astrophysics Data System (ADS)

    Mason, G. M.; Desai, M. I.; Mall, U.; Korth, A.; Bucik, R.; von Rosenvinge, T. T.; Simunac, K. D.

    2009-05-01

    During the 2007 and 2008 solar minimum period, STEREO, Wind, and ACE observed numerous Corotating Interaction Regions (CIRs) over spatial separations that began with all the spacecraft close to Earth, through STEREO separation angles of ~80 degrees in the fall of 2008. Over 35 CIR events were of sufficient intensity to allow measurement of He and heavy ion spectra using the IMPACT/SIT, EPACT/STEP and ACE/ULEIS instruments on STEREO, Wind, and ACE, respectively. In addition to differences between the spacecraft expected on the basis of simple corotation, we observed several events where there were markedly different time-intensity profiles from one spacecraft to the next. By comparing the energetic particle intensities and spectral shapes along with solar wind speed we examine the extent to which these differences are due to temporal evolution of the CIR or due to variations in connection to a relatively stable interaction region. Comparing CIRs in the 1996 - 1997 solar minimum period vs. 2007 - 2008, we find that the 2007 - 2008 period had many more CIRs, reflecting the presence of more high-speed solar wind streams, whereas 1997 had almost no CIR activity.

  16. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can easily be segmented out in the stereo disparity map; if the motorcyclist is detected through 3D body recognition, a motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interference of background noise from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
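
    Segmenting a close foreground object such as the rider out of a disparity map can be illustrated with stock OpenCV components; the following is a generic sketch rather than the FHWA system's actual pipeline, and the disparity threshold is an assumption:

    ```python
    import cv2
    import numpy as np

    def rider_mask(left_gray, right_gray, min_disparity_px=32):
        """Compute a disparity map from a rectified 8-bit grayscale stereo pair
        and keep the near-field pixels, where a motorcyclist would appear."""
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        # StereoBM returns fixed-point disparities scaled by 16.
        disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        # Large disparity = close to the camera; threshold out the background.
        mask = (disp > min_disparity_px).astype(np.uint8) * 255
        # Clean up speckle before any downstream 3D body recognition.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        return mask
    ```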

  18. Acute Vision Loss.

    PubMed

    Bagheri, Nika; Mehta, Sonia

    2015-09-01

    Acute vision loss can be transient (lasting <24 hours) or persistent (lasting >24 hours). When patients present with acute vision loss, it is important to ascertain the duration of vision loss and whether it is a unilateral process affecting one eye or a bilateral process affecting both eyes. This article focuses on causes of acute vision loss in the nontraumatic setting and provides management pearls to help health care providers better triage these patients. PMID:26319342

  19. Antenna Technology and other Radio Frequency (RF) Communications Activities at the Glenn Research Center in Support of NASA's Exploration Vision

    NASA Technical Reports Server (NTRS)

    Miranda, Felix A.

    2007-01-01

    NASA's Vision for Space Exploration outlines a very ambitious program for the next several decades of the Space Agency's endeavors. Ahead are the completion of the International Space Station (ISS); safely flying the shuttle (STS) until 2010; developing and flying the Crew Exploration Vehicle (Orion) no later than 2014; returning to the Moon no later than 2020; extending human presence across the solar system and beyond; implementing a sustainable and affordable human and robotic program; developing supporting innovative technologies, knowledge, and infrastructure; and promoting international and commercial participation in exploration. To achieve these goals, a series of enabling technologies must be developed or matured in a timely manner. Some of these technologies are: spacecraft RF technology (e.g., high-power sources and large antennas which, using surface receive arrays, can deliver up to 1 Gbps from Mars), uplink arraying (reducing reliance on large ground-based antennas with their high operation costs and single point of failure; enabling greater data rates or greater effective distance; scalable, evolvable, flexible scheduling), software-defined radio (i.e., reconfigurable, flexible interoperability allowing in-flight updates and an open architecture; reduces mass, power, and volume), and optical communications (high-capacity communications with low mass/power requirements; significantly increases data rates for deep space). This presentation will discuss some of the work being performed at the NASA Glenn Research Center, Cleveland, Ohio, in antenna technology as well as other ongoing RF communications efforts.

  20. Real-time registration of video with ultrasound using stereo disparity

    NASA Astrophysics Data System (ADS)

    Wang, Jihang; Horvath, Samantha; Stetten, George; Siegel, Mel; Galeotti, John

    2012-02-01

    Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator, the Sonic Flashlight, which uses a half-silvered mirror and miniature display mounted on an ultrasound probe to produce a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion approach within the ultrasound machine itself by, in effect, giving vision to the transducer. Our embodiment of this concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does today.
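
    Locating the phantom's surface from the two probe-mounted cameras is, at its core, rectified two-view stereo. A generic sketch using OpenCV (not the authors' software; the camera parameters f, B, cx, cy are placeholders for a calibrated, rectified rig):

    ```python
    import cv2
    import numpy as np

    def surface_points(left_gray, right_gray, f, B, cx, cy):
        """Recover 3D points on the scanned surface from two probe-mounted
        cameras (rectified pair), for registration with ultrasound data."""
        stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                       blockSize=7)
        disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        # Standard 4x4 reprojection matrix for a rectified pair with
        # focal length f (pixels), baseline B, and principal point (cx, cy);
        # it maps (x, y, d, 1) to ((x-cx)B/d, (y-cy)B/d, fB/d).
        Q = np.float32([[1, 0, 0, -cx],
                        [0, 1, 0, -cy],
                        [0, 0, 0,  f],
                        [0, 0, 1.0 / B, 0]])
        pts = cv2.reprojectImageTo3D(disp, Q)   # HxWx3 metric coordinates
        return pts[disp > 0]                    # keep valid matches only
    ```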